Nov 29 07:39:13 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 29 07:39:13 crc restorecon[4686]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 29 07:39:13 crc restorecon[4686]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc 
restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:39:13 crc 
restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 29 
07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 
crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 
07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 29 07:39:13 crc 
restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc 
restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc 
restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 29 07:39:13 crc restorecon[4686]:
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 
29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 
crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc 
restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc 
restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc 
restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:39:13 crc 
restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:39:13 crc restorecon[4686]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 07:39:13 crc restorecon[4686]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 07:39:13 crc restorecon[4686]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 29 07:39:14 crc kubenswrapper[4795]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 29 07:39:14 crc kubenswrapper[4795]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 29 07:39:14 crc kubenswrapper[4795]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 29 07:39:14 crc kubenswrapper[4795]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 29 07:39:14 crc kubenswrapper[4795]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 29 07:39:14 crc kubenswrapper[4795]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.135173 4795 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.137962 4795 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.137982 4795 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.137987 4795 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.137993 4795 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.137999 4795 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138005 4795 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138012 4795 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138017 4795 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138022 4795 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 
07:39:14.138027 4795 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138033 4795 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138040 4795 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138046 4795 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138051 4795 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138056 4795 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138062 4795 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138067 4795 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138071 4795 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138078 4795 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138082 4795 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138087 4795 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138091 4795 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138095 4795 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138099 4795 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138102 4795 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138106 4795 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138110 4795 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138114 4795 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138118 4795 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138123 4795 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138127 4795 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138133 4795 feature_gate.go:330] unrecognized feature gate: Example Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138138 4795 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138143 4795 feature_gate.go:330] 
unrecognized feature gate: AlibabaPlatform Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138148 4795 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138154 4795 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138159 4795 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138164 4795 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138169 4795 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138174 4795 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138178 4795 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138183 4795 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138188 4795 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138191 4795 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138195 4795 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138198 4795 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138202 4795 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138206 4795 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 29 07:39:14 
crc kubenswrapper[4795]: W1129 07:39:14.138210 4795 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138215 4795 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138220 4795 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138224 4795 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138230 4795 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138236 4795 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138250 4795 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138254 4795 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138260 4795 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138266 4795 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138270 4795 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138275 4795 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138279 4795 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138284 4795 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 29 07:39:14 crc 
kubenswrapper[4795]: W1129 07:39:14.138288 4795 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138294 4795 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138301 4795 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138306 4795 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138310 4795 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138315 4795 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138320 4795 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138324 4795 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.138329 4795 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138434 4795 flags.go:64] FLAG: --address="0.0.0.0" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138445 4795 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138455 4795 flags.go:64] FLAG: --anonymous-auth="true" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138462 4795 flags.go:64] FLAG: --application-metrics-count-limit="100" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138469 4795 flags.go:64] FLAG: --authentication-token-webhook="false" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138474 4795 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Nov 29 07:39:14 
crc kubenswrapper[4795]: I1129 07:39:14.138481 4795 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138488 4795 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138493 4795 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138498 4795 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138504 4795 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138509 4795 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138514 4795 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138519 4795 flags.go:64] FLAG: --cgroup-root="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138524 4795 flags.go:64] FLAG: --cgroups-per-qos="true" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138529 4795 flags.go:64] FLAG: --client-ca-file="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138534 4795 flags.go:64] FLAG: --cloud-config="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138540 4795 flags.go:64] FLAG: --cloud-provider="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138545 4795 flags.go:64] FLAG: --cluster-dns="[]" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138558 4795 flags.go:64] FLAG: --cluster-domain="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138563 4795 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138569 4795 flags.go:64] FLAG: --config-dir="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138575 4795 flags.go:64] FLAG: 
--container-hints="/etc/cadvisor/container_hints.json" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138581 4795 flags.go:64] FLAG: --container-log-max-files="5" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138605 4795 flags.go:64] FLAG: --container-log-max-size="10Mi" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138610 4795 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138616 4795 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138621 4795 flags.go:64] FLAG: --containerd-namespace="k8s.io" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138626 4795 flags.go:64] FLAG: --contention-profiling="false" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138631 4795 flags.go:64] FLAG: --cpu-cfs-quota="true" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138636 4795 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138641 4795 flags.go:64] FLAG: --cpu-manager-policy="none" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138646 4795 flags.go:64] FLAG: --cpu-manager-policy-options="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138653 4795 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138658 4795 flags.go:64] FLAG: --enable-controller-attach-detach="true" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138663 4795 flags.go:64] FLAG: --enable-debugging-handlers="true" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138668 4795 flags.go:64] FLAG: --enable-load-reader="false" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138674 4795 flags.go:64] FLAG: --enable-server="true" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138678 4795 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Nov 29 
07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138686 4795 flags.go:64] FLAG: --event-burst="100" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138692 4795 flags.go:64] FLAG: --event-qps="50" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138697 4795 flags.go:64] FLAG: --event-storage-age-limit="default=0" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138704 4795 flags.go:64] FLAG: --event-storage-event-limit="default=0" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138709 4795 flags.go:64] FLAG: --eviction-hard="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138717 4795 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138723 4795 flags.go:64] FLAG: --eviction-minimum-reclaim="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138729 4795 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138735 4795 flags.go:64] FLAG: --eviction-soft="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138740 4795 flags.go:64] FLAG: --eviction-soft-grace-period="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138746 4795 flags.go:64] FLAG: --exit-on-lock-contention="false" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138752 4795 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138758 4795 flags.go:64] FLAG: --experimental-mounter-path="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138763 4795 flags.go:64] FLAG: --fail-cgroupv1="false" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138768 4795 flags.go:64] FLAG: --fail-swap-on="true" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138773 4795 flags.go:64] FLAG: --feature-gates="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138780 4795 flags.go:64] FLAG: --file-check-frequency="20s" Nov 29 07:39:14 crc 
kubenswrapper[4795]: I1129 07:39:14.138785 4795 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138791 4795 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138796 4795 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138801 4795 flags.go:64] FLAG: --healthz-port="10248" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138806 4795 flags.go:64] FLAG: --help="false" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138812 4795 flags.go:64] FLAG: --hostname-override="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138816 4795 flags.go:64] FLAG: --housekeeping-interval="10s" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138822 4795 flags.go:64] FLAG: --http-check-frequency="20s" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138827 4795 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138832 4795 flags.go:64] FLAG: --image-credential-provider-config="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138837 4795 flags.go:64] FLAG: --image-gc-high-threshold="85" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138842 4795 flags.go:64] FLAG: --image-gc-low-threshold="80" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138847 4795 flags.go:64] FLAG: --image-service-endpoint="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138851 4795 flags.go:64] FLAG: --kernel-memcg-notification="false" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138856 4795 flags.go:64] FLAG: --kube-api-burst="100" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138861 4795 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138867 4795 flags.go:64] FLAG: --kube-api-qps="50" Nov 
29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138871 4795 flags.go:64] FLAG: --kube-reserved="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138876 4795 flags.go:64] FLAG: --kube-reserved-cgroup="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138881 4795 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138885 4795 flags.go:64] FLAG: --kubelet-cgroups="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138890 4795 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138895 4795 flags.go:64] FLAG: --lock-file="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138899 4795 flags.go:64] FLAG: --log-cadvisor-usage="false" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138905 4795 flags.go:64] FLAG: --log-flush-frequency="5s" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138909 4795 flags.go:64] FLAG: --log-json-info-buffer-size="0" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138917 4795 flags.go:64] FLAG: --log-json-split-stream="false" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138922 4795 flags.go:64] FLAG: --log-text-info-buffer-size="0" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138926 4795 flags.go:64] FLAG: --log-text-split-stream="false" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138930 4795 flags.go:64] FLAG: --logging-format="text" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138934 4795 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138938 4795 flags.go:64] FLAG: --make-iptables-util-chains="true" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138942 4795 flags.go:64] FLAG: --manifest-url="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138946 4795 flags.go:64] FLAG: --manifest-url-header="" Nov 29 
07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138952 4795 flags.go:64] FLAG: --max-housekeeping-interval="15s" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138957 4795 flags.go:64] FLAG: --max-open-files="1000000" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138962 4795 flags.go:64] FLAG: --max-pods="110" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138966 4795 flags.go:64] FLAG: --maximum-dead-containers="-1" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138970 4795 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138974 4795 flags.go:64] FLAG: --memory-manager-policy="None" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138978 4795 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138982 4795 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138986 4795 flags.go:64] FLAG: --node-ip="192.168.126.11" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.138990 4795 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139001 4795 flags.go:64] FLAG: --node-status-max-images="50" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139005 4795 flags.go:64] FLAG: --node-status-update-frequency="10s" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139009 4795 flags.go:64] FLAG: --oom-score-adj="-999" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139014 4795 flags.go:64] FLAG: --pod-cidr="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139017 4795 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Nov 29 07:39:14 crc kubenswrapper[4795]: 
I1129 07:39:14.139024 4795 flags.go:64] FLAG: --pod-manifest-path="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139028 4795 flags.go:64] FLAG: --pod-max-pids="-1" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139032 4795 flags.go:64] FLAG: --pods-per-core="0" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139036 4795 flags.go:64] FLAG: --port="10250" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139040 4795 flags.go:64] FLAG: --protect-kernel-defaults="false" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139044 4795 flags.go:64] FLAG: --provider-id="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139049 4795 flags.go:64] FLAG: --qos-reserved="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139053 4795 flags.go:64] FLAG: --read-only-port="10255" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139058 4795 flags.go:64] FLAG: --register-node="true" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139062 4795 flags.go:64] FLAG: --register-schedulable="true" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139067 4795 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139074 4795 flags.go:64] FLAG: --registry-burst="10" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139078 4795 flags.go:64] FLAG: --registry-qps="5" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139083 4795 flags.go:64] FLAG: --reserved-cpus="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139086 4795 flags.go:64] FLAG: --reserved-memory="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139091 4795 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139095 4795 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139099 4795 flags.go:64] FLAG: --rotate-certificates="false" Nov 29 
07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139104 4795 flags.go:64] FLAG: --rotate-server-certificates="false" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139108 4795 flags.go:64] FLAG: --runonce="false" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139113 4795 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139117 4795 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139122 4795 flags.go:64] FLAG: --seccomp-default="false" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139127 4795 flags.go:64] FLAG: --serialize-image-pulls="true" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139131 4795 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139135 4795 flags.go:64] FLAG: --storage-driver-db="cadvisor" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139139 4795 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139143 4795 flags.go:64] FLAG: --storage-driver-password="root" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139147 4795 flags.go:64] FLAG: --storage-driver-secure="false" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139152 4795 flags.go:64] FLAG: --storage-driver-table="stats" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139156 4795 flags.go:64] FLAG: --storage-driver-user="root" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139160 4795 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139165 4795 flags.go:64] FLAG: --sync-frequency="1m0s" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139169 4795 flags.go:64] FLAG: --system-cgroups="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139173 4795 flags.go:64] FLAG: 
--system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139180 4795 flags.go:64] FLAG: --system-reserved-cgroup="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139184 4795 flags.go:64] FLAG: --tls-cert-file="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139187 4795 flags.go:64] FLAG: --tls-cipher-suites="[]" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139192 4795 flags.go:64] FLAG: --tls-min-version="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139197 4795 flags.go:64] FLAG: --tls-private-key-file="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139201 4795 flags.go:64] FLAG: --topology-manager-policy="none" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139205 4795 flags.go:64] FLAG: --topology-manager-policy-options="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139209 4795 flags.go:64] FLAG: --topology-manager-scope="container" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139213 4795 flags.go:64] FLAG: --v="2" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139219 4795 flags.go:64] FLAG: --version="false" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139224 4795 flags.go:64] FLAG: --vmodule="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139229 4795 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139233 4795 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139336 4795 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139342 4795 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139347 4795 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139351 4795 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139355 4795 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139358 4795 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139362 4795 feature_gate.go:330] unrecognized feature gate: Example Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139365 4795 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139369 4795 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139372 4795 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139376 4795 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139380 4795 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139384 4795 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139388 4795 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139391 4795 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139394 4795 feature_gate.go:330] unrecognized feature 
gate: InsightsConfig Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139398 4795 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139402 4795 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139405 4795 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139409 4795 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139412 4795 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139416 4795 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139419 4795 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139423 4795 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139426 4795 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139430 4795 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139434 4795 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139438 4795 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139442 4795 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139445 4795 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 29 07:39:14 crc 
kubenswrapper[4795]: W1129 07:39:14.139449 4795 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139453 4795 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139456 4795 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139460 4795 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139463 4795 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139467 4795 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139470 4795 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139475 4795 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139481 4795 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139486 4795 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139490 4795 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139495 4795 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139500 4795 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139505 4795 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139508 4795 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139513 4795 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139521 4795 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139525 4795 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139529 4795 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139533 4795 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139537 4795 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139541 4795 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139545 4795 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139550 4795 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139554 4795 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139560 4795 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139564 4795 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139569 4795 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139573 4795 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139577 4795 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139581 4795 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139600 4795 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139604 4795 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139607 4795 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139611 4795 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139615 4795 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139618 4795 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139621 4795 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139625 4795 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139628 4795 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.139632 4795 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.139774 4795 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.146837 4795 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.146867 4795 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147129 4795 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147151 4795 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147157 4795 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147161 4795 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147165 4795 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147170 4795 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147174 4795 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147179 4795 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147183 4795 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147187 4795 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147190 4795 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147194 4795 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147197 4795 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147201 4795 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147205 4795 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147209 4795 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147215 4795 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147221 4795 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147225 4795 feature_gate.go:330] unrecognized feature gate: Example
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147230 4795 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147234 4795 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147238 4795 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147242 4795 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147247 4795 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147251 4795 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147255 4795 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147259 4795 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147263 4795 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147268 4795 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147272 4795 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147276 4795 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147280 4795 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147284 4795 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147291 4795 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147298 4795 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147303 4795 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147308 4795 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147315 4795 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147320 4795 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147326 4795 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147332 4795 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147338 4795 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147343 4795 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147347 4795 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147353 4795 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147358 4795 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147363 4795 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147368 4795 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147372 4795 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147377 4795 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147382 4795 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147387 4795 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147392 4795 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147396 4795 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147400 4795 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147405 4795 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147409 4795 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147413 4795 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147418 4795 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147422 4795 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147425 4795 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147429 4795 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147432 4795 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147435 4795 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147439 4795 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147443 4795 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147446 4795 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147451 4795 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147456 4795 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147460 4795 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147465 4795 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.147473 4795 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147620 4795 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147630 4795 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147635 4795 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147640 4795 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147645 4795 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147650 4795 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147656 4795 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147662 4795 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147668 4795 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147672 4795 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147676 4795 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147681 4795 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147686 4795 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147690 4795 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147694 4795 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147699 4795 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147703 4795 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147708 4795 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147712 4795 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147717 4795 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147721 4795 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147725 4795 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147730 4795 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147734 4795 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147739 4795 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147745 4795 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147751 4795 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147755 4795 feature_gate.go:330] unrecognized feature gate: Example
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147760 4795 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147766 4795 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147770 4795 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147775 4795 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147780 4795 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147785 4795 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147790 4795 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147795 4795 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147800 4795 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147805 4795 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147810 4795 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147816 4795 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147821 4795 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147826 4795 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147830 4795 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147835 4795 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147840 4795 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147845 4795 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147849 4795 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147853 4795 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147858 4795 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147862 4795 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147867 4795 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147872 4795 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147877 4795 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147881 4795 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147886 4795 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147891 4795 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147896 4795 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147901 4795 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147905 4795 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147911 4795 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147918 4795 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147924 4795 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147930 4795 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147935 4795 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147939 4795 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147944 4795 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147949 4795 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147953 4795 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147957 4795 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147962 4795 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.147966 4795 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.147974 4795 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.148362 4795 server.go:940] "Client rotation is on, will bootstrap in background"
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.151258 4795 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.151551 4795 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.152342 4795 server.go:997] "Starting client certificate rotation"
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.152372 4795 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.155060 4795 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-17 02:52:23.428225587 +0000 UTC
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.155207 4795 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 1171h13m9.273026709s for next certificate rotation
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.162975 4795 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.165135 4795 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.174244 4795 log.go:25] "Validated CRI v1 runtime API"
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.193309 4795 log.go:25] "Validated CRI v1 image API"
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.194742 4795 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.197258 4795 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-29-07-35-01-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.197292 4795 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.210446 4795 manager.go:217] Machine: {Timestamp:2025-11-29 07:39:14.208699315 +0000 UTC m=+0.184275125 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:bd085386-a70e-485f-9f18-00b3aef4bcca BootID:8cd0c036-22eb-405e-b57e-ec0c4424780e Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:9d:72:fa Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:9d:72:fa Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:b9:d4:d1 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:2e:e9:d9 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:ae:93:60 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:81:38:8f Speed:-1 Mtu:1496} {Name:eth10 MacAddress:c2:80:f1:17:57:21 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:d2:d6:3b:9c:6a:cf Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.210701 4795 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.210868 4795 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.211867 4795 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.212036 4795 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.212070 4795 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.212301 4795 topology_manager.go:138] "Creating topology manager with none policy"
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.212310 4795 container_manager_linux.go:303] "Creating device plugin manager"
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.212486 4795 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.212520 4795 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.212733 4795 state_mem.go:36] "Initialized new in-memory state store"
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.212812 4795 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.213618 4795 kubelet.go:418] "Attempting to sync node with API server"
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.213664 4795 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.213721 4795 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.213734 4795 kubelet.go:324] "Adding apiserver pod source"
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.213745 4795 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.216223 4795 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.216751 4795 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.218493 4795 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.218620 4795 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused
Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.218616 4795 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused
Nov 29 07:39:14 crc kubenswrapper[4795]: E1129 07:39:14.218726 4795 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError"
Nov 29 07:39:14 crc kubenswrapper[4795]: E1129 07:39:14.218728 4795 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError"
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.219398 4795 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.219421 4795 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.219429 4795 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.219437 4795 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.219448 4795 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.219456 4795 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.219463 4795 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.219474 4795 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.219482 4795 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.219490 4795 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.219501 4795 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.219507 4795 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.219720 4795 
plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.220145 4795 server.go:1280] "Started kubelet" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.220346 4795 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.220389 4795 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.220798 4795 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.220911 4795 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 29 07:39:14 crc systemd[1]: Started Kubernetes Kubelet. 
Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.222309 4795 server.go:460] "Adding debug handlers to kubelet server" Nov 29 07:39:14 crc kubenswrapper[4795]: E1129 07:39:14.222499 4795 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.107:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187c6a3aaa4b9173 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-29 07:39:14.220118387 +0000 UTC m=+0.195694177,LastTimestamp:2025-11-29 07:39:14.220118387 +0000 UTC m=+0.195694177,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.224211 4795 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.224634 4795 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.224726 4795 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 08:06:10.034367564 +0000 UTC Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.224757 4795 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 432h26m55.809612305s for next certificate rotation Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.224871 4795 volume_manager.go:287] "The desired_state_of_world populator starts" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.224884 4795 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 29 07:39:14 crc 
kubenswrapper[4795]: I1129 07:39:14.224958 4795 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 29 07:39:14 crc kubenswrapper[4795]: E1129 07:39:14.225382 4795 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.225445 4795 factory.go:55] Registering systemd factory Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.225463 4795 factory.go:221] Registration of the systemd container factory successfully Nov 29 07:39:14 crc kubenswrapper[4795]: E1129 07:39:14.226024 4795 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="200ms" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.226121 4795 factory.go:153] Registering CRI-O factory Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.226133 4795 factory.go:221] Registration of the crio container factory successfully Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.226188 4795 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.226209 4795 factory.go:103] Registering Raw factory Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.226229 4795 manager.go:1196] Started watching for new ooms in manager Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.226042 4795 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Nov 
29 07:39:14 crc kubenswrapper[4795]: E1129 07:39:14.226436 4795 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.226942 4795 manager.go:319] Starting recovery of all containers Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.245904 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.245971 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.245983 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.245992 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246000 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246010 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246021 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246030 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246040 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246050 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246059 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246068 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246076 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246086 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246095 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246102 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246116 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246124 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246134 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246144 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246152 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246161 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246170 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246180 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246188 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246197 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246207 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246238 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246248 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" 
seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246257 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246266 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246275 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246285 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246295 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246304 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 
07:39:14.246313 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246322 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246332 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246341 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246350 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246360 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246369 4795 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246395 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246405 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246413 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246423 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246433 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246442 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246453 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246461 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246473 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246486 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246503 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246518 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" 
volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246531 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246547 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246561 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246654 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246673 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246683 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246692 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246702 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246710 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246722 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246733 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246742 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" 
seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246752 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246761 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246770 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246779 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246790 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246799 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246810 4795 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246820 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246831 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246840 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246848 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246858 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246866 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246873 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246649 4795 manager.go:324] Recovery completed Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.246883 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247032 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247041 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247049 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247058 4795 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247066 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247077 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247088 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247096 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247107 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247116 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247125 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247133 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247143 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247151 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247161 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247171 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247182 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247191 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247200 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247209 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247218 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247227 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247237 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247260 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247273 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247282 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247292 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247302 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247312 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247321 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247332 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247342 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247399 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247410 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247418 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247427 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247437 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247448 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247456 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247465 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247476 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247486 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247495 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247503 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247511 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247519 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247527 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247537 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247546 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247555 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247564 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247573 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" 
seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247581 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247609 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247618 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247626 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247636 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247644 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 
07:39:14.247652 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247661 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247670 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247682 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247694 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247706 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247718 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247730 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247743 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247755 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247768 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247780 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247789 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247797 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247808 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247818 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247828 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247837 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247846 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" 
seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247854 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.247864 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248438 4795 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248458 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248469 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248479 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248488 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248497 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248506 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248514 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248522 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248530 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" 
seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248547 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248557 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248565 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248573 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248581 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248604 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248614 4795 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248623 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248631 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248638 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248646 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248654 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248662 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248670 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248678 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248687 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248696 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248705 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248714 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248722 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248731 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248740 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248749 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248758 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248767 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248775 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248784 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248793 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248810 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248821 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248834 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248846 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248858 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248870 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248884 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248896 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248910 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248923 4795 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248934 4795 reconstruct.go:97] "Volume reconstruction finished" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.248943 4795 reconciler.go:26] "Reconciler: start to sync state" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.262309 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.266323 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.266509 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.266522 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.268502 4795 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.268757 4795 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.268810 4795 state_mem.go:36] "Initialized new in-memory state store" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.271286 4795 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.274372 4795 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.274410 4795 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.274439 4795 kubelet.go:2335] "Starting kubelet main sync loop" Nov 29 07:39:14 crc kubenswrapper[4795]: E1129 07:39:14.274485 4795 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.277834 4795 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Nov 29 07:39:14 crc kubenswrapper[4795]: E1129 07:39:14.277892 4795 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.282064 4795 policy_none.go:49] "None policy: Start" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.284277 4795 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.284326 4795 state_mem.go:35] "Initializing new in-memory state store" Nov 29 07:39:14 crc kubenswrapper[4795]: E1129 07:39:14.326555 4795 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.347039 4795 manager.go:334] "Starting Device Plugin manager" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.347191 4795 manager.go:513] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.347205 4795 server.go:79] "Starting device plugin registration server" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.347641 4795 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.347663 4795 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.347881 4795 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.347983 4795 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.347998 4795 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 29 07:39:14 crc kubenswrapper[4795]: E1129 07:39:14.358207 4795 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.374747 4795 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.374894 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.376401 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.376443 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.376454 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.376650 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.376778 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.376835 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.377509 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.377524 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.377539 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.377558 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.377543 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.377616 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.377756 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.377957 4795 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.378027 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.378469 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.378500 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.378514 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.378623 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.378707 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.378747 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.379464 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.379479 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.379479 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.379487 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.379495 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.379503 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.379654 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.379655 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.379706 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.379716 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:14 crc kubenswrapper[4795]: 
I1129 07:39:14.379733 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.379759 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.380209 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.380249 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.380257 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.380351 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.380372 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.380380 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.380403 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.380426 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.381164 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.381197 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.381209 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:14 crc kubenswrapper[4795]: E1129 07:39:14.426829 4795 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="400ms" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.447953 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.449328 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.449354 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.449363 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.449385 4795 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 29 07:39:14 crc kubenswrapper[4795]: E1129 07:39:14.449864 4795 
kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.107:6443: connect: connection refused" node="crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.450543 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.450568 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.450601 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.450623 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.450644 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.450661 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.450767 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.450904 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.450998 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.451046 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod 
\"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.451134 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.451183 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.451217 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.451252 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.451280 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.552735 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.552790 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.552816 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.552834 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.552852 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 
07:39:14.552867 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.552885 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.552899 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.552917 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.552933 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.552951 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.552973 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.552960 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.553008 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.553060 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.553071 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: 
I1129 07:39:14.553040 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.553092 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.553113 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.553117 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.553134 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.553136 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.552991 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.553201 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.553218 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.553231 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.553008 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.553292 4795 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.553146 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.553392 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.650387 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.651803 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.651847 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.651856 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.651909 4795 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 29 07:39:14 crc kubenswrapper[4795]: E1129 07:39:14.652448 4795 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.107:6443: connect: connection refused" node="crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.704224 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.727330 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.744177 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-9ba389dafb633a466e23aba15a503f52f7cca0664b781c19700a8c213a7aeb46 WatchSource:0}: Error finding container 9ba389dafb633a466e23aba15a503f52f7cca0664b781c19700a8c213a7aeb46: Status 404 returned error can't find the container with id 9ba389dafb633a466e23aba15a503f52f7cca0664b781c19700a8c213a7aeb46 Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.748707 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.755552 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.758671 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-d9d86320279e052249fd201fc03f1e6db8564084e2f88e2676b8d227a1b83737 WatchSource:0}: Error finding container d9d86320279e052249fd201fc03f1e6db8564084e2f88e2676b8d227a1b83737: Status 404 returned error can't find the container with id d9d86320279e052249fd201fc03f1e6db8564084e2f88e2676b8d227a1b83737 Nov 29 07:39:14 crc kubenswrapper[4795]: I1129 07:39:14.760721 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.768280 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-d00e99be9b633f1240d1c39a5d8afb903e0ba98190e678d809ffc29d17537e31 WatchSource:0}: Error finding container d00e99be9b633f1240d1c39a5d8afb903e0ba98190e678d809ffc29d17537e31: Status 404 returned error can't find the container with id d00e99be9b633f1240d1c39a5d8afb903e0ba98190e678d809ffc29d17537e31 Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.775180 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-3f06a88946da0e7002af7683d86e7e70c30af720e837eb907c9a464563eb1efa WatchSource:0}: Error finding container 3f06a88946da0e7002af7683d86e7e70c30af720e837eb907c9a464563eb1efa: Status 404 returned error can't find the container with id 3f06a88946da0e7002af7683d86e7e70c30af720e837eb907c9a464563eb1efa Nov 29 07:39:14 crc kubenswrapper[4795]: W1129 07:39:14.785130 4795 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-e5ecffa6b8d2ecef80d972c230714f7022b45a8f28e882c829782e0aec948a95 WatchSource:0}: Error finding container e5ecffa6b8d2ecef80d972c230714f7022b45a8f28e882c829782e0aec948a95: Status 404 returned error can't find the container with id e5ecffa6b8d2ecef80d972c230714f7022b45a8f28e882c829782e0aec948a95 Nov 29 07:39:14 crc kubenswrapper[4795]: E1129 07:39:14.827863 4795 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="800ms" Nov 29 07:39:15 crc kubenswrapper[4795]: W1129 07:39:15.032502 4795 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Nov 29 07:39:15 crc kubenswrapper[4795]: E1129 07:39:15.032606 4795 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Nov 29 07:39:15 crc kubenswrapper[4795]: I1129 07:39:15.052852 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:15 crc kubenswrapper[4795]: I1129 07:39:15.054986 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:15 crc kubenswrapper[4795]: I1129 07:39:15.055033 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 29 07:39:15 crc kubenswrapper[4795]: I1129 07:39:15.055046 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:15 crc kubenswrapper[4795]: I1129 07:39:15.055073 4795 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 29 07:39:15 crc kubenswrapper[4795]: E1129 07:39:15.055551 4795 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.107:6443: connect: connection refused" node="crc" Nov 29 07:39:15 crc kubenswrapper[4795]: W1129 07:39:15.174325 4795 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Nov 29 07:39:15 crc kubenswrapper[4795]: E1129 07:39:15.174415 4795 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Nov 29 07:39:15 crc kubenswrapper[4795]: I1129 07:39:15.221646 4795 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Nov 29 07:39:15 crc kubenswrapper[4795]: I1129 07:39:15.279798 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3bcaeb06b249f6fa81deaf9f997ca41c9dd7f3568d458072208901c04543e272"} Nov 29 07:39:15 crc 
kubenswrapper[4795]: I1129 07:39:15.279978 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e5ecffa6b8d2ecef80d972c230714f7022b45a8f28e882c829782e0aec948a95"} Nov 29 07:39:15 crc kubenswrapper[4795]: I1129 07:39:15.280138 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:15 crc kubenswrapper[4795]: I1129 07:39:15.281845 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:15 crc kubenswrapper[4795]: I1129 07:39:15.281889 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:15 crc kubenswrapper[4795]: I1129 07:39:15.281902 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:15 crc kubenswrapper[4795]: I1129 07:39:15.287308 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"508f7cb88a5b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c"} Nov 29 07:39:15 crc kubenswrapper[4795]: I1129 07:39:15.287381 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3f06a88946da0e7002af7683d86e7e70c30af720e837eb907c9a464563eb1efa"} Nov 29 07:39:15 crc kubenswrapper[4795]: I1129 07:39:15.289662 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893"} Nov 29 07:39:15 crc 
kubenswrapper[4795]: I1129 07:39:15.289689 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d00e99be9b633f1240d1c39a5d8afb903e0ba98190e678d809ffc29d17537e31"} Nov 29 07:39:15 crc kubenswrapper[4795]: I1129 07:39:15.289803 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:15 crc kubenswrapper[4795]: I1129 07:39:15.290840 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:15 crc kubenswrapper[4795]: I1129 07:39:15.290904 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:15 crc kubenswrapper[4795]: I1129 07:39:15.290921 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:15 crc kubenswrapper[4795]: I1129 07:39:15.292030 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"58a3a0c3fb40d0360031381d74dd8cf8e38feb80490e90537ae43ad92d16d169"} Nov 29 07:39:15 crc kubenswrapper[4795]: I1129 07:39:15.292067 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d9d86320279e052249fd201fc03f1e6db8564084e2f88e2676b8d227a1b83737"} Nov 29 07:39:15 crc kubenswrapper[4795]: I1129 07:39:15.292195 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:15 crc kubenswrapper[4795]: I1129 07:39:15.292881 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:15 crc kubenswrapper[4795]: I1129 07:39:15.293118 4795 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:15 crc kubenswrapper[4795]: I1129 07:39:15.293160 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:15 crc kubenswrapper[4795]: I1129 07:39:15.293174 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:15 crc kubenswrapper[4795]: I1129 07:39:15.293828 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:15 crc kubenswrapper[4795]: I1129 07:39:15.293868 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:15 crc kubenswrapper[4795]: I1129 07:39:15.293881 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:15 crc kubenswrapper[4795]: I1129 07:39:15.293993 4795 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="4c9db4ec27597862bfc36c6a62900a3aa639ebb33d7c7212d449cc85a321d60d" exitCode=0 Nov 29 07:39:15 crc kubenswrapper[4795]: I1129 07:39:15.294045 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"4c9db4ec27597862bfc36c6a62900a3aa639ebb33d7c7212d449cc85a321d60d"} Nov 29 07:39:15 crc kubenswrapper[4795]: I1129 07:39:15.294075 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"9ba389dafb633a466e23aba15a503f52f7cca0664b781c19700a8c213a7aeb46"} Nov 29 07:39:15 crc kubenswrapper[4795]: I1129 07:39:15.294165 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Nov 29 07:39:15 crc kubenswrapper[4795]: I1129 07:39:15.294786 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:15 crc kubenswrapper[4795]: I1129 07:39:15.294821 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:15 crc kubenswrapper[4795]: I1129 07:39:15.294835 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:15 crc kubenswrapper[4795]: W1129 07:39:15.499246 4795 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Nov 29 07:39:15 crc kubenswrapper[4795]: E1129 07:39:15.499347 4795 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Nov 29 07:39:15 crc kubenswrapper[4795]: E1129 07:39:15.629718 4795 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="1.6s" Nov 29 07:39:15 crc kubenswrapper[4795]: W1129 07:39:15.743496 4795 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Nov 29 07:39:15 crc kubenswrapper[4795]: E1129 07:39:15.743620 4795 reflector.go:158] 
"Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Nov 29 07:39:15 crc kubenswrapper[4795]: I1129 07:39:15.856189 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:15 crc kubenswrapper[4795]: I1129 07:39:15.858158 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:15 crc kubenswrapper[4795]: I1129 07:39:15.858224 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:15 crc kubenswrapper[4795]: I1129 07:39:15.858244 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:15 crc kubenswrapper[4795]: I1129 07:39:15.858284 4795 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 29 07:39:15 crc kubenswrapper[4795]: E1129 07:39:15.859070 4795 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.107:6443: connect: connection refused" node="crc" Nov 29 07:39:16 crc kubenswrapper[4795]: I1129 07:39:16.303116 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"62dc6b7572b93a8e91f4dc1404de860af85b5e37d2c1a802e4a481b01bfd892c"} Nov 29 07:39:16 crc kubenswrapper[4795]: I1129 07:39:16.303275 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:16 crc kubenswrapper[4795]: I1129 07:39:16.304395 4795 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:16 crc kubenswrapper[4795]: I1129 07:39:16.304439 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:16 crc kubenswrapper[4795]: I1129 07:39:16.304452 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:16 crc kubenswrapper[4795]: I1129 07:39:16.306060 4795 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="3bcaeb06b249f6fa81deaf9f997ca41c9dd7f3568d458072208901c04543e272" exitCode=0 Nov 29 07:39:16 crc kubenswrapper[4795]: I1129 07:39:16.306140 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"3bcaeb06b249f6fa81deaf9f997ca41c9dd7f3568d458072208901c04543e272"} Nov 29 07:39:16 crc kubenswrapper[4795]: I1129 07:39:16.306305 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:16 crc kubenswrapper[4795]: I1129 07:39:16.307218 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:16 crc kubenswrapper[4795]: I1129 07:39:16.307241 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:16 crc kubenswrapper[4795]: I1129 07:39:16.307253 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:16 crc kubenswrapper[4795]: I1129 07:39:16.331110 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b06202234f21aa27a3dc67978cc9f82738d7526100bce3fd640be08131413f69"} 
Nov 29 07:39:16 crc kubenswrapper[4795]: I1129 07:39:16.331166 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f9f1344bd7c3a1bc53870f7e5f5f2ca4993df112a24aa474655cc07670e22c48"} Nov 29 07:39:16 crc kubenswrapper[4795]: I1129 07:39:16.331183 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5bbfc45764b29055dcafdee2e3b77893c35b23cfa3eaa9125f01850baef2fd5c"} Nov 29 07:39:16 crc kubenswrapper[4795]: I1129 07:39:16.331132 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:16 crc kubenswrapper[4795]: I1129 07:39:16.332165 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:16 crc kubenswrapper[4795]: I1129 07:39:16.332220 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:16 crc kubenswrapper[4795]: I1129 07:39:16.332233 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:16 crc kubenswrapper[4795]: I1129 07:39:16.334169 4795 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893" exitCode=0 Nov 29 07:39:16 crc kubenswrapper[4795]: I1129 07:39:16.334246 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893"} Nov 29 07:39:16 crc kubenswrapper[4795]: I1129 07:39:16.334275 4795 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00"} Nov 29 07:39:16 crc kubenswrapper[4795]: I1129 07:39:16.334289 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a"} Nov 29 07:39:16 crc kubenswrapper[4795]: I1129 07:39:16.334299 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44"} Nov 29 07:39:16 crc kubenswrapper[4795]: I1129 07:39:16.334309 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a"} Nov 29 07:39:16 crc kubenswrapper[4795]: I1129 07:39:16.336323 4795 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="58a3a0c3fb40d0360031381d74dd8cf8e38feb80490e90537ae43ad92d16d169" exitCode=0 Nov 29 07:39:16 crc kubenswrapper[4795]: I1129 07:39:16.336366 4795 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="44cfddf7d0206e96ce2881b5822fb58c61f3e601b71dccb2df8a8d49feb467ab" exitCode=0 Nov 29 07:39:16 crc kubenswrapper[4795]: I1129 07:39:16.336396 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"58a3a0c3fb40d0360031381d74dd8cf8e38feb80490e90537ae43ad92d16d169"} Nov 29 07:39:16 crc 
kubenswrapper[4795]: I1129 07:39:16.336446 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"44cfddf7d0206e96ce2881b5822fb58c61f3e601b71dccb2df8a8d49feb467ab"} Nov 29 07:39:16 crc kubenswrapper[4795]: I1129 07:39:16.336583 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:16 crc kubenswrapper[4795]: I1129 07:39:16.337343 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:16 crc kubenswrapper[4795]: I1129 07:39:16.337386 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:16 crc kubenswrapper[4795]: I1129 07:39:16.337396 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:17 crc kubenswrapper[4795]: I1129 07:39:17.341158 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0c9d7937efcaeb684eb229e396c7896a805f1bf205bda1cd1b8113aeda2bea5b"} Nov 29 07:39:17 crc kubenswrapper[4795]: I1129 07:39:17.341209 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e34ebb0f2c455d2133d2fcf00d19e74d539e80523f2ed0b2ccc1f8eabcf2cb9b"} Nov 29 07:39:17 crc kubenswrapper[4795]: I1129 07:39:17.341223 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"98152568d086a30f06fcd1535110482b356c587d906973de9baedd65fc85b6bf"} Nov 29 07:39:17 crc kubenswrapper[4795]: I1129 07:39:17.341361 4795 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:17 crc kubenswrapper[4795]: I1129 07:39:17.342615 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:17 crc kubenswrapper[4795]: I1129 07:39:17.342648 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:17 crc kubenswrapper[4795]: I1129 07:39:17.342664 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:17 crc kubenswrapper[4795]: I1129 07:39:17.346650 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402"} Nov 29 07:39:17 crc kubenswrapper[4795]: I1129 07:39:17.346688 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:17 crc kubenswrapper[4795]: I1129 07:39:17.347656 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:17 crc kubenswrapper[4795]: I1129 07:39:17.347679 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:17 crc kubenswrapper[4795]: I1129 07:39:17.347688 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:17 crc kubenswrapper[4795]: I1129 07:39:17.349330 4795 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="46167c1b7b3d8e33ae3aaeaf4f29d3020419f0bb4a392e8141efcccf0e317768" exitCode=0 Nov 29 07:39:17 crc kubenswrapper[4795]: I1129 07:39:17.349409 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"46167c1b7b3d8e33ae3aaeaf4f29d3020419f0bb4a392e8141efcccf0e317768"} Nov 29 07:39:17 crc kubenswrapper[4795]: I1129 07:39:17.349417 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:17 crc kubenswrapper[4795]: I1129 07:39:17.349529 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:17 crc kubenswrapper[4795]: I1129 07:39:17.350166 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:17 crc kubenswrapper[4795]: I1129 07:39:17.350191 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:17 crc kubenswrapper[4795]: I1129 07:39:17.350200 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:17 crc kubenswrapper[4795]: I1129 07:39:17.350503 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:17 crc kubenswrapper[4795]: I1129 07:39:17.350534 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:17 crc kubenswrapper[4795]: I1129 07:39:17.350542 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:17 crc kubenswrapper[4795]: I1129 07:39:17.459642 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:17 crc kubenswrapper[4795]: I1129 07:39:17.461277 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:17 crc kubenswrapper[4795]: I1129 07:39:17.461316 4795 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:17 crc kubenswrapper[4795]: I1129 07:39:17.461358 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:17 crc kubenswrapper[4795]: I1129 07:39:17.461395 4795 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 29 07:39:18 crc kubenswrapper[4795]: I1129 07:39:18.355837 4795 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 07:39:18 crc kubenswrapper[4795]: I1129 07:39:18.355890 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:18 crc kubenswrapper[4795]: I1129 07:39:18.356364 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2a111538ce811520a1b9e67a4a1503e55f0af2544d20569e9925dc3d5e4056fe"} Nov 29 07:39:18 crc kubenswrapper[4795]: I1129 07:39:18.356394 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5706e8daf35ce41f20a60c4bbb0e07944118c5b65e48dd1f43e5ceb0eb2ff801"} Nov 29 07:39:18 crc kubenswrapper[4795]: I1129 07:39:18.356405 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"107ce25c5a8e8a94233aec8fbb730818ec0d2291234024dbb3e00e157093bb66"} Nov 29 07:39:18 crc kubenswrapper[4795]: I1129 07:39:18.356413 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"dd660f0d8f3d9191078115f7b2a274154dbcc32f8e50c2c4084ba5ba05c24414"} Nov 29 07:39:18 crc kubenswrapper[4795]: I1129 07:39:18.356420 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"30b6480e3c3f22abb90885ad542d89de6e6d921f8aa673a4c3d2bf03bf7ff8d8"} Nov 29 07:39:18 crc kubenswrapper[4795]: I1129 07:39:18.356486 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:18 crc kubenswrapper[4795]: I1129 07:39:18.356805 4795 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 07:39:18 crc kubenswrapper[4795]: I1129 07:39:18.356833 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:18 crc kubenswrapper[4795]: I1129 07:39:18.357055 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:18 crc kubenswrapper[4795]: I1129 07:39:18.357095 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:18 crc kubenswrapper[4795]: I1129 07:39:18.357108 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:18 crc kubenswrapper[4795]: I1129 07:39:18.357228 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:18 crc kubenswrapper[4795]: I1129 07:39:18.357257 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:18 crc kubenswrapper[4795]: I1129 07:39:18.357267 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:18 crc kubenswrapper[4795]: I1129 07:39:18.357925 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:18 crc kubenswrapper[4795]: I1129 07:39:18.357941 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Nov 29 07:39:18 crc kubenswrapper[4795]: I1129 07:39:18.357950 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:18 crc kubenswrapper[4795]: I1129 07:39:18.631451 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 29 07:39:18 crc kubenswrapper[4795]: I1129 07:39:18.636722 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:39:18 crc kubenswrapper[4795]: I1129 07:39:18.637017 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:18 crc kubenswrapper[4795]: I1129 07:39:18.638933 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:18 crc kubenswrapper[4795]: I1129 07:39:18.639000 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:18 crc kubenswrapper[4795]: I1129 07:39:18.639015 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:18 crc kubenswrapper[4795]: I1129 07:39:18.659394 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:39:19 crc kubenswrapper[4795]: I1129 07:39:19.358914 4795 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 07:39:19 crc kubenswrapper[4795]: I1129 07:39:19.358970 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:19 crc kubenswrapper[4795]: I1129 07:39:19.359007 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:19 crc kubenswrapper[4795]: I1129 07:39:19.360859 4795 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:19 crc kubenswrapper[4795]: I1129 07:39:19.360923 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:19 crc kubenswrapper[4795]: I1129 07:39:19.360860 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:19 crc kubenswrapper[4795]: I1129 07:39:19.360980 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:19 crc kubenswrapper[4795]: I1129 07:39:19.361009 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:19 crc kubenswrapper[4795]: I1129 07:39:19.360944 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:20 crc kubenswrapper[4795]: I1129 07:39:20.258678 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:39:20 crc kubenswrapper[4795]: I1129 07:39:20.361578 4795 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 07:39:20 crc kubenswrapper[4795]: I1129 07:39:20.368819 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:20 crc kubenswrapper[4795]: I1129 07:39:20.378710 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:20 crc kubenswrapper[4795]: I1129 07:39:20.378770 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:20 crc kubenswrapper[4795]: I1129 07:39:20.378783 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:20 crc kubenswrapper[4795]: I1129 07:39:20.398947 4795 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 29 07:39:20 crc kubenswrapper[4795]: I1129 07:39:20.399175 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:20 crc kubenswrapper[4795]: I1129 07:39:20.400470 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:20 crc kubenswrapper[4795]: I1129 07:39:20.400539 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:20 crc kubenswrapper[4795]: I1129 07:39:20.400562 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:21 crc kubenswrapper[4795]: I1129 07:39:21.893334 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:39:21 crc kubenswrapper[4795]: I1129 07:39:21.893773 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:21 crc kubenswrapper[4795]: I1129 07:39:21.895238 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:21 crc kubenswrapper[4795]: I1129 07:39:21.895270 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:21 crc kubenswrapper[4795]: I1129 07:39:21.895279 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:23 crc kubenswrapper[4795]: I1129 07:39:23.320837 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 07:39:23 crc kubenswrapper[4795]: I1129 07:39:23.321028 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Nov 29 07:39:23 crc kubenswrapper[4795]: I1129 07:39:23.322196 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:23 crc kubenswrapper[4795]: I1129 07:39:23.322233 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:23 crc kubenswrapper[4795]: I1129 07:39:23.322246 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:23 crc kubenswrapper[4795]: I1129 07:39:23.424756 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:39:23 crc kubenswrapper[4795]: I1129 07:39:23.424938 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:23 crc kubenswrapper[4795]: I1129 07:39:23.426432 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:23 crc kubenswrapper[4795]: I1129 07:39:23.426489 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:23 crc kubenswrapper[4795]: I1129 07:39:23.426507 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:23 crc kubenswrapper[4795]: I1129 07:39:23.582031 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:39:23 crc kubenswrapper[4795]: I1129 07:39:23.586612 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:39:23 crc kubenswrapper[4795]: I1129 07:39:23.827319 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:39:23 crc kubenswrapper[4795]: I1129 07:39:23.827532 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:23 crc kubenswrapper[4795]: I1129 07:39:23.828751 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:23 crc kubenswrapper[4795]: I1129 07:39:23.828785 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:23 crc kubenswrapper[4795]: I1129 07:39:23.828799 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:24 crc kubenswrapper[4795]: E1129 07:39:24.358397 4795 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 29 07:39:24 crc kubenswrapper[4795]: I1129 07:39:24.371033 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:24 crc kubenswrapper[4795]: I1129 07:39:24.372372 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:24 crc kubenswrapper[4795]: I1129 07:39:24.372416 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:24 crc kubenswrapper[4795]: I1129 07:39:24.372430 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:25 crc kubenswrapper[4795]: I1129 07:39:25.372957 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:25 crc kubenswrapper[4795]: I1129 07:39:25.374057 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:25 crc kubenswrapper[4795]: I1129 
07:39:25.374089 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:25 crc kubenswrapper[4795]: I1129 07:39:25.374099 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:25 crc kubenswrapper[4795]: I1129 07:39:25.378607 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:39:26 crc kubenswrapper[4795]: I1129 07:39:26.223525 4795 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Nov 29 07:39:26 crc kubenswrapper[4795]: I1129 07:39:26.375194 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:26 crc kubenswrapper[4795]: I1129 07:39:26.376404 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:26 crc kubenswrapper[4795]: I1129 07:39:26.376448 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:26 crc kubenswrapper[4795]: I1129 07:39:26.376461 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:26 crc kubenswrapper[4795]: I1129 07:39:26.426027 4795 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 29 07:39:26 crc kubenswrapper[4795]: I1129 07:39:26.426134 4795 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 29 07:39:27 crc kubenswrapper[4795]: E1129 07:39:27.230917 4795 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Nov 29 07:39:27 crc kubenswrapper[4795]: E1129 07:39:27.463204 4795 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Nov 29 07:39:28 crc kubenswrapper[4795]: W1129 07:39:28.087829 4795 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 29 07:39:28 crc kubenswrapper[4795]: I1129 07:39:28.087985 4795 trace.go:236] Trace[442790307]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Nov-2025 07:39:18.086) (total time: 10001ms): Nov 29 07:39:28 crc kubenswrapper[4795]: Trace[442790307]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (07:39:28.087) Nov 29 07:39:28 crc kubenswrapper[4795]: Trace[442790307]: [10.001756014s] [10.001756014s] END Nov 29 07:39:28 crc kubenswrapper[4795]: E1129 07:39:28.088024 4795 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: 
Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 29 07:39:28 crc kubenswrapper[4795]: W1129 07:39:28.117084 4795 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 29 07:39:28 crc kubenswrapper[4795]: I1129 07:39:28.117185 4795 trace.go:236] Trace[771884481]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Nov-2025 07:39:18.115) (total time: 10001ms): Nov 29 07:39:28 crc kubenswrapper[4795]: Trace[771884481]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (07:39:28.117) Nov 29 07:39:28 crc kubenswrapper[4795]: Trace[771884481]: [10.001215416s] [10.001215416s] END Nov 29 07:39:28 crc kubenswrapper[4795]: E1129 07:39:28.117209 4795 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 29 07:39:28 crc kubenswrapper[4795]: W1129 07:39:28.320269 4795 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 29 07:39:28 crc kubenswrapper[4795]: I1129 07:39:28.320376 4795 trace.go:236] Trace[781024757]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Nov-2025 07:39:18.319) (total time: 10000ms): Nov 29 07:39:28 crc 
kubenswrapper[4795]: Trace[781024757]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (07:39:28.320) Nov 29 07:39:28 crc kubenswrapper[4795]: Trace[781024757]: [10.000994707s] [10.000994707s] END Nov 29 07:39:28 crc kubenswrapper[4795]: E1129 07:39:28.320401 4795 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 29 07:39:28 crc kubenswrapper[4795]: W1129 07:39:28.768017 4795 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 29 07:39:28 crc kubenswrapper[4795]: I1129 07:39:28.768113 4795 trace.go:236] Trace[476470859]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Nov-2025 07:39:18.766) (total time: 10001ms): Nov 29 07:39:28 crc kubenswrapper[4795]: Trace[476470859]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (07:39:28.768) Nov 29 07:39:28 crc kubenswrapper[4795]: Trace[476470859]: [10.001856813s] [10.001856813s] END Nov 29 07:39:28 crc kubenswrapper[4795]: E1129 07:39:28.768131 4795 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 29 07:39:30 crc kubenswrapper[4795]: I1129 07:39:30.259330 4795 
patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded" start-of-body= Nov 29 07:39:30 crc kubenswrapper[4795]: I1129 07:39:30.259429 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded" Nov 29 07:39:30 crc kubenswrapper[4795]: I1129 07:39:30.427353 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 29 07:39:30 crc kubenswrapper[4795]: I1129 07:39:30.427538 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:30 crc kubenswrapper[4795]: I1129 07:39:30.429141 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:30 crc kubenswrapper[4795]: I1129 07:39:30.429320 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:30 crc kubenswrapper[4795]: I1129 07:39:30.429348 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:30 crc kubenswrapper[4795]: I1129 07:39:30.442646 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 29 07:39:30 crc kubenswrapper[4795]: I1129 07:39:30.663515 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:30 crc kubenswrapper[4795]: I1129 07:39:30.665618 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:30 crc kubenswrapper[4795]: I1129 07:39:30.665675 4795 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:30 crc kubenswrapper[4795]: I1129 07:39:30.665696 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:30 crc kubenswrapper[4795]: I1129 07:39:30.665736 4795 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 29 07:39:31 crc kubenswrapper[4795]: I1129 07:39:31.390737 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:31 crc kubenswrapper[4795]: I1129 07:39:31.392062 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:31 crc kubenswrapper[4795]: I1129 07:39:31.392126 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:31 crc kubenswrapper[4795]: I1129 07:39:31.392141 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:32 crc kubenswrapper[4795]: E1129 07:39:32.358364 4795 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.187c6a3aaa4b9173 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-29 07:39:14.220118387 +0000 UTC m=+0.195694177,LastTimestamp:2025-11-29 07:39:14.220118387 +0000 UTC m=+0.195694177,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 29 07:39:32 crc kubenswrapper[4795]: I1129 07:39:32.383330 4795 
patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 29 07:39:32 crc kubenswrapper[4795]: I1129 07:39:32.383416 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 29 07:39:34 crc kubenswrapper[4795]: E1129 07:39:34.358506 4795 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 29 07:39:35 crc kubenswrapper[4795]: I1129 07:39:35.148859 4795 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 29 07:39:35 crc kubenswrapper[4795]: I1129 07:39:35.265958 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:39:35 crc kubenswrapper[4795]: I1129 07:39:35.266266 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:35 crc kubenswrapper[4795]: I1129 07:39:35.267737 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:35 crc kubenswrapper[4795]: I1129 07:39:35.267780 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:35 crc kubenswrapper[4795]: I1129 07:39:35.267793 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:35 crc kubenswrapper[4795]: I1129 
07:39:35.270713 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:39:35 crc kubenswrapper[4795]: I1129 07:39:35.402826 4795 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 07:39:35 crc kubenswrapper[4795]: I1129 07:39:35.402898 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:35 crc kubenswrapper[4795]: I1129 07:39:35.403780 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:35 crc kubenswrapper[4795]: I1129 07:39:35.403820 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:35 crc kubenswrapper[4795]: I1129 07:39:35.403829 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:36 crc kubenswrapper[4795]: I1129 07:39:36.425480 4795 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded" start-of-body= Nov 29 07:39:36 crc kubenswrapper[4795]: I1129 07:39:36.425582 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded" Nov 29 07:39:37 crc kubenswrapper[4795]: I1129 07:39:37.384907 4795 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 29 07:39:37 crc kubenswrapper[4795]: I1129 07:39:37.385132 4795 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 29 07:39:37 crc 
kubenswrapper[4795]: E1129 07:39:37.386285 4795 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 29 07:39:37 crc kubenswrapper[4795]: I1129 07:39:37.421133 4795 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54640->192.168.126.11:17697: read: connection reset by peer" start-of-body= Nov 29 07:39:37 crc kubenswrapper[4795]: I1129 07:39:37.421201 4795 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54642->192.168.126.11:17697: read: connection reset by peer" start-of-body= Nov 29 07:39:37 crc kubenswrapper[4795]: I1129 07:39:37.421269 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54642->192.168.126.11:17697: read: connection reset by peer" Nov 29 07:39:37 crc kubenswrapper[4795]: I1129 07:39:37.421204 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54640->192.168.126.11:17697: read: connection reset by peer" Nov 29 07:39:37 crc kubenswrapper[4795]: I1129 07:39:37.421585 4795 patch_prober.go:28] interesting pod/kube-apiserver-crc 
container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 29 07:39:37 crc kubenswrapper[4795]: I1129 07:39:37.421621 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 29 07:39:37 crc kubenswrapper[4795]: I1129 07:39:37.699973 4795 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 29 07:39:37 crc kubenswrapper[4795]: I1129 07:39:37.810754 4795 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.226996 4795 apiserver.go:52] "Watching apiserver" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.375107 4795 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.375529 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"] Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.376831 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.376975 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.376978 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.377182 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.377448 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.377474 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 07:39:38 crc kubenswrapper[4795]: E1129 07:39:38.377648 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:39:38 crc kubenswrapper[4795]: E1129 07:39:38.377807 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:39:38 crc kubenswrapper[4795]: E1129 07:39:38.378135 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.379958 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.380867 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.381350 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.381772 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.381901 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.381936 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.382061 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.382788 4795 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.383796 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.412125 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.414630 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402"} Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.414529 4795 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402" exitCode=255 Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.426535 4795 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.492464 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.492519 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 29 07:39:38 crc 
kubenswrapper[4795]: I1129 07:39:38.492540 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.492556 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.492574 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.492906 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.493069 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.493065 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.493131 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.493145 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.493128 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.493169 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.493265 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.493333 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.493372 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.493403 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.493434 4795 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.493461 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.493487 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.493511 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.493544 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.493570 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: 
"marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.493620 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.493647 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.493738 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.493769 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.493793 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 29 07:39:38 crc 
kubenswrapper[4795]: I1129 07:39:38.493823 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.493849 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.493875 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.493905 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.493935 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.493964 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.493993 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494012 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494031 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494049 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494067 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 
07:39:38.494085 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494107 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494128 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494151 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494179 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494200 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494219 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494237 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494256 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494276 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494316 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 29 
07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494334 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494380 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494396 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494411 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494428 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494445 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494462 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494484 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494523 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494545 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494570 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 29 07:39:38 crc 
kubenswrapper[4795]: I1129 07:39:38.494594 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494626 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494645 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494665 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494707 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494765 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: 
\"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494782 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494799 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494818 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494838 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494856 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 29 07:39:38 crc 
kubenswrapper[4795]: I1129 07:39:38.494873 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494889 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494905 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494920 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494939 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494954 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494972 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494990 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495006 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495022 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.493734 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495043 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.493828 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495062 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495085 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495117 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495135 4795 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495159 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495183 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495199 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495217 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495234 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod 
\"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495252 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495269 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495291 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495339 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495361 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495386 4795 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495407 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495431 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495458 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495476 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495494 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: 
\"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495511 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495533 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495557 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495581 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495701 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495723 4795 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495751 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495797 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495826 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495849 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495871 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: 
\"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495889 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495906 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495922 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495939 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495960 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495979 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495999 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.496018 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.496035 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.496054 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.496075 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 
07:39:38.496096 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.496116 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.496139 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.496174 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.496203 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.496225 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod 
\"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.496246 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.496269 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.496291 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.496313 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.496340 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.496367 4795 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.496400 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.496426 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.496455 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.496484 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.496512 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.496544 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.496572 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.496620 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.496652 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.496679 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.496707 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.496736 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.496764 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.496792 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.496819 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.496855 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.496896 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.496920 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.496945 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.496963 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.496982 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497000 4795 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497033 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497053 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497074 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497093 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497114 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497131 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497160 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497190 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497211 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497229 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497255 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497273 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497291 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497309 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497328 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497346 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497366 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497387 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497405 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497421 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497441 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") 
pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497461 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497480 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497499 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497517 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497537 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497556 4795 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497574 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497596 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497632 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497652 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497671 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497692 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497711 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497758 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497780 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497801 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497850 4795 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497869 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497895 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497931 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497959 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497978 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: 
\"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497997 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.493883 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.493941 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494081 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494508 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494552 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494561 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.689745 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.690907 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494789 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494799 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.494821 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.691378 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495023 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495027 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495226 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495404 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495422 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495675 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495686 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.495923 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.496027 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.496116 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.496251 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.496351 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497177 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497431 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497643 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.497882 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.688124 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.688261 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.688245 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.691583 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.691655 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.691820 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.692895 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.693011 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.705227 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.705523 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.705820 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.706046 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.706347 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.706439 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.706441 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.706527 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.706923 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.706931 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.707039 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.707622 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.709140 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.709292 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.709386 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.709592 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.710158 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.710443 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.710503 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.716419 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.716537 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.716546 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.716649 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.716799 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.716965 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.717380 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.717432 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.717536 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.717630 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.717882 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.717838 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.718399 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.718998 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.719192 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.719197 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.719218 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.720049 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.720159 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.720260 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.720555 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.720586 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.720866 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.720964 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.721036 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.720857 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.721270 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.721551 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.721696 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.723105 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.723147 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.723715 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.723849 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.723859 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.723896 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.724363 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.724658 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.724954 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.725063 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.726363 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.728154 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.728438 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.728820 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.729000 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.729181 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.729017 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.729546 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.730259 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.730517 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.730682 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.731289 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.731493 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.731566 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.731727 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.731843 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.731862 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.731972 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.732072 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.732192 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.732906 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.733435 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.733549 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.733684 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.733764 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.733800 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.733409 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.733846 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.734658 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.734723 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.735152 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.735190 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.735540 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.735549 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.735834 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.736080 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.736122 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.736166 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.736632 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.736639 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.736752 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.737978 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.738035 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.738263 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.738416 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.738421 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.738551 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.738785 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.738981 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.739223 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.739328 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.738997 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.739154 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.739259 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.739268 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.740489 4795 scope.go:117] "RemoveContainer" containerID="efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.740998 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.741071 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.741105 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.741124 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.741140 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.741170 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.741212 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.741256 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.741293 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.741518 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:39:38 crc kubenswrapper[4795]: E1129 07:39:38.741982 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:39:39.241933409 +0000 UTC m=+25.217509199 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.742004 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.742287 4795 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.742498 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:39:38 crc kubenswrapper[4795]: E1129 07:39:38.742894 4795 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:39:38 crc kubenswrapper[4795]: E1129 07:39:38.743529 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:39:39.243511832 +0000 UTC m=+25.219087622 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.743105 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.742970 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.743588 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.743654 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.743704 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.743731 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.743880 4795 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.743899 4795 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.743927 4795 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.743942 4795 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.743957 4795 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.743970 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.743981 4795 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744011 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744024 4795 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744035 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744046 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744058 4795 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744088 4795 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744101 4795 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744112 4795 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744124 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744135 4795 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744166 4795 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744178 4795 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744193 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744207 4795 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744219 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744251 4795 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744263 4795 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744274 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744285 4795 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744299 4795 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744330 4795 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744342 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744356 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744368 4795 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744397 4795 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744410 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744423 4795 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744433 4795 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744443 4795 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744453 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744483 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744494 4795 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744505 4795 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744515 4795 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744526 4795 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744552 4795 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744537 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744564 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744672 4795 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744691 4795 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744709 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744727 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744760 4795 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744784 4795 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744849 4795 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744867 4795 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744883 4795 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744898 4795 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744913 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744928 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744943 4795 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744958 4795 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744973 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744987 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.745002 4795 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.745017 4795 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.745032 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.745051 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.745067 4795 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.745080 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.745093 4795 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.745104 4795 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.745118 4795 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.745132 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.745145 4795 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.745157 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.745761 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 29 07:39:38 crc kubenswrapper[4795]: E1129 07:39:38.744673 4795 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 29 07:39:38 crc kubenswrapper[4795]: E1129 07:39:38.747502 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:39:39.247477019 +0000 UTC m=+25.223052879 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744695 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.744952 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.745365 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.747216 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.747292 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.748972 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.752428 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.752468 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.752625 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.752695 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.752724 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.752753 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.753123 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.754678 4795 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.754716 4795 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.754731 4795 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.754746 4795 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.754761 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.754775 4795 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.754795 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.754822 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.754837 4795 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.754850 4795 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.754865 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.754790 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.754878 4795 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.754914 4795 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.754952 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.754967 4795 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.754980 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.754992 4795 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755012 4795 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755025 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755037 4795 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755049 4795 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755061 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755075 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755087 4795 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755100 4795 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755114 4795 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755127 4795 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755157 4795 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755169 4795 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755181 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755193 4795 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755205 4795 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755215 4795 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755226 4795 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755236 4795 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755247 4795 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755257 4795 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755269 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\""
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755281 4795 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node
\"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755291 4795 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755302 4795 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755313 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755325 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755336 4795 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755347 4795 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755359 4795 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" 
Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755369 4795 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755379 4795 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755389 4795 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755400 4795 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755410 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755420 4795 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755430 4795 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755439 4795 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755455 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755473 4795 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755485 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755497 4795 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755509 4795 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755523 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755531 4795 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755540 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755549 4795 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755557 4795 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755565 4795 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755573 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755583 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: E1129 07:39:38.755582 4795 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:39:38 crc kubenswrapper[4795]: 
E1129 07:39:38.755613 4795 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:39:38 crc kubenswrapper[4795]: E1129 07:39:38.755625 4795 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:39:38 crc kubenswrapper[4795]: E1129 07:39:38.755675 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-29 07:39:39.25565861 +0000 UTC m=+25.231234400 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755596 4795 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755700 4795 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755710 4795 reconciler_common.go:293] "Volume detached for 
volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755709 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755718 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755759 4795 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755771 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755781 4795 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755790 4795 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc 
kubenswrapper[4795]: I1129 07:39:38.755800 4795 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755810 4795 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755820 4795 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755832 4795 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755841 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755854 4795 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755863 4795 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755872 4795 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755881 4795 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755890 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755900 4795 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755910 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755920 4795 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.755930 4795 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.756071 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod 
"09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.756280 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.756832 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.757204 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.757277 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.757785 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.759400 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.759432 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: E1129 07:39:38.759744 4795 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:39:38 crc kubenswrapper[4795]: E1129 07:39:38.759770 4795 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:39:38 crc kubenswrapper[4795]: E1129 07:39:38.759784 4795 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:39:38 crc kubenswrapper[4795]: E1129 07:39:38.759857 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-29 07:39:39.259841653 +0000 UTC m=+25.235417533 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.764859 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.765958 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.765962 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.766515 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.767165 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.767372 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.767498 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.767535 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.767616 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.767952 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.768218 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.768316 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.768376 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.768931 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.769310 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.769934 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.774463 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.781686 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.783508 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.797492 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.797752 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.809235 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.820053 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.856465 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.856533 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.856568 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.856579 4795 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.856605 4795 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.856616 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.856624 4795 reconciler_common.go:293] "Volume 
detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.856632 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.856640 4795 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.856649 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.856657 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.856667 4795 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.856675 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.856683 4795 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" 
(UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.856692 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.856699 4795 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.856707 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.856716 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.856725 4795 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.856733 4795 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.856742 4795 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.856750 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.856763 4795 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.856772 4795 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.856781 4795 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.856790 4795 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.856800 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.856810 4795 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 
07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.856822 4795 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.856833 4795 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.856845 4795 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.856853 4795 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.856863 4795 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.856873 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.856881 4795 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.856940 4795 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 29 07:39:38 crc kubenswrapper[4795]: I1129 07:39:38.856976 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 07:39:39 crc kubenswrapper[4795]: I1129 07:39:39.002382 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 29 07:39:39 crc kubenswrapper[4795]: I1129 07:39:39.010887 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 07:39:39 crc kubenswrapper[4795]: I1129 07:39:39.016200 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 07:39:39 crc kubenswrapper[4795]: W1129 07:39:39.018799 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-847a768c48022a899f7d52ba9cf9f73657dadd577e5e465478ccf3e1192d024b WatchSource:0}: Error finding container 847a768c48022a899f7d52ba9cf9f73657dadd577e5e465478ccf3e1192d024b: Status 404 returned error can't find the container with id 847a768c48022a899f7d52ba9cf9f73657dadd577e5e465478ccf3e1192d024b Nov 29 07:39:39 crc kubenswrapper[4795]: W1129 07:39:39.180992 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-85f004c3d0d6a8832ba2f1b162d3adf995be20a1fcc58e5d0e04027516a79b84 WatchSource:0}: Error finding container 85f004c3d0d6a8832ba2f1b162d3adf995be20a1fcc58e5d0e04027516a79b84: Status 404 returned error can't find the container with id 85f004c3d0d6a8832ba2f1b162d3adf995be20a1fcc58e5d0e04027516a79b84 Nov 29 07:39:39 crc kubenswrapper[4795]: W1129 07:39:39.202901 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-a1ae5e0e36cd7dcb92a97cbee253b0a1b406ef0d3ff464ec9214bf36ff77154c WatchSource:0}: Error finding container a1ae5e0e36cd7dcb92a97cbee253b0a1b406ef0d3ff464ec9214bf36ff77154c: Status 404 returned error can't find the container with id a1ae5e0e36cd7dcb92a97cbee253b0a1b406ef0d3ff464ec9214bf36ff77154c Nov 29 07:39:39 crc kubenswrapper[4795]: I1129 07:39:39.260688 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:39:39 crc kubenswrapper[4795]: I1129 07:39:39.260763 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:39:39 crc kubenswrapper[4795]: I1129 07:39:39.260789 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:39:39 crc kubenswrapper[4795]: I1129 07:39:39.260809 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:39:39 crc kubenswrapper[4795]: E1129 07:39:39.260869 4795 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:39:39 crc kubenswrapper[4795]: E1129 07:39:39.260875 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2025-11-29 07:39:40.260844911 +0000 UTC m=+26.236420701 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:39:39 crc kubenswrapper[4795]: E1129 07:39:39.260932 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:39:40.260923913 +0000 UTC m=+26.236499703 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:39:39 crc kubenswrapper[4795]: E1129 07:39:39.260982 4795 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:39:39 crc kubenswrapper[4795]: E1129 07:39:39.260993 4795 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:39:39 crc kubenswrapper[4795]: I1129 07:39:39.260992 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:39:39 crc kubenswrapper[4795]: E1129 07:39:39.260990 4795 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:39:39 crc kubenswrapper[4795]: E1129 07:39:39.261049 4795 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:39:39 crc kubenswrapper[4795]: E1129 07:39:39.261062 4795 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:39:39 crc kubenswrapper[4795]: E1129 07:39:39.261095 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-29 07:39:40.261088697 +0000 UTC m=+26.236664487 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:39:39 crc kubenswrapper[4795]: E1129 07:39:39.261004 4795 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:39:39 crc kubenswrapper[4795]: E1129 07:39:39.261126 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-29 07:39:40.261120888 +0000 UTC m=+26.236696678 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:39:39 crc kubenswrapper[4795]: E1129 07:39:39.261138 4795 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:39:39 crc kubenswrapper[4795]: E1129 07:39:39.261226 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:39:40.261204451 +0000 UTC m=+26.236780261 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:39:39 crc kubenswrapper[4795]: I1129 07:39:39.423385 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 29 07:39:39 crc kubenswrapper[4795]: I1129 07:39:39.426944 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2"} Nov 29 07:39:39 crc kubenswrapper[4795]: I1129 07:39:39.427927 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a1ae5e0e36cd7dcb92a97cbee253b0a1b406ef0d3ff464ec9214bf36ff77154c"} Nov 29 07:39:39 crc kubenswrapper[4795]: I1129 07:39:39.429343 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"85f004c3d0d6a8832ba2f1b162d3adf995be20a1fcc58e5d0e04027516a79b84"} Nov 29 07:39:39 crc kubenswrapper[4795]: I1129 07:39:39.430113 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"847a768c48022a899f7d52ba9cf9f73657dadd577e5e465478ccf3e1192d024b"} Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.221678 4795 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.275126 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.275189 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.275226 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.275205 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:39:40 crc kubenswrapper[4795]: E1129 07:39:40.275294 4795 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:39:40 crc kubenswrapper[4795]: E1129 07:39:40.275321 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-29 07:39:42.275285162 +0000 UTC m=+28.250860952 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:39:40 crc kubenswrapper[4795]: E1129 07:39:40.275370 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:39:42.275360124 +0000 UTC m=+28.250935914 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.275405 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.275463 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.275492 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:39:40 crc kubenswrapper[4795]: E1129 07:39:40.275732 4795 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:39:40 crc kubenswrapper[4795]: E1129 07:39:40.275794 4795 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:39:40 crc kubenswrapper[4795]: E1129 07:39:40.275811 4795 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:39:40 crc kubenswrapper[4795]: E1129 07:39:40.275750 4795 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:39:40 crc kubenswrapper[4795]: E1129 07:39:40.275874 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-29 07:39:42.275850717 +0000 UTC m=+28.251426667 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:39:40 crc kubenswrapper[4795]: E1129 07:39:40.275902 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:39:42.275886788 +0000 UTC m=+28.251462578 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:39:40 crc kubenswrapper[4795]: E1129 07:39:40.276009 4795 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:39:40 crc kubenswrapper[4795]: E1129 07:39:40.276027 4795 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:39:40 crc kubenswrapper[4795]: E1129 07:39:40.276036 4795 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:39:40 crc kubenswrapper[4795]: E1129 07:39:40.276073 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-29 07:39:42.276063943 +0000 UTC m=+28.251639733 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.284414 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:39:40 crc kubenswrapper[4795]: E1129 07:39:40.284609 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:39:40 crc kubenswrapper[4795]: E1129 07:39:40.284960 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:39:40 crc kubenswrapper[4795]: E1129 07:39:40.285246 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.289105 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.289635 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.290949 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.291524 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.292594 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.293257 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.293877 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.295006 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.295768 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.296943 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.297481 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.298575 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.299123 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.299634 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.300533 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.301077 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.302106 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.302479 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.303071 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.304054 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.304519 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.305621 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.306118 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.307242 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.307853 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.308497 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.309668 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.310206 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.311241 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.311793 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.312952 4795 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.313085 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.315040 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.315977 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.316685 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.318153 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.318807 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.319866 4795 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.320463 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.321459 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.321913 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.322907 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.323530 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.324502 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.325127 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.325757 4795 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.326864 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.327637 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.328463 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.328965 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.329453 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.330389 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.330989 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.331857 4795 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.433838 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3"} Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.433885 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76"} Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.436156 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459"} Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.436182 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.497778 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.531438 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.546686 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.567741 4795 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.581076 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.608448 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.638193 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.662437 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.679407 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.699654 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.719849 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.733220 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.748967 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.766680 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.777919 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-bkmq6"] Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.778370 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.778553 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-hbg2m"] Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.778750 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.778963 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-vcd5b"] Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.779101 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-27975"] Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.779392 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-vcd5b" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.779484 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-27975" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.782351 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.782613 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.782665 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.782766 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.782785 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.782675 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.783260 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.783512 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.785225 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.785248 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.785314 4795 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.785998 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.786019 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.786057 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.786917 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.803729 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.816562 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.829985 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.845003 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.875869 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.888828 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p5fv\" (UniqueName: \"kubernetes.io/projected/1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1-kube-api-access-4p5fv\") pod \"machine-config-daemon-bkmq6\" (UID: \"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\") " pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.888890 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-host-run-k8s-cni-cncf-io\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.888909 4795 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ceab872d-7a73-44d5-936e-3dd17facf399-cnibin\") pod \"multus-additional-cni-plugins-27975\" (UID: \"ceab872d-7a73-44d5-936e-3dd17facf399\") " pod="openshift-multus/multus-additional-cni-plugins-27975" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.888937 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-system-cni-dir\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.888954 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-host-var-lib-cni-bin\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.888976 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/50b9c3ea-4ff5-434f-803c-2365a0938f9a-multus-daemon-config\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.888996 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ceab872d-7a73-44d5-936e-3dd17facf399-cni-binary-copy\") pod \"multus-additional-cni-plugins-27975\" (UID: \"ceab872d-7a73-44d5-936e-3dd17facf399\") " pod="openshift-multus/multus-additional-cni-plugins-27975" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.889078 4795 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ceab872d-7a73-44d5-936e-3dd17facf399-os-release\") pod \"multus-additional-cni-plugins-27975\" (UID: \"ceab872d-7a73-44d5-936e-3dd17facf399\") " pod="openshift-multus/multus-additional-cni-plugins-27975" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.889122 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-host-run-multus-certs\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.889149 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-multus-cni-dir\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.889167 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-cnibin\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.889181 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-etc-kubernetes\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.889195 4795 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwcqq\" (UniqueName: \"kubernetes.io/projected/e3fc3441-5d98-4323-8a78-cab492090c5a-kube-api-access-jwcqq\") pod \"node-resolver-vcd5b\" (UID: \"e3fc3441-5d98-4323-8a78-cab492090c5a\") " pod="openshift-dns/node-resolver-vcd5b" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.889228 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-host-run-netns\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.889273 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/50b9c3ea-4ff5-434f-803c-2365a0938f9a-cni-binary-copy\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.889303 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-hostroot\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.889329 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-multus-conf-dir\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.889348 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ceab872d-7a73-44d5-936e-3dd17facf399-tuning-conf-dir\") pod \"multus-additional-cni-plugins-27975\" (UID: \"ceab872d-7a73-44d5-936e-3dd17facf399\") " pod="openshift-multus/multus-additional-cni-plugins-27975" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.889385 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-os-release\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.889405 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ceab872d-7a73-44d5-936e-3dd17facf399-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-27975\" (UID: \"ceab872d-7a73-44d5-936e-3dd17facf399\") " pod="openshift-multus/multus-additional-cni-plugins-27975" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.889434 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e3fc3441-5d98-4323-8a78-cab492090c5a-hosts-file\") pod \"node-resolver-vcd5b\" (UID: \"e3fc3441-5d98-4323-8a78-cab492090c5a\") " pod="openshift-dns/node-resolver-vcd5b" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.889454 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgsq5\" (UniqueName: \"kubernetes.io/projected/ceab872d-7a73-44d5-936e-3dd17facf399-kube-api-access-xgsq5\") pod \"multus-additional-cni-plugins-27975\" (UID: \"ceab872d-7a73-44d5-936e-3dd17facf399\") " pod="openshift-multus/multus-additional-cni-plugins-27975" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.889473 4795 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-multus-socket-dir-parent\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.889494 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-host-var-lib-kubelet\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.889515 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-host-var-lib-cni-multus\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.889535 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv7gw\" (UniqueName: \"kubernetes.io/projected/50b9c3ea-4ff5-434f-803c-2365a0938f9a-kube-api-access-hv7gw\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.889552 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ceab872d-7a73-44d5-936e-3dd17facf399-system-cni-dir\") pod \"multus-additional-cni-plugins-27975\" (UID: \"ceab872d-7a73-44d5-936e-3dd17facf399\") " pod="openshift-multus/multus-additional-cni-plugins-27975" Nov 29 07:39:40 crc kubenswrapper[4795]: 
I1129 07:39:40.889571 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1-mcd-auth-proxy-config\") pod \"machine-config-daemon-bkmq6\" (UID: \"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\") " pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.889622 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1-rootfs\") pod \"machine-config-daemon-bkmq6\" (UID: \"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\") " pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.889642 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1-proxy-tls\") pod \"machine-config-daemon-bkmq6\" (UID: \"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\") " pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.909254 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.939616 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.990874 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p5fv\" (UniqueName: \"kubernetes.io/projected/1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1-kube-api-access-4p5fv\") pod \"machine-config-daemon-bkmq6\" (UID: \"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\") " pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.990922 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-host-run-k8s-cni-cncf-io\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.990940 4795 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ceab872d-7a73-44d5-936e-3dd17facf399-cnibin\") pod \"multus-additional-cni-plugins-27975\" (UID: \"ceab872d-7a73-44d5-936e-3dd17facf399\") " pod="openshift-multus/multus-additional-cni-plugins-27975" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.990966 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-system-cni-dir\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.990980 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-host-var-lib-cni-bin\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.990994 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/50b9c3ea-4ff5-434f-803c-2365a0938f9a-multus-daemon-config\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.991011 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ceab872d-7a73-44d5-936e-3dd17facf399-cni-binary-copy\") pod \"multus-additional-cni-plugins-27975\" (UID: \"ceab872d-7a73-44d5-936e-3dd17facf399\") " pod="openshift-multus/multus-additional-cni-plugins-27975" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.991026 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"os-release\" (UniqueName: \"kubernetes.io/host-path/ceab872d-7a73-44d5-936e-3dd17facf399-os-release\") pod \"multus-additional-cni-plugins-27975\" (UID: \"ceab872d-7a73-44d5-936e-3dd17facf399\") " pod="openshift-multus/multus-additional-cni-plugins-27975" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.991041 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-host-run-multus-certs\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.991056 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-multus-cni-dir\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.991073 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-cnibin\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.991069 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-host-run-k8s-cni-cncf-io\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.991106 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-system-cni-dir\") pod \"multus-hbg2m\" (UID: 
\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.991135 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-etc-kubernetes\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.991092 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-etc-kubernetes\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.991171 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-host-var-lib-cni-bin\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.991185 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwcqq\" (UniqueName: \"kubernetes.io/projected/e3fc3441-5d98-4323-8a78-cab492090c5a-kube-api-access-jwcqq\") pod \"node-resolver-vcd5b\" (UID: \"e3fc3441-5d98-4323-8a78-cab492090c5a\") " pod="openshift-dns/node-resolver-vcd5b" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.991228 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-host-run-netns\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.991262 4795 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/50b9c3ea-4ff5-434f-803c-2365a0938f9a-cni-binary-copy\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.991281 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-hostroot\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.991308 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-multus-conf-dir\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.991328 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ceab872d-7a73-44d5-936e-3dd17facf399-tuning-conf-dir\") pod \"multus-additional-cni-plugins-27975\" (UID: \"ceab872d-7a73-44d5-936e-3dd17facf399\") " pod="openshift-multus/multus-additional-cni-plugins-27975" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.991351 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-os-release\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.991372 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/ceab872d-7a73-44d5-936e-3dd17facf399-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-27975\" (UID: \"ceab872d-7a73-44d5-936e-3dd17facf399\") " pod="openshift-multus/multus-additional-cni-plugins-27975" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.991392 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e3fc3441-5d98-4323-8a78-cab492090c5a-hosts-file\") pod \"node-resolver-vcd5b\" (UID: \"e3fc3441-5d98-4323-8a78-cab492090c5a\") " pod="openshift-dns/node-resolver-vcd5b" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.991417 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgsq5\" (UniqueName: \"kubernetes.io/projected/ceab872d-7a73-44d5-936e-3dd17facf399-kube-api-access-xgsq5\") pod \"multus-additional-cni-plugins-27975\" (UID: \"ceab872d-7a73-44d5-936e-3dd17facf399\") " pod="openshift-multus/multus-additional-cni-plugins-27975" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.991437 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-multus-socket-dir-parent\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.991460 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-host-var-lib-kubelet\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.991482 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-host-var-lib-cni-multus\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.991507 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv7gw\" (UniqueName: \"kubernetes.io/projected/50b9c3ea-4ff5-434f-803c-2365a0938f9a-kube-api-access-hv7gw\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.991529 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ceab872d-7a73-44d5-936e-3dd17facf399-system-cni-dir\") pod \"multus-additional-cni-plugins-27975\" (UID: \"ceab872d-7a73-44d5-936e-3dd17facf399\") " pod="openshift-multus/multus-additional-cni-plugins-27975" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.991556 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1-mcd-auth-proxy-config\") pod \"machine-config-daemon-bkmq6\" (UID: \"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\") " pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.991583 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1-rootfs\") pod \"machine-config-daemon-bkmq6\" (UID: \"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\") " pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.991636 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1-proxy-tls\") pod \"machine-config-daemon-bkmq6\" (UID: \"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\") " pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.991804 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/50b9c3ea-4ff5-434f-803c-2365a0938f9a-multus-daemon-config\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.992015 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-cnibin\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.992043 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-multus-cni-dir\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.992070 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-host-var-lib-cni-multus\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.992106 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e3fc3441-5d98-4323-8a78-cab492090c5a-hosts-file\") pod \"node-resolver-vcd5b\" (UID: \"e3fc3441-5d98-4323-8a78-cab492090c5a\") " 
pod="openshift-dns/node-resolver-vcd5b" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.992091 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ceab872d-7a73-44d5-936e-3dd17facf399-cni-binary-copy\") pod \"multus-additional-cni-plugins-27975\" (UID: \"ceab872d-7a73-44d5-936e-3dd17facf399\") " pod="openshift-multus/multus-additional-cni-plugins-27975" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.992161 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ceab872d-7a73-44d5-936e-3dd17facf399-system-cni-dir\") pod \"multus-additional-cni-plugins-27975\" (UID: \"ceab872d-7a73-44d5-936e-3dd17facf399\") " pod="openshift-multus/multus-additional-cni-plugins-27975" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.992316 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-multus-socket-dir-parent\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.992371 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-host-var-lib-kubelet\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.992412 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1-rootfs\") pod \"machine-config-daemon-bkmq6\" (UID: \"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\") " pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" Nov 29 07:39:40 crc 
kubenswrapper[4795]: I1129 07:39:40.992617 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ceab872d-7a73-44d5-936e-3dd17facf399-os-release\") pod \"multus-additional-cni-plugins-27975\" (UID: \"ceab872d-7a73-44d5-936e-3dd17facf399\") " pod="openshift-multus/multus-additional-cni-plugins-27975" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.992626 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-multus-conf-dir\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.992649 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-hostroot\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.992676 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ceab872d-7a73-44d5-936e-3dd17facf399-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-27975\" (UID: \"ceab872d-7a73-44d5-936e-3dd17facf399\") " pod="openshift-multus/multus-additional-cni-plugins-27975" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.992731 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-os-release\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.991186 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/ceab872d-7a73-44d5-936e-3dd17facf399-cnibin\") pod \"multus-additional-cni-plugins-27975\" (UID: \"ceab872d-7a73-44d5-936e-3dd17facf399\") " pod="openshift-multus/multus-additional-cni-plugins-27975" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.991210 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-host-run-multus-certs\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.992666 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/50b9c3ea-4ff5-434f-803c-2365a0938f9a-host-run-netns\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.992930 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/50b9c3ea-4ff5-434f-803c-2365a0938f9a-cni-binary-copy\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.993044 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ceab872d-7a73-44d5-936e-3dd17facf399-tuning-conf-dir\") pod \"multus-additional-cni-plugins-27975\" (UID: \"ceab872d-7a73-44d5-936e-3dd17facf399\") " pod="openshift-multus/multus-additional-cni-plugins-27975" Nov 29 07:39:40 crc kubenswrapper[4795]: I1129 07:39:40.993049 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1-mcd-auth-proxy-config\") pod 
\"machine-config-daemon-bkmq6\" (UID: \"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\") " pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.001831 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1-proxy-tls\") pod \"machine-config-daemon-bkmq6\" (UID: \"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\") " pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.002221 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.009815 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgsq5\" (UniqueName: \"kubernetes.io/projected/ceab872d-7a73-44d5-936e-3dd17facf399-kube-api-access-xgsq5\") pod \"multus-additional-cni-plugins-27975\" (UID: \"ceab872d-7a73-44d5-936e-3dd17facf399\") " pod="openshift-multus/multus-additional-cni-plugins-27975" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.009887 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv7gw\" (UniqueName: \"kubernetes.io/projected/50b9c3ea-4ff5-434f-803c-2365a0938f9a-kube-api-access-hv7gw\") pod \"multus-hbg2m\" (UID: \"50b9c3ea-4ff5-434f-803c-2365a0938f9a\") " pod="openshift-multus/multus-hbg2m" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.010149 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p5fv\" (UniqueName: \"kubernetes.io/projected/1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1-kube-api-access-4p5fv\") pod 
\"machine-config-daemon-bkmq6\" (UID: \"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\") " pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.011126 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwcqq\" (UniqueName: \"kubernetes.io/projected/e3fc3441-5d98-4323-8a78-cab492090c5a-kube-api-access-jwcqq\") pod \"node-resolver-vcd5b\" (UID: \"e3fc3441-5d98-4323-8a78-cab492090c5a\") " pod="openshift-dns/node-resolver-vcd5b" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.014530 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.026429 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.043417 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.055237 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.066192 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.077717 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.090312 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.094553 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.097638 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-hbg2m" Nov 29 07:39:41 crc kubenswrapper[4795]: W1129 07:39:41.102532 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cf68fd3_dd2b_4cf2_b052_7b5a0965e9f1.slice/crio-a5c8d7389f4b37a3328944a46fdd6fb8bec156ef789c6eccc9d9614c0eff9c9f WatchSource:0}: Error finding container a5c8d7389f4b37a3328944a46fdd6fb8bec156ef789c6eccc9d9614c0eff9c9f: Status 404 returned error can't find the container with id a5c8d7389f4b37a3328944a46fdd6fb8bec156ef789c6eccc9d9614c0eff9c9f Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.104514 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-27975" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.111018 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-vcd5b" Nov 29 07:39:41 crc kubenswrapper[4795]: W1129 07:39:41.111655 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50b9c3ea_4ff5_434f_803c_2365a0938f9a.slice/crio-89b45779a75672f28794a74d8f667e5adac3eefb68372c6a81b67ab3e05d2e29 WatchSource:0}: Error finding container 89b45779a75672f28794a74d8f667e5adac3eefb68372c6a81b67ab3e05d2e29: Status 404 returned error can't find the container with id 89b45779a75672f28794a74d8f667e5adac3eefb68372c6a81b67ab3e05d2e29 Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.111860 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:41 crc kubenswrapper[4795]: W1129 07:39:41.121230 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podceab872d_7a73_44d5_936e_3dd17facf399.slice/crio-b1fb9001754284480e96afff799041b06ac03a418c76b6ad9be9a47a41070c1c WatchSource:0}: Error finding container b1fb9001754284480e96afff799041b06ac03a418c76b6ad9be9a47a41070c1c: Status 404 returned error can't find the container with id b1fb9001754284480e96afff799041b06ac03a418c76b6ad9be9a47a41070c1c Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.126064 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.145814 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.154223 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-km2g9"] Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.156097 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.157991 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.158243 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.158859 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.159062 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.159357 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.159540 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.159801 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.167661 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.187798 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.204657 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.220690 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.237514 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.248978 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.260794 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.273976 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.294535 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.294614 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-log-socket\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.294646 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-slash\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.294673 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-var-lib-openvswitch\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.295091 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-run-openvswitch\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.295112 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-cni-netd\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.295130 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-run-ovn\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.295158 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-kubelet\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.295173 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-cni-bin\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.295219 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-ovnkube-config\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.295251 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-run-systemd\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.295268 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-ovn-node-metrics-cert\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.295295 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-systemd-units\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.295332 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-run-netns\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.295349 4795 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-node-log\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.295453 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-env-overrides\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.295480 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-run-ovn-kubernetes\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.295503 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psskj\" (UniqueName: \"kubernetes.io/projected/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-kube-api-access-psskj\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.295531 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-ovnkube-script-lib\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 
07:39:41.295554 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-etc-openvswitch\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.297061 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-km2g9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.309639 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.324499 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.339363 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.356566 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.375657 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:41Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.396956 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-kubelet\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.397227 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-cni-bin\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc 
kubenswrapper[4795]: I1129 07:39:41.397312 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-ovnkube-config\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.397398 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-systemd-units\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.397284 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-cni-bin\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.397464 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-run-systemd\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.397628 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-ovn-node-metrics-cert\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.397667 4795 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-run-netns\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.397688 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-node-log\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.397725 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-env-overrides\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.397754 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-run-ovn-kubernetes\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.397772 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psskj\" (UniqueName: \"kubernetes.io/projected/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-kube-api-access-psskj\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.397789 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-ovnkube-script-lib\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.397817 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-etc-openvswitch\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.397837 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.397870 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-slash\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.397884 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-var-lib-openvswitch\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.397899 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-run-openvswitch\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.397913 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-log-socket\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.397938 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-run-ovn\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.397954 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-cni-netd\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.398040 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-cni-netd\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.397142 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-kubelet\") pod 
\"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.398083 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.398052 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-var-lib-openvswitch\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.398064 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-systemd-units\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.398133 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-run-openvswitch\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.398104 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-slash\") pod \"ovnkube-node-km2g9\" (UID: 
\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.398116 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-run-ovn-kubernetes\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.398141 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-etc-openvswitch\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.398064 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-run-ovn\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.398156 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-log-socket\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.398147 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-node-log\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc 
kubenswrapper[4795]: I1129 07:39:41.398187 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-ovnkube-config\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.398191 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-run-netns\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.398505 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-run-systemd\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.398793 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-env-overrides\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.398826 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-ovnkube-script-lib\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.402963 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-ovn-node-metrics-cert\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.416199 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psskj\" (UniqueName: \"kubernetes.io/projected/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-kube-api-access-psskj\") pod \"ovnkube-node-km2g9\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.438744 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vcd5b" event={"ID":"e3fc3441-5d98-4323-8a78-cab492090c5a","Type":"ContainerStarted","Data":"3b0aad18be30fa67114bcfb386e50a60382f730789956cf51d852f2aa8a56194"} Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.440066 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" event={"ID":"ceab872d-7a73-44d5-936e-3dd17facf399","Type":"ContainerStarted","Data":"b1fb9001754284480e96afff799041b06ac03a418c76b6ad9be9a47a41070c1c"} Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.441085 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hbg2m" event={"ID":"50b9c3ea-4ff5-434f-803c-2365a0938f9a","Type":"ContainerStarted","Data":"89b45779a75672f28794a74d8f667e5adac3eefb68372c6a81b67ab3e05d2e29"} Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.442441 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerStarted","Data":"a5c8d7389f4b37a3328944a46fdd6fb8bec156ef789c6eccc9d9614c0eff9c9f"} Nov 29 07:39:41 crc kubenswrapper[4795]: I1129 07:39:41.498143 4795 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:42 crc kubenswrapper[4795]: I1129 07:39:42.274932 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:39:42 crc kubenswrapper[4795]: I1129 07:39:42.274988 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:39:42 crc kubenswrapper[4795]: E1129 07:39:42.275081 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:39:42 crc kubenswrapper[4795]: E1129 07:39:42.275177 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:39:42 crc kubenswrapper[4795]: I1129 07:39:42.275565 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:39:42 crc kubenswrapper[4795]: E1129 07:39:42.275671 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:39:42 crc kubenswrapper[4795]: I1129 07:39:42.306115 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:39:42 crc kubenswrapper[4795]: I1129 07:39:42.306262 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:39:42 crc kubenswrapper[4795]: I1129 07:39:42.306348 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:39:42 crc kubenswrapper[4795]: I1129 07:39:42.306401 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:39:42 crc kubenswrapper[4795]: E1129 07:39:42.306435 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:39:46.306396264 +0000 UTC m=+32.281972064 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:39:42 crc kubenswrapper[4795]: E1129 07:39:42.306483 4795 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:39:42 crc kubenswrapper[4795]: I1129 07:39:42.306501 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:39:42 crc kubenswrapper[4795]: E1129 07:39:42.306524 4795 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:39:46.306515828 +0000 UTC m=+32.282091618 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:39:42 crc kubenswrapper[4795]: E1129 07:39:42.306450 4795 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:39:42 crc kubenswrapper[4795]: E1129 07:39:42.306551 4795 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:39:42 crc kubenswrapper[4795]: E1129 07:39:42.306562 4795 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:39:42 crc kubenswrapper[4795]: E1129 07:39:42.306571 4795 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:39:42 crc kubenswrapper[4795]: E1129 07:39:42.306622 4795 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:39:42 crc kubenswrapper[4795]: E1129 07:39:42.306642 4795 projected.go:194] 
Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:39:42 crc kubenswrapper[4795]: E1129 07:39:42.306674 4795 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:39:42 crc kubenswrapper[4795]: E1129 07:39:42.306584 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-29 07:39:46.306578699 +0000 UTC m=+32.282154489 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:39:42 crc kubenswrapper[4795]: E1129 07:39:42.306726 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:39:46.306710963 +0000 UTC m=+32.282286773 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:39:42 crc kubenswrapper[4795]: E1129 07:39:42.306751 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-29 07:39:46.306739124 +0000 UTC m=+32.282315014 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:39:42 crc kubenswrapper[4795]: I1129 07:39:42.446020 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" event={"ID":"3d3ff2b2-cbaa-4309-805a-2b044f867d3a","Type":"ContainerStarted","Data":"1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406"} Nov 29 07:39:42 crc kubenswrapper[4795]: I1129 07:39:42.446071 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" event={"ID":"3d3ff2b2-cbaa-4309-805a-2b044f867d3a","Type":"ContainerStarted","Data":"4d75662c6fe3de9e1cea2f71f945627df52c0bac515aeab0294c65662cf67ce5"} Nov 29 07:39:42 crc kubenswrapper[4795]: I1129 07:39:42.447938 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" 
event={"ID":"ceab872d-7a73-44d5-936e-3dd17facf399","Type":"ContainerStarted","Data":"f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640"} Nov 29 07:39:42 crc kubenswrapper[4795]: I1129 07:39:42.450341 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerStarted","Data":"63f3aac368767d0534a658bc0223f5cda0a951940f2d6ff3f1c89a6ac401a7d5"} Nov 29 07:39:42 crc kubenswrapper[4795]: I1129 07:39:42.932949 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-s9x86"] Nov 29 07:39:42 crc kubenswrapper[4795]: I1129 07:39:42.933313 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-s9x86" Nov 29 07:39:42 crc kubenswrapper[4795]: I1129 07:39:42.935791 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 29 07:39:42 crc kubenswrapper[4795]: I1129 07:39:42.935972 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 29 07:39:42 crc kubenswrapper[4795]: I1129 07:39:42.937114 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 29 07:39:42 crc kubenswrapper[4795]: I1129 07:39:42.937787 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 29 07:39:42 crc kubenswrapper[4795]: I1129 07:39:42.951142 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:42 crc kubenswrapper[4795]: I1129 07:39:42.963961 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:42 crc kubenswrapper[4795]: I1129 07:39:42.981254 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:42 crc kubenswrapper[4795]: I1129 07:39:42.999204 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:42Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.016525 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.027860 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s9x86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcb60df4-ad48-4830-b8c7-c63621b96707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s9x86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.040972 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.054937 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.069911 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.082147 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.093231 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.108653 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.114854 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fcb60df4-ad48-4830-b8c7-c63621b96707-host\") pod \"node-ca-s9x86\" (UID: \"fcb60df4-ad48-4830-b8c7-c63621b96707\") " pod="openshift-image-registry/node-ca-s9x86" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.114984 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fcb60df4-ad48-4830-b8c7-c63621b96707-serviceca\") pod \"node-ca-s9x86\" (UID: \"fcb60df4-ad48-4830-b8c7-c63621b96707\") " pod="openshift-image-registry/node-ca-s9x86" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.115033 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr9tb\" (UniqueName: \"kubernetes.io/projected/fcb60df4-ad48-4830-b8c7-c63621b96707-kube-api-access-sr9tb\") pod \"node-ca-s9x86\" (UID: \"fcb60df4-ad48-4830-b8c7-c63621b96707\") " pod="openshift-image-registry/node-ca-s9x86" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.129175 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-km2g9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.216120 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr9tb\" (UniqueName: \"kubernetes.io/projected/fcb60df4-ad48-4830-b8c7-c63621b96707-kube-api-access-sr9tb\") pod \"node-ca-s9x86\" (UID: \"fcb60df4-ad48-4830-b8c7-c63621b96707\") " pod="openshift-image-registry/node-ca-s9x86" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.216193 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fcb60df4-ad48-4830-b8c7-c63621b96707-host\") pod \"node-ca-s9x86\" (UID: \"fcb60df4-ad48-4830-b8c7-c63621b96707\") " pod="openshift-image-registry/node-ca-s9x86" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.216242 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fcb60df4-ad48-4830-b8c7-c63621b96707-serviceca\") pod \"node-ca-s9x86\" (UID: \"fcb60df4-ad48-4830-b8c7-c63621b96707\") " pod="openshift-image-registry/node-ca-s9x86" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.216369 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fcb60df4-ad48-4830-b8c7-c63621b96707-host\") pod \"node-ca-s9x86\" (UID: \"fcb60df4-ad48-4830-b8c7-c63621b96707\") " pod="openshift-image-registry/node-ca-s9x86" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.217172 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fcb60df4-ad48-4830-b8c7-c63621b96707-serviceca\") pod \"node-ca-s9x86\" (UID: 
\"fcb60df4-ad48-4830-b8c7-c63621b96707\") " pod="openshift-image-registry/node-ca-s9x86" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.232426 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr9tb\" (UniqueName: \"kubernetes.io/projected/fcb60df4-ad48-4830-b8c7-c63621b96707-kube-api-access-sr9tb\") pod \"node-ca-s9x86\" (UID: \"fcb60df4-ad48-4830-b8c7-c63621b96707\") " pod="openshift-image-registry/node-ca-s9x86" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.429448 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.432782 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.438061 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.443772 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.454288 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hbg2m" event={"ID":"50b9c3ea-4ff5-434f-803c-2365a0938f9a","Type":"ContainerStarted","Data":"46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89"} Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.455474 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.456173 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerStarted","Data":"148610b20c1dfe43c3e5443cda520fa95ed2823afcaa880dbe879b6d93dc8ce3"} Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.457559 4795 generic.go:334] "Generic (PLEG): container finished" podID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerID="1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406" exitCode=0 
Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.457626 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" event={"ID":"3d3ff2b2-cbaa-4309-805a-2b044f867d3a","Type":"ContainerDied","Data":"1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406"} Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.459264 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"1b4d032ee6e86c2b61a59ea1477aa6d97a454c754888c37db24bc49d921eacd1"} Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.460466 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vcd5b" event={"ID":"e3fc3441-5d98-4323-8a78-cab492090c5a","Type":"ContainerStarted","Data":"d047b5f2e979d7eebec46b45ef363e94e99bd7311214872c66fcce39cfd411db"} Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.461822 4795 generic.go:334] "Generic (PLEG): container finished" podID="ceab872d-7a73-44d5-936e-3dd17facf399" containerID="f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640" exitCode=0 Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.461899 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" event={"ID":"ceab872d-7a73-44d5-936e-3dd17facf399","Type":"ContainerDied","Data":"f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640"} Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.466235 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-s9x86" Nov 29 07:39:43 crc kubenswrapper[4795]: E1129 07:39:43.467924 4795 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.479059 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.490530 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s9x86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcb60df4-ad48-4830-b8c7-c63621b96707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s9x86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.503409 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.517735 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.529887 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.544064 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.556754 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.568563 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.580468 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.599886 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-km2g9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.615946 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.630859 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.644130 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.653870 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s9x86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcb60df4-ad48-4830-b8c7-c63621b96707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s9x86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.670013 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"262bf41d-eaf0-42b7-946d-3223857ca705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbfc45764b29055dcafdee2e3b77893c35b23cfa3eaa9125f01850baef2fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://508f7cb88a5b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9f1344bd7c3a1bc53870f7e5f5f2ca4993df112a24aa474655cc07670e22c48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06202234f21aa27a3dc67978cc9f82738d7526100bce3fd640be08131413f69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.692809 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.723697 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.757533 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.780356 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.786430 4795 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.788470 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.788519 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.788534 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.788670 4795 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.800307 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.801709 4795 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.801843 4795 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.802822 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.802844 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.802852 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.802867 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.802877 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:43Z","lastTransitionTime":"2025-11-29T07:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.815703 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4d032ee6e86c2b61a59ea1477aa6d97a454c754888c37db24bc49d921eacd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: E1129 07:39:43.818164 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"message\\\":\\\"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redh
at/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99
d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815
\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\"
:448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8cd0c036-22eb-405e-b57e-ec0c4424780e\\\",\\\"systemUUID\\\":\\\"bd085386-a70e-485f-9f18-00b3aef4bcca\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.821608 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.821653 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.821666 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.821685 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.821701 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:43Z","lastTransitionTime":"2025-11-29T07:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.832827 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: E1129 07:39:43.839419 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8cd0c036-22eb-405e-b57e-ec0c4424780e\\\",\\\"systemUUID\\\":\\\"bd085386-a70e-485f-9f18-00b3aef4bcca\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.843906 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.843938 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.843947 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.843963 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.843972 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:43Z","lastTransitionTime":"2025-11-29T07:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.854376 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-km2g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: E1129 07:39:43.856471 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8cd0c036-22eb-405e-b57e-ec0c4424780e\\\",\\\"systemUUID\\\":\\\"bd085386-a70e-485f-9f18-00b3aef4bcca\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.860785 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.860813 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.860822 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.860835 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.860845 4795 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:43Z","lastTransitionTime":"2025-11-29T07:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.869072 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:
39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: E1129 07:39:43.872213 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8cd0c036-22eb-405e-b57e-ec0c4424780e\\\",\\\"systemUUID\\\":\\\"bd085386-a70e-485f-9f18-00b3aef4bcca\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.875991 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.876056 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.876074 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.876104 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.876128 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:43Z","lastTransitionTime":"2025-11-29T07:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.880990 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d047b5f2e979d7eebec46b45ef363e94e99bd7311214872c66fcce39cfd411db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: E1129 07:39:43.889958 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8cd0c036-22eb-405e-b57e-ec0c4424780e\\\",\\\"systemUUID\\\":\\\"bd085386-a70e-485f-9f18-00b3aef4bcca\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:43Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:43 crc kubenswrapper[4795]: E1129 07:39:43.890078 4795 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.892551 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.892581 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.892604 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.892624 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.892635 4795 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:43Z","lastTransitionTime":"2025-11-29T07:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.995153 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.995199 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.995210 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.995229 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:43 crc kubenswrapper[4795]: I1129 07:39:43.995243 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:43Z","lastTransitionTime":"2025-11-29T07:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.097909 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.097952 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.097963 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.097980 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.097992 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:44Z","lastTransitionTime":"2025-11-29T07:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.199977 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.200013 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.200021 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.200037 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.200047 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:44Z","lastTransitionTime":"2025-11-29T07:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.276836 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.276873 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.276909 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:39:44 crc kubenswrapper[4795]: E1129 07:39:44.277044 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:39:44 crc kubenswrapper[4795]: E1129 07:39:44.277381 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:39:44 crc kubenswrapper[4795]: E1129 07:39:44.277249 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.290393 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.301929 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.301965 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.301977 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.301995 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.302005 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:44Z","lastTransitionTime":"2025-11-29T07:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.306309 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.322323 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.341119 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.356635 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4d032ee6e86c2b61a59ea1477aa6d97a454c754888c37db24bc49d921eacd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.381014 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.405670 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.405796 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.406064 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.406088 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.406101 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:44Z","lastTransitionTime":"2025-11-29T07:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.416122 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-km2g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.429704 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.441493 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d047b5f2e979d7eebec46b45ef363e94e99bd7311214872c66fcce39cfd411db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.456904 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"262bf41d-eaf0-42b7-946d-3223857ca705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbfc45764b29055dcafdee2e3b77893c35b23cfa3eaa9125f01850baef2fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://508f7cb88a5b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9f1344bd7c3a1bc53870f7e5f5f2ca4993df112a24aa474655cc07670e22c48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06202234f21aa27a3dc67978cc9f82738d7526100bce3fd640be08131413f69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.466744 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-s9x86" event={"ID":"fcb60df4-ad48-4830-b8c7-c63621b96707","Type":"ContainerStarted","Data":"7016754c3d94e4443a21cf0f5c2c07e663b05b59df578f216299c81428118d6b"} Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.473344 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.488123 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.505762 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.508862 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.508934 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:44 crc 
kubenswrapper[4795]: I1129 07:39:44.508950 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.508973 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.508987 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:44Z","lastTransitionTime":"2025-11-29T07:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.518113 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s9x86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcb60df4-ad48-4830-b8c7-c63621b96707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s9x86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.616312 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.616650 4795 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.616666 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.616682 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.616697 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:44Z","lastTransitionTime":"2025-11-29T07:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.726067 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.726131 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.726194 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.726215 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.726226 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:44Z","lastTransitionTime":"2025-11-29T07:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.829541 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.829615 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.829630 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.829649 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.829695 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:44Z","lastTransitionTime":"2025-11-29T07:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.933120 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.933171 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.933183 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.933202 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:44 crc kubenswrapper[4795]: I1129 07:39:44.933213 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:44Z","lastTransitionTime":"2025-11-29T07:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.035526 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.035579 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.035605 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.035625 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.035641 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:45Z","lastTransitionTime":"2025-11-29T07:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.137284 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.137314 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.137322 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.137337 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.137347 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:45Z","lastTransitionTime":"2025-11-29T07:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.240437 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.240472 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.240482 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.240500 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.240515 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:45Z","lastTransitionTime":"2025-11-29T07:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.341907 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.341946 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.341957 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.341971 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.341981 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:45Z","lastTransitionTime":"2025-11-29T07:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.444880 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.444910 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.444919 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.444933 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.444942 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:45Z","lastTransitionTime":"2025-11-29T07:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.471862 4795 generic.go:334] "Generic (PLEG): container finished" podID="ceab872d-7a73-44d5-936e-3dd17facf399" containerID="f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88" exitCode=0 Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.471935 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" event={"ID":"ceab872d-7a73-44d5-936e-3dd17facf399","Type":"ContainerDied","Data":"f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88"} Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.475078 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" event={"ID":"3d3ff2b2-cbaa-4309-805a-2b044f867d3a","Type":"ContainerStarted","Data":"bcce9ccf35e72987d01428b7573aafacfe90fc973009657b077f40b6fe6ddecb"} Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.475129 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" event={"ID":"3d3ff2b2-cbaa-4309-805a-2b044f867d3a","Type":"ContainerStarted","Data":"ceed8db447ae237cae16d2aa37b1651eade557ea7aaab5bc7b55678ce6e36f57"} Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.475145 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" event={"ID":"3d3ff2b2-cbaa-4309-805a-2b044f867d3a","Type":"ContainerStarted","Data":"feacad8666527640ec9d6bee04a743e99e0cc9f9e9c6432c1a70bc02e85fa039"} Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.475157 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" event={"ID":"3d3ff2b2-cbaa-4309-805a-2b044f867d3a","Type":"ContainerStarted","Data":"25239d7c14973e35144f9ac1b909564749220fc32b1851b68e08434a5aca7def"} Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.477157 4795 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-image-registry/node-ca-s9x86" event={"ID":"fcb60df4-ad48-4830-b8c7-c63621b96707","Type":"ContainerStarted","Data":"3a6d9cdb665b3bed924577ab173dcf48968a0b7217ca81b1f8708d733d27d3d2"} Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.488912 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.509480 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.525511 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4d032ee6e86c2b61a59ea1477aa6d97a454c754888c37db24bc49d921eacd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:39:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.539891 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.548913 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.549389 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.549404 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.549425 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.549439 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:45Z","lastTransitionTime":"2025-11-29T07:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.566911 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-km2g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.583455 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.594847 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d047b5f2e979d7eebec46b45ef363e94e99bd7311214872c66fcce39cfd411db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.611910 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.627081 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-11-29T07:39:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.643336 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s9x86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcb60df4-ad48-4830-b8c7-c63621b96707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s9x86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.653022 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.653072 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.653084 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 
07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.653100 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.653113 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:45Z","lastTransitionTime":"2025-11-29T07:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.662130 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"262bf41d-eaf0-42b7-946d-3223857ca705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbfc45764b29055dcafdee2e3b77893c35b23cfa3eaa9125f01850baef2fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85
aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://508f7cb88a5b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9f1344bd7c3a1bc53870f7e5f5f2ca4993df112a24aa474655cc07670e22c48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"st
artedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06202234f21aa27a3dc67978cc9f82738d7526100bce3fd640be08131413f69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.676516 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.694810 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.713877 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.745301 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4d032ee6e86c2b61a59ea1477aa6d97a454c754888c37db24bc49d921eacd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:39:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.760933 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.760991 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.761013 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.761036 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.761049 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:45Z","lastTransitionTime":"2025-11-29T07:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.769952 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://148610b20c1dfe43c3e5443cda520fa95ed2823afcaa880dbe879b6d93dc8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3aac368767d0534a658bc0223f5cda0a951940f2d6ff3f1c89a6ac401a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.803735 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-km2g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.823481 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.842455 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.854421 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.864023 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:45 crc 
kubenswrapper[4795]: I1129 07:39:45.864073 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.864084 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.864103 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.864116 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:45Z","lastTransitionTime":"2025-11-29T07:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.865470 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d047b5f2e979d7eebec46b45ef363e94e99bd7311214872c66fcce39cfd411db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.879691 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"262bf41d-eaf0-42b7-946d-3223857ca705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbfc45764b29055dcafdee2e3b77893c35b23cfa3eaa9125f01850baef2fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://508f7cb88a5b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9f1344bd7c3a1bc53870f7e5f5f2ca4993df112a24aa474655cc07670e22c48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06202234f21aa27a3dc67978cc9f82738d7526100bce3fd640be08131413f69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.893107 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.906789 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.923049 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.934067 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s9x86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcb60df4-ad48-4830-b8c7-c63621b96707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6d9cdb665b3bed924577ab173dcf48968a0b7217ca81b1f8708d733d27d3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s9x86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.948894 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.964266 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.966148 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.966182 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.966191 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.966206 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:45 crc kubenswrapper[4795]: I1129 07:39:45.966217 4795 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:45Z","lastTransitionTime":"2025-11-29T07:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.067850 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.067887 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.067895 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.067909 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.067918 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:46Z","lastTransitionTime":"2025-11-29T07:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.170814 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.170854 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.170864 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.170877 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.170888 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:46Z","lastTransitionTime":"2025-11-29T07:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.273523 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.273625 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.273645 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.273672 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.273691 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:46Z","lastTransitionTime":"2025-11-29T07:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.275186 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.275186 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:39:46 crc kubenswrapper[4795]: E1129 07:39:46.275326 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.275373 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:39:46 crc kubenswrapper[4795]: E1129 07:39:46.275470 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:39:46 crc kubenswrapper[4795]: E1129 07:39:46.275630 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.354175 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.354319 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.354371 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.354409 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.354438 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" 
(UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:39:46 crc kubenswrapper[4795]: E1129 07:39:46.354486 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:39:54.354435727 +0000 UTC m=+40.330011527 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:39:46 crc kubenswrapper[4795]: E1129 07:39:46.354526 4795 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:39:46 crc kubenswrapper[4795]: E1129 07:39:46.354550 4795 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:39:46 crc kubenswrapper[4795]: E1129 07:39:46.354634 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:39:54.354613442 +0000 UTC m=+40.330189232 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:39:46 crc kubenswrapper[4795]: E1129 07:39:46.354674 4795 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:39:46 crc kubenswrapper[4795]: E1129 07:39:46.354706 4795 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:39:46 crc kubenswrapper[4795]: E1129 07:39:46.354715 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:39:54.354693504 +0000 UTC m=+40.330269304 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:39:46 crc kubenswrapper[4795]: E1129 07:39:46.354726 4795 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:39:46 crc kubenswrapper[4795]: E1129 07:39:46.354784 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-29 07:39:54.354771756 +0000 UTC m=+40.330347736 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:39:46 crc kubenswrapper[4795]: E1129 07:39:46.355026 4795 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:39:46 crc kubenswrapper[4795]: E1129 07:39:46.355103 4795 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:39:46 crc kubenswrapper[4795]: E1129 07:39:46.355169 4795 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:39:46 crc kubenswrapper[4795]: E1129 07:39:46.355273 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-29 07:39:54.355261959 +0000 UTC m=+40.330837939 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.376091 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.376134 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.376144 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.376162 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.376173 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:46Z","lastTransitionTime":"2025-11-29T07:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.486801 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.486856 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.486870 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.486895 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.486910 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:46Z","lastTransitionTime":"2025-11-29T07:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.491796 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" event={"ID":"3d3ff2b2-cbaa-4309-805a-2b044f867d3a","Type":"ContainerStarted","Data":"248e5291402287c1a589a168527135bd1df15e293a7614681fc896fcbdeb9dcf"} Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.491877 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" event={"ID":"3d3ff2b2-cbaa-4309-805a-2b044f867d3a","Type":"ContainerStarted","Data":"8112226f945ad27e0c5af4d588800322977e17efc3eac9acec2ce591f30f46fe"} Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.495034 4795 generic.go:334] "Generic (PLEG): container finished" podID="ceab872d-7a73-44d5-936e-3dd17facf399" containerID="ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038" exitCode=0 Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.495110 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" event={"ID":"ceab872d-7a73-44d5-936e-3dd17facf399","Type":"ContainerDied","Data":"ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038"} Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.513674 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.534204 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.549772 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://148610b20c1dfe43c3e5443cda520fa95ed2823afcaa880dbe879b6d93dc8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3aac368767d0534a658bc0223f5cda0a95194
0f2d6ff3f1c89a6ac401a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.572414 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-km2g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.588545 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.590913 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.590945 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.590958 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.590976 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.590988 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:46Z","lastTransitionTime":"2025-11-29T07:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.605475 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.618873 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4d032ee6e86c2b61a59ea1477aa6d97a454c754888c37db24bc49d921eacd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.633051 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.644135 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d047b5f2e979d7eebec46b45ef363e94e99bd7311214872c66fcce39cfd411db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.657940 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"262bf41d-eaf0-42b7-946d-3223857ca705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbfc45764b29055dcafdee2e3b77893c35b23cfa3eaa9125f01850baef2fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://508f7cb88a5b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9f1344bd7c3a1bc53870f7e5f5f2ca4993df112a24aa474655cc07670e22c48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06202234f21aa27a3dc67978cc9f82738d7526100bce3fd640be08131413f69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.672270 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.688849 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.693561 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.693633 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.693643 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.693657 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.693668 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:46Z","lastTransitionTime":"2025-11-29T07:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.703993 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 
07:39:46.714889 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s9x86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcb60df4-ad48-4830-b8c7-c63621b96707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6d9cdb665b3bed924577ab173dcf48968a0b7217ca81b1f8708d733d27d3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr9tb\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s9x86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.796376 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.796419 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.796428 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.796445 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.796459 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:46Z","lastTransitionTime":"2025-11-29T07:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.899155 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.899220 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.899240 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.899267 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:46 crc kubenswrapper[4795]: I1129 07:39:46.899291 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:46Z","lastTransitionTime":"2025-11-29T07:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.002607 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.002658 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.002668 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.002684 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.002695 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:47Z","lastTransitionTime":"2025-11-29T07:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.106982 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.107027 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.107039 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.107057 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.107069 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:47Z","lastTransitionTime":"2025-11-29T07:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.210371 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.210403 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.210413 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.210427 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.210438 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:47Z","lastTransitionTime":"2025-11-29T07:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.312739 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.313285 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.313459 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.313639 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.313803 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:47Z","lastTransitionTime":"2025-11-29T07:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.416424 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.416750 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.416825 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.416917 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.417002 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:47Z","lastTransitionTime":"2025-11-29T07:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.502416 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" event={"ID":"ceab872d-7a73-44d5-936e-3dd17facf399","Type":"ContainerStarted","Data":"c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094"} Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.519204 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.519232 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.519241 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.519255 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.519266 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:47Z","lastTransitionTime":"2025-11-29T07:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.622989 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.623016 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.623028 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.623042 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.623052 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:47Z","lastTransitionTime":"2025-11-29T07:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.726096 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.726143 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.726157 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.726180 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.726203 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:47Z","lastTransitionTime":"2025-11-29T07:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.830572 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.830651 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.830668 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.830694 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.830709 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:47Z","lastTransitionTime":"2025-11-29T07:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.940153 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.940223 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.940262 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.940287 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:47 crc kubenswrapper[4795]: I1129 07:39:47.940304 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:47Z","lastTransitionTime":"2025-11-29T07:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.043269 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.043315 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.043325 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.043340 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.043351 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:48Z","lastTransitionTime":"2025-11-29T07:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.146284 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.146934 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.146970 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.146993 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.147007 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:48Z","lastTransitionTime":"2025-11-29T07:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.249285 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.249327 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.249337 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.249357 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.249370 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:48Z","lastTransitionTime":"2025-11-29T07:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.275375 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.275438 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.275378 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:39:48 crc kubenswrapper[4795]: E1129 07:39:48.275511 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:39:48 crc kubenswrapper[4795]: E1129 07:39:48.275568 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:39:48 crc kubenswrapper[4795]: E1129 07:39:48.275654 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.351743 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.351787 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.351799 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.351816 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.351830 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:48Z","lastTransitionTime":"2025-11-29T07:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.453776 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.453814 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.453827 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.453844 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.453858 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:48Z","lastTransitionTime":"2025-11-29T07:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.508610 4795 generic.go:334] "Generic (PLEG): container finished" podID="ceab872d-7a73-44d5-936e-3dd17facf399" containerID="c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094" exitCode=0 Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.508689 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" event={"ID":"ceab872d-7a73-44d5-936e-3dd17facf399","Type":"ContainerDied","Data":"c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094"} Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.514939 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" event={"ID":"3d3ff2b2-cbaa-4309-805a-2b044f867d3a","Type":"ContainerStarted","Data":"c63ac8cc3766762113c8fdfdc459f77570fc533bf636011f1e64a5c0a55b3ee0"} Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.524460 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.540669 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.554425 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4d032ee6e86c2b61a59ea1477aa6d97a454c754888c37db24bc49d921eacd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:39:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.555825 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.555859 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.555870 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.555887 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.555897 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:48Z","lastTransitionTime":"2025-11-29T07:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.570890 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://148610b20c1dfe43c3e5443cda520fa95ed2823afcaa880dbe879b6d93dc8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3aac368767d0534a658bc0223f5cda0a951940f2d6ff3f1c89a6ac401a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.590107 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-km2g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.602565 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d047b5f2e979d7eebec46b45ef363e94e99bd7311214872c66fcce39cfd411db\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.618049 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.631465 4795 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.646360 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.658620 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.658676 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:48 crc 
kubenswrapper[4795]: I1129 07:39:48.658688 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.658709 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.658722 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:48Z","lastTransitionTime":"2025-11-29T07:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.665094 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b2
54ca7f6b402570dbb791f60e038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-1
1-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.676190 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s9x86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcb60df4-ad48-4830-b8c7-c63621b96707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6d9cdb665b3bed924577ab173dcf48968a0b7217ca81b1f8708d733d27d3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c
85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s9x86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.693978 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"262bf41d-eaf0-42b7-946d-3223857ca705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbfc45764b29055dcafdee2e3b77893c35b23cfa3eaa9125f01850baef2fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://508f7cb88a5b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9f1344bd7c3a1bc53870f7e5f5f2ca4993df112a24aa474655cc07670e22c48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06202234f21aa27a3dc67978cc9f82738d7526100bce3fd640be08131413f69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.705875 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.719370 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.761650 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.761702 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.761711 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.761725 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.761735 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:48Z","lastTransitionTime":"2025-11-29T07:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.863959 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.864012 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.864021 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.864035 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.864044 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:48Z","lastTransitionTime":"2025-11-29T07:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.966189 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.966236 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.966247 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.966265 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:48 crc kubenswrapper[4795]: I1129 07:39:48.966278 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:48Z","lastTransitionTime":"2025-11-29T07:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.068561 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.068631 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.068646 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.068665 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.068685 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:49Z","lastTransitionTime":"2025-11-29T07:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.176327 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.176369 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.176380 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.176395 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.176404 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:49Z","lastTransitionTime":"2025-11-29T07:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.279396 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.279435 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.279457 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.279479 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.279496 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:49Z","lastTransitionTime":"2025-11-29T07:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.381781 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.381815 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.381823 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.381837 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.381847 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:49Z","lastTransitionTime":"2025-11-29T07:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.485003 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.485055 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.485068 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.485085 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.485098 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:49Z","lastTransitionTime":"2025-11-29T07:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.588738 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.588775 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.588784 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.588800 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.588810 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:49Z","lastTransitionTime":"2025-11-29T07:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.692362 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.692440 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.692471 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.692502 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.692527 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:49Z","lastTransitionTime":"2025-11-29T07:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.795552 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.795585 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.795616 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.795632 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.795644 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:49Z","lastTransitionTime":"2025-11-29T07:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.898960 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.899538 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.899799 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.899834 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:49 crc kubenswrapper[4795]: I1129 07:39:49.899861 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:49Z","lastTransitionTime":"2025-11-29T07:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.003303 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.003346 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.003355 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.003389 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.003402 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:50Z","lastTransitionTime":"2025-11-29T07:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.107319 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.107380 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.107395 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.107417 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.107430 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:50Z","lastTransitionTime":"2025-11-29T07:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.210373 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.210441 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.210455 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.210479 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.210497 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:50Z","lastTransitionTime":"2025-11-29T07:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.275121 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.275162 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.275187 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:39:50 crc kubenswrapper[4795]: E1129 07:39:50.275275 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:39:50 crc kubenswrapper[4795]: E1129 07:39:50.275560 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:39:50 crc kubenswrapper[4795]: E1129 07:39:50.275766 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.312676 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.312718 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.312729 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.312748 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.312764 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:50Z","lastTransitionTime":"2025-11-29T07:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.415131 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.415401 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.415519 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.415821 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.415945 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:50Z","lastTransitionTime":"2025-11-29T07:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.518863 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.518983 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.519005 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.519035 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.519061 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:50Z","lastTransitionTime":"2025-11-29T07:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.528872 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" event={"ID":"ceab872d-7a73-44d5-936e-3dd17facf399","Type":"ContainerStarted","Data":"1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d"} Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.621620 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.621666 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.621677 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.621695 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.621707 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:50Z","lastTransitionTime":"2025-11-29T07:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.724290 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.724329 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.724341 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.724359 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.724372 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:50Z","lastTransitionTime":"2025-11-29T07:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.826484 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.826531 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.826543 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.826566 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.826584 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:50Z","lastTransitionTime":"2025-11-29T07:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.929889 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.929931 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.929942 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.929957 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:50 crc kubenswrapper[4795]: I1129 07:39:50.929971 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:50Z","lastTransitionTime":"2025-11-29T07:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.033278 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.033336 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.033352 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.033371 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.033383 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:51Z","lastTransitionTime":"2025-11-29T07:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.138835 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.138885 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.138899 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.138916 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.138932 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:51Z","lastTransitionTime":"2025-11-29T07:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.241975 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.242393 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.242583 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.242756 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.242870 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:51Z","lastTransitionTime":"2025-11-29T07:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.346087 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.346136 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.346145 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.346162 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.346172 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:51Z","lastTransitionTime":"2025-11-29T07:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.449395 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.449444 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.449457 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.449479 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.449494 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:51Z","lastTransitionTime":"2025-11-29T07:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.548824 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:51Z 
is after 2025-08-24T17:21:41Z" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.552735 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.552810 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.552828 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.552885 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.552904 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:51Z","lastTransitionTime":"2025-11-29T07:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.559774 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d047b5f2e979d7eebec46b45ef363e94e99bd7311214872c66fcce39cfd411db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.574078 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.593460 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgs
q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.608830 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s9x86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcb60df4-ad48-4830-b8c7-c63621b96707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6d9cdb665b3bed924577ab173dcf48968a0b7217ca81b1f8708d733d27d3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520e
d63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s9x86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.625496 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"262bf41d-eaf0-42b7-946d-3223857ca705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbfc45764b29055dcafdee2e3b77893c35b23cfa3eaa9125f01850baef2fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://508f7cb88a5b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9f1344bd7c3a1bc53870f7e5f5f2ca4993df112a24aa474655cc07670e22c48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06202234f21aa27a3dc67978cc9f82738d7526100bce3fd640be08131413f69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.641433 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.656233 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.656627 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.656680 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.656695 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.656718 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.656735 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:51Z","lastTransitionTime":"2025-11-29T07:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.671950 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.689150 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.710084 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.729162 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4d032ee6e86c2b61a59ea1477aa6d97a454c754888c37db24bc49d921eacd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.744742 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://148610b20c1dfe43c3e5443cda520fa95ed2823afcaa880dbe879b6d93dc8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3aac368767d0534a658bc0223f5cda0a951940f2d6ff3f1c89a6ac401a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2025-11-29T07:39:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.760295 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.760339 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.760377 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.760396 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.760409 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:51Z","lastTransitionTime":"2025-11-29T07:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.765882 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-km2g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.865735 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.865809 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.865830 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.865860 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.865882 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:51Z","lastTransitionTime":"2025-11-29T07:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.969805 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.969894 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.969919 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.969944 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:51 crc kubenswrapper[4795]: I1129 07:39:51.969964 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:51Z","lastTransitionTime":"2025-11-29T07:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.072873 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.073153 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.073168 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.073184 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.073196 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:52Z","lastTransitionTime":"2025-11-29T07:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.175383 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.175419 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.175428 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.175443 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.175456 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:52Z","lastTransitionTime":"2025-11-29T07:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.275049 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:39:52 crc kubenswrapper[4795]: E1129 07:39:52.275631 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.275724 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:39:52 crc kubenswrapper[4795]: E1129 07:39:52.275935 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.276146 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:39:52 crc kubenswrapper[4795]: E1129 07:39:52.276476 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.282684 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.282732 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.282749 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.282776 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.282797 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:52Z","lastTransitionTime":"2025-11-29T07:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.385500 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.385563 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.385579 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.385622 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.385635 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:52Z","lastTransitionTime":"2025-11-29T07:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.491907 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.491964 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.491979 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.492003 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.492020 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:52Z","lastTransitionTime":"2025-11-29T07:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.562183 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" event={"ID":"3d3ff2b2-cbaa-4309-805a-2b044f867d3a","Type":"ContainerStarted","Data":"178c4829fb6723552ece37b1b2a9db689e2acb44c46d095d72a2ad03af1d2460"} Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.562648 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.581472 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.595917 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.595975 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.595989 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.596008 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.596286 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:52Z","lastTransitionTime":"2025-11-29T07:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.602743 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\
\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.624936 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.642934 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.659542 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4d032ee6e86c2b61a59ea1477aa6d97a454c754888c37db24bc49d921eacd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.673310 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://148610b20c1dfe43c3e5443cda520fa95ed2823afcaa880dbe879b6d93dc8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3aac368767d0534a658bc0223f5cda0a951940f2d6ff3f1c89a6ac401a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2025-11-29T07:39:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.683151 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.700062 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceed8db447ae237cae16d2aa37b1651eade557ea7aaab5bc7b55678ce6e36f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce9ccf35e72987d01428b7573aafacfe90fc973009657b077f40b6fe6ddecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248e5291402287c1a589a168527135bd1df15e293a7614681fc896fcbdeb9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8112226f945ad27e0c5af4d588800322977e17efc3eac9acec2ce591f30f46fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feacad8666527640ec9d6bee04a743e99e0cc9f9e9c6432c1a70bc02e85fa039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25239d7c14973e35144f9ac1b909564749220fc32b1851b68e08434a5aca7def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://178c4829fb6723552ece37b1b2a9db689e2acb44c46d095d72a2ad03af1d2460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63ac8cc3766762113c8fdfdc459f77570fc533bf636011f1e64a5c0a55b3ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-km2g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.700558 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.700659 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.700689 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.700768 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.700835 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:52Z","lastTransitionTime":"2025-11-29T07:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.718786 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:52Z 
is after 2025-08-24T17:21:41Z" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.732824 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d047b5f2e979d7eebec46b45ef363e94e99bd7311214872c66fcce39cfd411db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.751045 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"262bf41d-eaf0-42b7-946d-3223857ca705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbfc45764b29055dcafdee2e3b77893c35b23cfa3eaa9125f01850baef2fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491
cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://508f7cb88a5b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9f1344bd7c3a1bc53870f7e5f5f2ca4993df112a24aa474655cc07670e22c48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06202234f21aa27a3dc67978cc9f82738d7526100bce3fd640be08131413f69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.766813 4795 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.779665 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.793999 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgs
q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.803810 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.803857 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.803869 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.804157 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.804187 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:52Z","lastTransitionTime":"2025-11-29T07:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.805480 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s9x86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcb60df4-ad48-4830-b8c7-c63621b96707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6d9cdb665b3bed924577ab173dcf48968a0b7217ca81b1f8708d733d27d3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s9x86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.824141 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.841684 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.855255 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4d032ee6e86c2b61a59ea1477aa6d97a454c754888c37db24bc49d921eacd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:39:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.868763 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://148610b20c1dfe43c3e5443cda520fa95ed2823afcaa880dbe879b6d93dc8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3aac368767d0534a658bc0223f5cda0a951940f2d6ff3f1c89a6ac401a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.887301 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceed8db447ae237cae16d2aa37b1651eade557ea7aaab5bc7b55678ce6e36f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce9ccf35e72987d01428b7573aafacfe90fc973009657b077f40b6fe6ddecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248e5291402287c1a589a168527135bd1df15e293a7614681fc896fcbdeb9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8112226f945ad27e0c5af4d588800322977e17efc3eac9acec2ce591f30f46fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feacad8666527640ec9d6bee04a743e99e0cc9f9e9c6432c1a70bc02e85fa039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25239d7c14973e35144f9ac1b909564749220fc32b1851b68e08434a5aca7def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://178c4829fb6723552ece37b1b2a9db689e2acb44c46d095d72a2ad03af1d2460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63ac8cc3766762113c8fdfdc459f77570fc533bf636011f1e64a5c0a55b3ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-km2g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.899438 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.906938 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.907060 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.907080 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.907104 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.907123 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:52Z","lastTransitionTime":"2025-11-29T07:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.914117 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d047b5f2e979d7eebec46b45ef363e94e99bd7311214872c66fcce39cfd411db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.929988 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.944129 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgs
q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.954640 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s9x86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcb60df4-ad48-4830-b8c7-c63621b96707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6d9cdb665b3bed924577ab173dcf48968a0b7217ca81b1f8708d733d27d3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520e
d63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s9x86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.968816 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"262bf41d-eaf0-42b7-946d-3223857ca705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbfc45764b29055dcafdee2e3b77893c35b23cfa3eaa9125f01850baef2fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://508f7cb88a5b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9f1344bd7c3a1bc53870f7e5f5f2ca4993df112a24aa474655cc07670e22c48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06202234f21aa27a3dc67978cc9f82738d7526100bce3fd640be08131413f69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:52 crc kubenswrapper[4795]: I1129 07:39:52.984792 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.003839 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.013540 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.013645 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.013669 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.013701 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.013721 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:53Z","lastTransitionTime":"2025-11-29T07:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.047877 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.117305 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.117393 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.117441 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 
07:39:53.117469 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.117487 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:53Z","lastTransitionTime":"2025-11-29T07:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.221077 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.221112 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.221124 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.221140 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.221149 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:53Z","lastTransitionTime":"2025-11-29T07:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.246782 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5"] Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.247735 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.249816 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.250240 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.280266 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceed8db447ae237cae16d2aa37b1651eade557ea7aaab5bc7b55678ce6e36f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce9ccf35e72987d01428b7573aafacfe90fc973009657b077f40b6fe6ddecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\"
:{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248e5291402287c1a589a168527135bd1df15e293a7614681fc896fcbdeb9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8112226f945ad27e0c5af4d588800322977e17efc3eac9acec2ce591f30f46fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feacad8666527640ec9d6bee04a743e99e0cc9f9e9c6432c1a70bc02e85fa039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25239d7c14973e35144f9ac1b909564749220fc32b1851b68e08434a5aca7def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174
f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://178c4829fb6723552ece37b1b2a9db689e2acb44c46d095d72a2ad03af1d2460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\
\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63ac8cc3766762113c8fdfdc459f77570fc533bf636011f1e64a5c0a55b3ee0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-km2g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.301926 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.316428 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.324265 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.324336 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.324351 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.324370 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.324384 4795 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:53Z","lastTransitionTime":"2025-11-29T07:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.330898 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4d032ee6e86c2b61a59ea1477aa6d97a454c754888c37db24bc49d921eacd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.341149 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6472ebbc-939b-4dd0-8b03-110cb9811484-env-overrides\") pod \"ovnkube-control-plane-749d76644c-qcmb5\" (UID: \"6472ebbc-939b-4dd0-8b03-110cb9811484\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.341233 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qwxp\" (UniqueName: \"kubernetes.io/projected/6472ebbc-939b-4dd0-8b03-110cb9811484-kube-api-access-6qwxp\") pod \"ovnkube-control-plane-749d76644c-qcmb5\" (UID: \"6472ebbc-939b-4dd0-8b03-110cb9811484\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.341293 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6472ebbc-939b-4dd0-8b03-110cb9811484-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-qcmb5\" (UID: \"6472ebbc-939b-4dd0-8b03-110cb9811484\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.341332 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6472ebbc-939b-4dd0-8b03-110cb9811484-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-qcmb5\" (UID: \"6472ebbc-939b-4dd0-8b03-110cb9811484\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.347275 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://148610b20c1dfe43c3e5443cda520fa95ed2823afcaa880dbe879b6d93dc8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3aac368767d0534a658bc0223f5cda0a951940f2d6ff3f1c89a6ac401a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.362648 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/en
trypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.375960 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d047b5f2e979d7eebec46b45ef363e94e99bd7311214872c66fcce39cfd411db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.390186 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"262bf41d-eaf0-42b7-946d-3223857ca705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbfc45764b29055dcafdee2e3b77893c35b23cfa3eaa9125f01850baef2fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://508f7cb88a5b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9f1344bd7c3a1bc53870f7e5f5f2ca4993df112a24aa474655cc07670e22c48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06202234f21aa27a3dc67978cc9f82738d7526100bce3fd640be08131413f69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.408321 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.422026 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.427277 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.427339 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.427357 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.427382 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.427396 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:53Z","lastTransitionTime":"2025-11-29T07:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.442438 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6472ebbc-939b-4dd0-8b03-110cb9811484-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-qcmb5\" (UID: \"6472ebbc-939b-4dd0-8b03-110cb9811484\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.442585 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6472ebbc-939b-4dd0-8b03-110cb9811484-env-overrides\") pod \"ovnkube-control-plane-749d76644c-qcmb5\" (UID: \"6472ebbc-939b-4dd0-8b03-110cb9811484\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.442649 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qwxp\" (UniqueName: \"kubernetes.io/projected/6472ebbc-939b-4dd0-8b03-110cb9811484-kube-api-access-6qwxp\") pod \"ovnkube-control-plane-749d76644c-qcmb5\" (UID: \"6472ebbc-939b-4dd0-8b03-110cb9811484\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.442702 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6472ebbc-939b-4dd0-8b03-110cb9811484-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-qcmb5\" (UID: \"6472ebbc-939b-4dd0-8b03-110cb9811484\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.443787 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6472ebbc-939b-4dd0-8b03-110cb9811484-env-overrides\") pod \"ovnkube-control-plane-749d76644c-qcmb5\" (UID: \"6472ebbc-939b-4dd0-8b03-110cb9811484\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.444233 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6472ebbc-939b-4dd0-8b03-110cb9811484-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-qcmb5\" (UID: \"6472ebbc-939b-4dd0-8b03-110cb9811484\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.446394 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b2
54ca7f6b402570dbb791f60e038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-1
1-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.452272 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6472ebbc-939b-4dd0-8b03-110cb9811484-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-qcmb5\" (UID: \"6472ebbc-939b-4dd0-8b03-110cb9811484\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.463841 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s9x86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcb60df4-ad48-4830-b8c7-c63621b96707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6d9cdb665b3bed924577ab173dcf48968a0b7217ca81b1f8708d733d27d3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s9x86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.466522 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qwxp\" (UniqueName: \"kubernetes.io/projected/6472ebbc-939b-4dd0-8b03-110cb9811484-kube-api-access-6qwxp\") pod \"ovnkube-control-plane-749d76644c-qcmb5\" (UID: \"6472ebbc-939b-4dd0-8b03-110cb9811484\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.483496 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6472ebbc-939b-4dd0-8b03-110cb9811484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qcmb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.499771 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.513955 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.530338 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.530388 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.530401 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.530423 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.530439 4795 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:53Z","lastTransitionTime":"2025-11-29T07:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.563649 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.568433 4795 generic.go:334] "Generic (PLEG): container finished" podID="ceab872d-7a73-44d5-936e-3dd17facf399" containerID="1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d" exitCode=0 Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.568803 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" event={"ID":"ceab872d-7a73-44d5-936e-3dd17facf399","Type":"ContainerDied","Data":"1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d"} Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.569158 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.569275 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.591129 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.602325 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 
07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.605872 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d047b5f2e979d7eebec46b45ef363e94e99bd7311214872c66fcce39cfd411db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.619182 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"262bf41d-eaf0-42b7-946d-3223857ca705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbfc45764b29055dcafdee2e3b77893c35b23cfa3eaa9125f01850baef2fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://508f7cb88a5b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9f1344bd7c3a1bc53870f7e5f5f2ca4993df112a24aa474655cc07670e22c48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06202234f21aa27a3dc67978cc9f82738d7526100bce3fd640be08131413f69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.634395 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:53 crc kubenswrapper[4795]: 
I1129 07:39:53.634461 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.634480 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.634506 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.634519 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:53Z","lastTransitionTime":"2025-11-29T07:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.635113 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.650866 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.677030 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.694773 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s9x86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcb60df4-ad48-4830-b8c7-c63621b96707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6d9cdb665b3bed924577ab173dcf48968a0b7217ca81b1f8708d733d27d3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s9x86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.706283 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6472ebbc-939b-4dd0-8b03-110cb9811484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qcmb5\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.717214 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.729895 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.739255 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.739297 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.739307 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.739322 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.739334 4795 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:53Z","lastTransitionTime":"2025-11-29T07:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.748923 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.766788 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.781014 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4d032ee6e86c2b61a59ea1477aa6d97a454c754888c37db24bc49d921eacd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.794101 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://148610b20c1dfe43c3e5443cda520fa95ed2823afcaa880dbe879b6d93dc8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3aac368767d0534a658bc0223f5cda0a951940f2d6ff3f1c89a6ac401a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.815092 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceed8db447ae237cae16d2aa37b1651eade557ea7aaab5bc7b55678ce6e36f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce9ccf35e72987d01428b7573aafacfe90fc973009657b077f40b6fe6ddecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248e5291402287c1a589a168527135bd1df15e293a7614681fc896fcbdeb9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8112226f945ad27e0c5af4d588800322977e17efc3eac9acec2ce591f30f46fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feacad8666527640ec9d6bee04a743e99e0cc9f9e9c6432c1a70bc02e85fa039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25239d7c14973e35144f9ac1b909564749220fc32b1851b68e08434a5aca7def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://178c4829fb6723552ece37b1b2a9db689e2acb44c46d095d72a2ad03af1d2460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63ac8cc3766762113c8fdfdc459f77570fc533bf636011f1e64a5c0a55b3ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-km2g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.830125 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.831575 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.841991 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.842039 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.842051 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.842071 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.842085 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:53Z","lastTransitionTime":"2025-11-29T07:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.844263 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.859492 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4d032ee6e86c2b61a59ea1477aa6d97a454c754888c37db24bc49d921eacd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\
\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.872292 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://148610b20c1dfe43c3e5443cda520fa95ed2823afcaa880dbe879b6d93dc8ce3\\\",\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3aac368767d0534a658bc0223f5cda0a951940f2d6ff3f1c89a6ac401a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.893784 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceed8db447ae237cae16d2aa37b1651eade557ea7aaab5bc7b55678ce6e36f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce9ccf35e72987d01428b7573aafacfe90fc973009657b077f40b6fe6ddecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248e5291402287c1a589a168527135bd1df15e293a7614681fc896fcbdeb9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8112226f945ad27e0c5af4d588800322977e17efc3eac9acec2ce591f30f46fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feacad8666527640ec9d6bee04a743e99e0cc9f9e9c6432c1a70bc02e85fa039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25239d7c14973e35144f9ac1b909564749220fc32b1851b68e08434a5aca7def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://178c4829fb6723552ece37b1b2a9db689e2acb44c46d095d72a2ad03af1d2460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63ac8cc3766762113c8fdfdc459f77570fc533bf636011f1e64a5c0a55b3ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-km2g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.908415 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.921477 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d047b5f2e979d7eebec46b45ef363e94e99bd7311214872c66fcce39cfd411db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512
335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.935026 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.944666 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.944729 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:53 crc 
kubenswrapper[4795]: I1129 07:39:53.944744 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.944764 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.945191 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:53Z","lastTransitionTime":"2025-11-29T07:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.960981 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b2
54ca7f6b402570dbb791f60e038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-1
1-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.975532 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s9x86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcb60df4-ad48-4830-b8c7-c63621b96707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6d9cdb665b3bed924577ab173dcf48968a0b7217ca81b1f8708d733d27d3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s9x86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:53 crc kubenswrapper[4795]: I1129 07:39:53.991553 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6472ebbc-939b-4dd0-8b03-110cb9811484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qcmb5\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:53Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.005938 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"262bf41d-eaf0-42b7-946d-3223857ca705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbfc45764b29055dcafdee2e3b77893c35b23cfa3eaa9125f01850baef2fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://508f7cb88a5b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9f1344bd7c3a1bc53870f7e5f5f2ca4993df112a24aa474655cc07670e22c48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06202234f21aa27a3dc67978cc9f82738d7526100bce3fd640be08131413f69\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.020361 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.037375 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.048801 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.048849 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.048896 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.048917 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.048933 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:54Z","lastTransitionTime":"2025-11-29T07:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.053005 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.069266 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.070714 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.070765 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.070783 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.070811 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.070829 4795 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:54Z","lastTransitionTime":"2025-11-29T07:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.085477 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: E1129 07:39:54.086655 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8cd0c036-22eb-405e-b57e-ec0c4424780e\\\",\\\"systemUUID\\\":\\\"bd085386-a70e-485f-9f18-00b3aef4bcca\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.091586 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.091650 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.091662 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.091679 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.091692 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:54Z","lastTransitionTime":"2025-11-29T07:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.101158 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: E1129 07:39:54.110255 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8cd0c036-22eb-405e-b57e-ec0c4424780e\\\",\\\"systemUUID\\\":\\\"bd085386-a70e-485f-9f18-00b3aef4bcca\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.115314 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.115363 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.115378 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.115401 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.115418 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:54Z","lastTransitionTime":"2025-11-29T07:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.120159 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: E1129 07:39:54.131115 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8cd0c036-22eb-405e-b57e-ec0c4424780e\\\",\\\"systemUUID\\\":\\\"bd085386-a70e-485f-9f18-00b3aef4bcca\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.134072 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4d032ee6e86c2b61a59ea1477aa6d97a454c754888c37db24bc49d921eacd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f
4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.135570 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.135634 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.135650 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.135671 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.135685 4795 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:54Z","lastTransitionTime":"2025-11-29T07:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.147908 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://148610b20c1dfe43c3e5443cda520fa95ed2823afcaa880dbe879b6d93dc8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3aac368767d0534a658bc0223f5cda0a951940f2d6ff3f1c89a6ac401a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: E1129 07:39:54.149758 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8cd0c036-22eb-405e-b57e-ec0c4424780e\\\",\\\"systemUUID\\\":\\\"bd085386-a70e-485f-9f18-00b3aef4bcca\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.153970 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.154030 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.154043 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.154062 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.154077 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:54Z","lastTransitionTime":"2025-11-29T07:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:54 crc kubenswrapper[4795]: E1129 07:39:54.171548 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8cd0c036-22eb-405e-b57e-ec0c4424780e\\\",\\\"systemUUID\\\":\\\"bd085386-a70e-485f-9f18-00b3aef4bcca\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: E1129 07:39:54.171760 4795 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.174215 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.174253 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.174265 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.174288 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.174304 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:54Z","lastTransitionTime":"2025-11-29T07:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.176684 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceed8db447ae237cae16d2aa37b1651eade557ea7aaab5bc7b55678ce6e36f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce9ccf35e72987d01428b7573aafacfe90fc973009657b077f40b6fe6ddecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248e5291402287c1a589a168527135bd1df15e293a7614681fc896fcbdeb9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8112226f945ad27e0c5af4d588800322977e17efc3eac9acec2ce591f30f46fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feacad8666527640ec9d6bee04a743e99e0cc9f9e9c6432c1a70bc02e85fa039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25239d7c14973e35144f9ac1b909564749220fc32b1851b68e08434a5aca7def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://178c4829fb6723552ece37b1b2a9db689e2acb44c46d095d72a2ad03af1d2460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63ac8cc3766762113c8fdfdc459f77570fc533bf636011f1e64a5c0a55b3ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-km2g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.190777 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d047b5f2e979d7eebec46b45ef363e94e99bd7311214872c66fcce39cfd411db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.210019 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.225966 4795 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.240796 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.257455 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.268548 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s9x86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcb60df4-ad48-4830-b8c7-c63621b96707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6d9cdb665b3bed924577ab173dcf48968a0b7217ca81b1f8708d733d27d3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s9x86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.275735 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.275735 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:39:54 crc kubenswrapper[4795]: E1129 07:39:54.275917 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.275990 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:39:54 crc kubenswrapper[4795]: E1129 07:39:54.276037 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:39:54 crc kubenswrapper[4795]: E1129 07:39:54.276144 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.277859 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.277895 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.277906 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.277923 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.277934 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:54Z","lastTransitionTime":"2025-11-29T07:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.282410 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6472ebbc-939b-4dd0-8b03-110cb9811484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qcmb5\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.295768 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"262bf41d-eaf0-42b7-946d-3223857ca705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbfc45764b29055dcafdee2e3b77893c35b23cfa3eaa9125f01850baef2fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://508f7cb88a5b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9f1344bd7c3a1bc53870f7e5f5f2ca4993df112a24aa474655cc07670e22c48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06202234f21aa27a3dc67978cc9f82738d7526100bce3fd640be08131413f69\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.310377 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.324755 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.340045 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.353276 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s9x86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcb60df4-ad48-4830-b8c7-c63621b96707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6d9cdb665b3bed924577ab173dcf48968a0b7217ca81b1f8708d733d27d3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s9x86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.366868 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6472ebbc-939b-4dd0-8b03-110cb9811484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qcmb5\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.380649 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.380697 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.380707 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.380724 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.380734 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:54Z","lastTransitionTime":"2025-11-29T07:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.396252 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"262bf41d-eaf0-42b7-946d-3223857ca705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbfc45764b29055dcafdee2e3b77893c35b23cfa3eaa9125f01850baef2fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://508f7cb88a5
b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9f1344bd7c3a1bc53870f7e5f5f2ca4993df112a24aa474655cc07670e22c48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06202234f21aa27a3dc67978cc9f82738d7526100bce3fd640be08131413f69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.434025 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.453999 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.454172 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.454204 4795 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.454238 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.454259 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:39:54 crc kubenswrapper[4795]: E1129 07:39:54.454287 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:40:10.454261042 +0000 UTC m=+56.429836832 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:39:54 crc kubenswrapper[4795]: E1129 07:39:54.454389 4795 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:39:54 crc kubenswrapper[4795]: E1129 07:39:54.454423 4795 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:39:54 crc kubenswrapper[4795]: E1129 07:39:54.454460 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:40:10.454439567 +0000 UTC m=+56.430015347 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:39:54 crc kubenswrapper[4795]: E1129 07:39:54.454480 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-11-29 07:40:10.454471788 +0000 UTC m=+56.430047578 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:39:54 crc kubenswrapper[4795]: E1129 07:39:54.454524 4795 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:39:54 crc kubenswrapper[4795]: E1129 07:39:54.454536 4795 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:39:54 crc kubenswrapper[4795]: E1129 07:39:54.454547 4795 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:39:54 crc kubenswrapper[4795]: E1129 07:39:54.454572 4795 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:39:54 crc kubenswrapper[4795]: E1129 07:39:54.454575 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-29 07:40:10.4545646 +0000 UTC m=+56.430140390 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:39:54 crc kubenswrapper[4795]: E1129 07:39:54.454607 4795 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:39:54 crc kubenswrapper[4795]: E1129 07:39:54.454628 4795 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:39:54 crc kubenswrapper[4795]: E1129 07:39:54.454656 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-29 07:40:10.454649203 +0000 UTC m=+56.430224993 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.475828 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.483180 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.483214 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.483223 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.483238 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.483246 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:54Z","lastTransitionTime":"2025-11-29T07:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.515484 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.557992 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.576025 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" event={"ID":"ceab872d-7a73-44d5-936e-3dd17facf399","Type":"ContainerStarted","Data":"2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c"} Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.578048 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" event={"ID":"6472ebbc-939b-4dd0-8b03-110cb9811484","Type":"ContainerStarted","Data":"ad977e9d70f50688539c9d0c9cf08789584d08f77792d3dde1a619e9d958cff2"} Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.578234 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" event={"ID":"6472ebbc-939b-4dd0-8b03-110cb9811484","Type":"ContainerStarted","Data":"f83f2702785f956d1af8275629ced02d9348bd5ad79cd24f3a7b8298fa6f645e"} Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.585163 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.585202 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.585211 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.585231 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.585242 4795 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:54Z","lastTransitionTime":"2025-11-29T07:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.594653 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4d032ee6e86c2b61a59ea1477aa6d97a454c754888c37db24bc49d921eacd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.634313 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://148610b20c1dfe43c3e5443cda520fa95ed2823afcaa880dbe879b6d93dc8ce3\\\",\\\"image\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3aac368767d0534a658bc0223f5cda0a951940f2d6ff3f1c89a6ac401a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.688350 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.688394 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.688403 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.688420 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.688431 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:54Z","lastTransitionTime":"2025-11-29T07:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.689100 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceed8db447ae237cae16d2aa37b1651eade557ea7aaab5bc7b55678ce6e36f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce9ccf35e72987d01428b7573aafacfe90fc973009657b077f40b6fe6ddecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248e5291402287c1a589a168527135bd1df15e293a7614681fc896fcbdeb9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8112226f945ad27e0c5af4d588800322977e17efc3eac9acec2ce591f30f46fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feacad8666527640ec9d6bee04a743e99e0cc9f9e9c6432c1a70bc02e85fa039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25239d7c14973e35144f9ac1b909564749220fc32b1851b68e08434a5aca7def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://178c4829fb6723552ece37b1b2a9db689e2acb44c46d095d72a2ad03af1d2460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63ac8cc3766762113c8fdfdc459f77570fc533bf636011f1e64a5c0a55b3ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-km2g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.713746 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d047b5f2e979d7eebec46b45ef363e94e99bd7311214872c66fcce39cfd411db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.737456 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-bvmzq"] Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.738110 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:39:54 crc kubenswrapper[4795]: E1129 07:39:54.738208 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.757517 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"
os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.791491 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.791530 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.791540 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.791558 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.791570 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:54Z","lastTransitionTime":"2025-11-29T07:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.798733 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.834314 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.858739 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9be66670-47c2-4d05-bf3d-59ae6f4ff53b-metrics-certs\") pod \"network-metrics-daemon-bvmzq\" (UID: \"9be66670-47c2-4d05-bf3d-59ae6f4ff53b\") " pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.858802 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4h7m\" (UniqueName: \"kubernetes.io/projected/9be66670-47c2-4d05-bf3d-59ae6f4ff53b-kube-api-access-c4h7m\") pod \"network-metrics-daemon-bvmzq\" (UID: \"9be66670-47c2-4d05-bf3d-59ae6f4ff53b\") " pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.876776 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4d032ee6e86c2b61a59ea1477aa6d97a454c754888c37db24bc49d921eacd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.894974 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.895023 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.895037 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.895058 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.895075 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:54Z","lastTransitionTime":"2025-11-29T07:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.917135 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://148610b20c1dfe43c3e5443cda520fa95ed2823afcaa880dbe879b6d93dc8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3aac368767d0534a658bc0223f5cda0a951940f2d6ff3f1c89a6ac401a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.959635 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4h7m\" (UniqueName: \"kubernetes.io/projected/9be66670-47c2-4d05-bf3d-59ae6f4ff53b-kube-api-access-c4h7m\") pod 
\"network-metrics-daemon-bvmzq\" (UID: \"9be66670-47c2-4d05-bf3d-59ae6f4ff53b\") " pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.959720 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9be66670-47c2-4d05-bf3d-59ae6f4ff53b-metrics-certs\") pod \"network-metrics-daemon-bvmzq\" (UID: \"9be66670-47c2-4d05-bf3d-59ae6f4ff53b\") " pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:39:54 crc kubenswrapper[4795]: E1129 07:39:54.959836 4795 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:39:54 crc kubenswrapper[4795]: E1129 07:39:54.959886 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9be66670-47c2-4d05-bf3d-59ae6f4ff53b-metrics-certs podName:9be66670-47c2-4d05-bf3d-59ae6f4ff53b nodeName:}" failed. No retries permitted until 2025-11-29 07:39:55.459872434 +0000 UTC m=+41.435448224 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9be66670-47c2-4d05-bf3d-59ae6f4ff53b-metrics-certs") pod "network-metrics-daemon-bvmzq" (UID: "9be66670-47c2-4d05-bf3d-59ae6f4ff53b") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.962453 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceed8db447ae237cae16d2aa37b1651eade557ea7aaab5bc7b55678ce6e36f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce9ccf35e72987d01428b7573aafacfe90fc973009657b077f40b6fe6ddecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248e5291402287c1a589a168527135bd1df15e293a7614681fc896fcbdeb9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8112226f945ad27e0c5af4d588800322977e17efc3eac9acec2ce591f30f46fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feacad8666527640ec9d6bee04a743e99e0cc9f9e9c6432c1a70bc02e85fa039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25239d7c14973e35144f9ac1b909564749220fc32b1851b68e08434a5aca7def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://178c4829fb6723552ece37b1b2a9db689e2acb44c46d095d72a2ad03af1d2460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63ac8cc3766762113c8fdfdc459f77570fc533bf636011f1e64a5c0a55b3ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-km2g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.979445 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4h7m\" (UniqueName: \"kubernetes.io/projected/9be66670-47c2-4d05-bf3d-59ae6f4ff53b-kube-api-access-c4h7m\") pod \"network-metrics-daemon-bvmzq\" (UID: \"9be66670-47c2-4d05-bf3d-59ae6f4ff53b\") " pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.998197 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.998258 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.998270 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.998293 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:54 crc kubenswrapper[4795]: I1129 07:39:54.998307 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:54Z","lastTransitionTime":"2025-11-29T07:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.014099 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bvmzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9be66670-47c2-4d05-bf3d-59ae6f4ff53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bvmzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:55Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:55 crc 
kubenswrapper[4795]: I1129 07:39:55.060544 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:55Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:55 crc 
kubenswrapper[4795]: I1129 07:39:55.091782 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d047b5f2e979d7eebec46b45ef363e94e99bd7311214872c66fcce39cfd411db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:55Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.100866 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.100949 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.100969 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.101002 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.101023 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:55Z","lastTransitionTime":"2025-11-29T07:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.137219 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:55Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.178455 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:55Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.203579 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.203645 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.203656 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.203675 4795 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.203685 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:55Z","lastTransitionTime":"2025-11-29T07:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.213063 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s9x86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcb60df4-ad48-4830-b8c7-c63621b96707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6d9cdb665b3bed924577ab173dcf48968a0b7217ca81b1f8708d733d27d3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s9x86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:55Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.256913 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6472ebbc-939b-4dd0-8b03-110cb9811484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qcmb5\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:55Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.295134 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"262bf41d-eaf0-42b7-946d-3223857ca705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbfc45764b29055dcafdee2e3b77893c35b23cfa3eaa9125f01850baef2fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://508f7cb88a5b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9f1344bd7c3a1bc53870f7e5f5f2ca4993df112a24aa474655cc07670e22c48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06202234f21aa27a3dc67978cc9f82738d7526100bce3fd640be08131413f69\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:55Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.306402 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.306461 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.306476 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.306499 4795 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.306524 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:55Z","lastTransitionTime":"2025-11-29T07:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.339851 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:55Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.378922 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:55Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.409880 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.409927 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.409939 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.409963 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.409980 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:55Z","lastTransitionTime":"2025-11-29T07:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.418218 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:55Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.465914 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9be66670-47c2-4d05-bf3d-59ae6f4ff53b-metrics-certs\") pod \"network-metrics-daemon-bvmzq\" (UID: \"9be66670-47c2-4d05-bf3d-59ae6f4ff53b\") " pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:39:55 crc kubenswrapper[4795]: E1129 07:39:55.466119 4795 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object 
"openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:39:55 crc kubenswrapper[4795]: E1129 07:39:55.466229 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9be66670-47c2-4d05-bf3d-59ae6f4ff53b-metrics-certs podName:9be66670-47c2-4d05-bf3d-59ae6f4ff53b nodeName:}" failed. No retries permitted until 2025-11-29 07:39:56.466207126 +0000 UTC m=+42.441782986 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9be66670-47c2-4d05-bf3d-59ae6f4ff53b-metrics-certs") pod "network-metrics-daemon-bvmzq" (UID: "9be66670-47c2-4d05-bf3d-59ae6f4ff53b") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.514180 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.514240 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.514256 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.514278 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.514294 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:55Z","lastTransitionTime":"2025-11-29T07:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.621823 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.621882 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.621894 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.621912 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.621923 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:55Z","lastTransitionTime":"2025-11-29T07:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.628992 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"262bf41d-eaf0-42b7-946d-3223857ca705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbfc45764b29055dcafdee2e3b77893c35b23cfa3eaa9125f01850baef2fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://508f7cb88a5
b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9f1344bd7c3a1bc53870f7e5f5f2ca4993df112a24aa474655cc07670e22c48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06202234f21aa27a3dc67978cc9f82738d7526100bce3fd640be08131413f69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:55Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.642726 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:55Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.657811 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:55Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.675048 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:55Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.689233 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s9x86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcb60df4-ad48-4830-b8c7-c63621b96707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6d9cdb665b3bed924577ab173dcf48968a0b7217ca81b1f8708d733d27d3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s9x86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:55Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.703737 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6472ebbc-939b-4dd0-8b03-110cb9811484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qcmb5\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:55Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.719883 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:55Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.725642 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.725687 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.725698 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.725716 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.725727 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:55Z","lastTransitionTime":"2025-11-29T07:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.737205 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:55Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.781311 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceed8db447ae237cae16d2aa37b1651eade557ea7aaab5bc7b55678ce6e36f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce9ccf35e72987d01428b7573aafacfe90fc973009657b077f40b6fe6ddecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248e5291402287c1a589a168527135bd1df15e293a7614681fc896fcbdeb9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8112226f945ad27e0c5af4d588800322977e17efc3eac9acec2ce591f30f46fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feacad8666527640ec9d6bee04a743e99e0cc9f9e9c6432c1a70bc02e85fa039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25239d7c14973e35144f9ac1b909564749220fc32b1851b68e08434a5aca7def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://178c4829fb6723552ece37b1b2a9db689e2acb44c46d095d72a2ad03af1d2460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63ac8cc3766762113c8fdfdc459f77570fc533bf636011f1e64a5c0a55b3ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-km2g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:55Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.818468 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"res
ource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T0
7:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:55Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.828813 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.828867 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.828878 4795 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.828897 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.828910 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:55Z","lastTransitionTime":"2025-11-29T07:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.855723 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c
04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:55Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.893151 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4d032ee6e86c2b61a59ea1477aa6d97a454c754888c37db24bc49d921eacd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:39:55Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.931623 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.931687 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.931703 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.931728 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.931746 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:55Z","lastTransitionTime":"2025-11-29T07:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.935903 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://148610b20c1dfe43c3e5443cda520fa95ed2823afcaa880dbe879b6d93dc8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3aac368767d0534a658bc0223f5cda0a951940f2d6ff3f1c89a6ac401a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:55Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:55 crc kubenswrapper[4795]: I1129 07:39:55.974965 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bvmzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9be66670-47c2-4d05-bf3d-59ae6f4ff53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bvmzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:55Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:56 crc 
kubenswrapper[4795]: I1129 07:39:56.015340 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:56Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:56 crc 
kubenswrapper[4795]: I1129 07:39:56.034797 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.034861 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.034872 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.034892 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.034905 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:56Z","lastTransitionTime":"2025-11-29T07:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.053758 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d047b5f2e979d7eebec46b45ef363e94e99bd7311214872c66fcce39cfd411db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:56Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.138063 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.138122 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.138131 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.138145 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.138156 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:56Z","lastTransitionTime":"2025-11-29T07:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.240868 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.240983 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.241004 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.241035 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.241059 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:56Z","lastTransitionTime":"2025-11-29T07:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.274997 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.275075 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.275007 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.275259 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:39:56 crc kubenswrapper[4795]: E1129 07:39:56.275388 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:39:56 crc kubenswrapper[4795]: E1129 07:39:56.275579 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:39:56 crc kubenswrapper[4795]: E1129 07:39:56.275687 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:39:56 crc kubenswrapper[4795]: E1129 07:39:56.275759 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.344683 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.344756 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.344772 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.344797 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.344814 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:56Z","lastTransitionTime":"2025-11-29T07:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.448318 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.448376 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.448390 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.448411 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.448425 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:56Z","lastTransitionTime":"2025-11-29T07:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.476771 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9be66670-47c2-4d05-bf3d-59ae6f4ff53b-metrics-certs\") pod \"network-metrics-daemon-bvmzq\" (UID: \"9be66670-47c2-4d05-bf3d-59ae6f4ff53b\") " pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:39:56 crc kubenswrapper[4795]: E1129 07:39:56.477002 4795 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:39:56 crc kubenswrapper[4795]: E1129 07:39:56.477142 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9be66670-47c2-4d05-bf3d-59ae6f4ff53b-metrics-certs podName:9be66670-47c2-4d05-bf3d-59ae6f4ff53b nodeName:}" failed. No retries permitted until 2025-11-29 07:39:58.477112491 +0000 UTC m=+44.452688301 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9be66670-47c2-4d05-bf3d-59ae6f4ff53b-metrics-certs") pod "network-metrics-daemon-bvmzq" (UID: "9be66670-47c2-4d05-bf3d-59ae6f4ff53b") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.551400 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.551462 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.551480 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.551506 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.551524 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:56Z","lastTransitionTime":"2025-11-29T07:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.618284 4795 generic.go:334] "Generic (PLEG): container finished" podID="ceab872d-7a73-44d5-936e-3dd17facf399" containerID="2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c" exitCode=0 Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.618322 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" event={"ID":"ceab872d-7a73-44d5-936e-3dd17facf399","Type":"ContainerDied","Data":"2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c"} Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.643657 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba
93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\"
:\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:56Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.655736 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.655864 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.655903 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.656543 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.656623 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:56Z","lastTransitionTime":"2025-11-29T07:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.662369 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d047b5f2e979d7eebec46b45ef363e94e99bd7311214872c66fcce39cfd411db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:56Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.682364 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"262bf41d-eaf0-42b7-946d-3223857ca705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbfc45764b29055dcafdee2e3b77893c35b23cfa3eaa9125f01850baef2fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a793
79b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://508f7cb88a5b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9f1344bd7c3a1bc53870f7e5f5f2ca4993df112a24aa474655cc07670e22c48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"s
tate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06202234f21aa27a3dc67978cc9f82738d7526100bce3fd640be08131413f69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:56Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.700458 4795 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:56Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.717897 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:56Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.749628 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:56Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.761395 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.761452 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.761468 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.761489 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.761503 4795 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:56Z","lastTransitionTime":"2025-11-29T07:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.767501 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s9x86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcb60df4-ad48-4830-b8c7-c63621b96707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6d9cdb665b3bed924577ab173dcf48968a0b7217ca81b1f8708d733d27d3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s9x86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:56Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.782823 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6472ebbc-939b-4dd0-8b03-110cb9811484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qcmb5\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:56Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.804265 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:56Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.828671 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:56Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.856265 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceed8db447ae237cae16d2aa37b1651eade557ea7aaab5bc7b55678ce6e36f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce9ccf35e72987d01428b7573aafacfe90fc973009657b077f40b6fe6ddecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248e5291402287c1a589a168527135bd1df15e293a7614681fc896fcbdeb9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8112226f945ad27e0c5af4d588800322977e17efc3eac9acec2ce591f30f46fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feacad8666527640ec9d6bee04a743e99e0cc9f9e9c6432c1a70bc02e85fa039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25239d7c14973e35144f9ac1b909564749220fc32b1851b68e08434a5aca7def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://178c4829fb6723552ece37b1b2a9db689e2acb44c46d095d72a2ad03af1d2460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63ac8cc3766762113c8fdfdc459f77570fc533bf636011f1e64a5c0a55b3ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-km2g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:56Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.864286 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.864337 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.864350 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.864370 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.864383 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:56Z","lastTransitionTime":"2025-11-29T07:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.875781 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:56Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.892960 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:56Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.908388 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4d032ee6e86c2b61a59ea1477aa6d97a454c754888c37db24bc49d921eacd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:56Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.921361 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://148610b20c1dfe43c3e5443cda520fa95ed2823afcaa880dbe879b6d93dc8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3aac368767d0534a658bc0223f5cda0a951940f2d6ff3f1c89a6ac401a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2025-11-29T07:39:56Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.933097 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bvmzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9be66670-47c2-4d05-bf3d-59ae6f4ff53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bvmzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:39:56Z is after 2025-08-24T17:21:41Z" Nov 29 07:39:56 crc 
kubenswrapper[4795]: I1129 07:39:56.967610 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.967683 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.967698 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.968143 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:56 crc kubenswrapper[4795]: I1129 07:39:56.968187 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:56Z","lastTransitionTime":"2025-11-29T07:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.072144 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.072199 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.072216 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.072242 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.072260 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:57Z","lastTransitionTime":"2025-11-29T07:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.175397 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.175480 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.175498 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.175534 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.175559 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:57Z","lastTransitionTime":"2025-11-29T07:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.278089 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.278133 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.278143 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.278155 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.278166 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:57Z","lastTransitionTime":"2025-11-29T07:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.383157 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.383216 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.383234 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.383259 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.383277 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:57Z","lastTransitionTime":"2025-11-29T07:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.486576 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.486634 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.486648 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.486664 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.486674 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:57Z","lastTransitionTime":"2025-11-29T07:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.589360 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.589390 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.589398 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.589411 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.589420 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:57Z","lastTransitionTime":"2025-11-29T07:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.693441 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.693515 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.693536 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.693562 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.693581 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:57Z","lastTransitionTime":"2025-11-29T07:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.797083 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.797141 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.797161 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.797186 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.797208 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:57Z","lastTransitionTime":"2025-11-29T07:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.900228 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.900270 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.900288 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.900307 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:57 crc kubenswrapper[4795]: I1129 07:39:57.900319 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:57Z","lastTransitionTime":"2025-11-29T07:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.003245 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.003288 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.003305 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.003327 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.003339 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:58Z","lastTransitionTime":"2025-11-29T07:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.111987 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.112039 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.112048 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.112066 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.112077 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:58Z","lastTransitionTime":"2025-11-29T07:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.214460 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.214500 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.214511 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.214527 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.214537 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:58Z","lastTransitionTime":"2025-11-29T07:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.275147 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.275262 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:39:58 crc kubenswrapper[4795]: E1129 07:39:58.275372 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.275156 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.275182 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:39:58 crc kubenswrapper[4795]: E1129 07:39:58.275528 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:39:58 crc kubenswrapper[4795]: E1129 07:39:58.275578 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:39:58 crc kubenswrapper[4795]: E1129 07:39:58.275695 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.318788 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.318836 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.318850 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.318869 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.318886 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:58Z","lastTransitionTime":"2025-11-29T07:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.421930 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.421991 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.422005 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.422024 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.422038 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:58Z","lastTransitionTime":"2025-11-29T07:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.504428 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9be66670-47c2-4d05-bf3d-59ae6f4ff53b-metrics-certs\") pod \"network-metrics-daemon-bvmzq\" (UID: \"9be66670-47c2-4d05-bf3d-59ae6f4ff53b\") " pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:39:58 crc kubenswrapper[4795]: E1129 07:39:58.504653 4795 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:39:58 crc kubenswrapper[4795]: E1129 07:39:58.504728 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9be66670-47c2-4d05-bf3d-59ae6f4ff53b-metrics-certs podName:9be66670-47c2-4d05-bf3d-59ae6f4ff53b nodeName:}" failed. No retries permitted until 2025-11-29 07:40:02.50471135 +0000 UTC m=+48.480287140 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9be66670-47c2-4d05-bf3d-59ae6f4ff53b-metrics-certs") pod "network-metrics-daemon-bvmzq" (UID: "9be66670-47c2-4d05-bf3d-59ae6f4ff53b") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.524108 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.524145 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.524156 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.524170 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.524179 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:58Z","lastTransitionTime":"2025-11-29T07:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.631159 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.631217 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.631229 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.631249 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.631264 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:58Z","lastTransitionTime":"2025-11-29T07:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.634362 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" event={"ID":"6472ebbc-939b-4dd0-8b03-110cb9811484","Type":"ContainerStarted","Data":"b2497113482ab716b22a90b7872f2837bccd484b765e979c2c67a394f82de688"} Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.734154 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.734201 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.734214 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.734235 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.734250 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:58Z","lastTransitionTime":"2025-11-29T07:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.836979 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.837027 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.837040 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.837056 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.837067 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:58Z","lastTransitionTime":"2025-11-29T07:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.940143 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.940203 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.940218 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.940237 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:58 crc kubenswrapper[4795]: I1129 07:39:58.940249 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:58Z","lastTransitionTime":"2025-11-29T07:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.043203 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.043257 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.043269 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.043288 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.043303 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:59Z","lastTransitionTime":"2025-11-29T07:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.145260 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.145298 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.145309 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.145323 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.145332 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:59Z","lastTransitionTime":"2025-11-29T07:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.248498 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.248535 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.248546 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.248562 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.248573 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:59Z","lastTransitionTime":"2025-11-29T07:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.350674 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.350713 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.350724 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.350741 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.350750 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:59Z","lastTransitionTime":"2025-11-29T07:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.452760 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.452806 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.452820 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.452837 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.452849 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:59Z","lastTransitionTime":"2025-11-29T07:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.555349 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.555639 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.555711 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.555802 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.555894 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:59Z","lastTransitionTime":"2025-11-29T07:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.658835 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.658876 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.658888 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.658906 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.658918 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:59Z","lastTransitionTime":"2025-11-29T07:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.761646 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.761684 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.761696 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.761715 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.761729 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:59Z","lastTransitionTime":"2025-11-29T07:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.864214 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.864256 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.864268 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.864288 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.864303 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:59Z","lastTransitionTime":"2025-11-29T07:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.967095 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.967341 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.967446 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.967538 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:39:59 crc kubenswrapper[4795]: I1129 07:39:59.967652 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:39:59Z","lastTransitionTime":"2025-11-29T07:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.069566 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.069645 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.069658 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.069675 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.069694 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:00Z","lastTransitionTime":"2025-11-29T07:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.173362 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.173708 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.173866 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.174031 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.174191 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:00Z","lastTransitionTime":"2025-11-29T07:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.275742 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.276064 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:40:00 crc kubenswrapper[4795]: E1129 07:40:00.276288 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.276143 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:40:00 crc kubenswrapper[4795]: E1129 07:40:00.276383 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.276128 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:40:00 crc kubenswrapper[4795]: E1129 07:40:00.276468 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:40:00 crc kubenswrapper[4795]: E1129 07:40:00.276544 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.277157 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.277180 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.277190 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.277205 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.277217 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:00Z","lastTransitionTime":"2025-11-29T07:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.379271 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.379309 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.379319 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.379334 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.379352 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:00Z","lastTransitionTime":"2025-11-29T07:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.481751 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.481824 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.481844 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.481878 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.481896 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:00Z","lastTransitionTime":"2025-11-29T07:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.584353 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.584403 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.584415 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.584437 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.584449 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:00Z","lastTransitionTime":"2025-11-29T07:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.645930 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" event={"ID":"ceab872d-7a73-44d5-936e-3dd17facf399","Type":"ContainerStarted","Data":"b771b13a7bbce9270d919a830c700a1a04bd1d82141a1487e7abb6b1def634cb"} Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.666930 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceed8db447ae237cae16d2aa37b1651eade557ea7aaab5bc7b55678ce6e36f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce9ccf35e72987d01428b7573aafacfe90fc973009657b077f40b6fe6ddecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248e5291402287c1a589a168527135bd1df15e293a7614681fc896fcbdeb9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8112226f945ad27e0c5af4d588800322977e17efc3eac9acec2ce591f30f46fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feacad8666527640ec9d6bee04a743e99e0cc9f9e9c6432c1a70bc02e85fa039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25239d7c14973e35144f9ac1b909564749220fc32b1851b68e08434a5aca7def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://178c4829fb6723552ece37b1b2a9db689e2acb44c46d095d72a2ad03af1d2460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63ac8cc3766762113c8fdfdc459f77570fc533bf636011f1e64a5c0a55b3ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-km2g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:00Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.692427 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.692470 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.692482 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.692500 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.692511 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:00Z","lastTransitionTime":"2025-11-29T07:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.701122 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:00Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.732873 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:00Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.750057 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4d032ee6e86c2b61a59ea1477aa6d97a454c754888c37db24bc49d921eacd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:00Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.760887 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://148610b20c1dfe43c3e5443cda520fa95ed2823afcaa880dbe879b6d93dc8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3aac368767d0534a658bc0223f5cda0a951940f2d6ff3f1c89a6ac401a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2025-11-29T07:40:00Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.771014 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bvmzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9be66670-47c2-4d05-bf3d-59ae6f4ff53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bvmzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:00Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:00 crc 
kubenswrapper[4795]: I1129 07:40:00.783793 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:00Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:00 crc 
kubenswrapper[4795]: I1129 07:40:00.793176 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d047b5f2e979d7eebec46b45ef363e94e99bd7311214872c66fcce39cfd411db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:00Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.795129 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.795204 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.795220 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.795243 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.795257 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:00Z","lastTransitionTime":"2025-11-29T07:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.806251 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"262bf41d-eaf0-42b7-946d-3223857ca705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbfc45764b29055dcafdee2e3b77893c35b23cfa3eaa9125f01850baef2fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://508f7cb88a5
b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9f1344bd7c3a1bc53870f7e5f5f2ca4993df112a24aa474655cc07670e22c48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06202234f21aa27a3dc67978cc9f82738d7526100bce3fd640be08131413f69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:00Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.818188 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:00Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.832916 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:00Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.851482 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b2
54ca7f6b402570dbb791f60e038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-1
1-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:00Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.866370 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s9x86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcb60df4-ad48-4830-b8c7-c63621b96707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6d9cdb665b3bed924577ab173dcf48968a0b7217ca81b1f8708d733d27d3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s9x86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:00Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.881303 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6472ebbc-939b-4dd0-8b03-110cb9811484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad977e9d70f50688539c9d0c9cf08789584d08f77792d3dde1a619e9d958cff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2497113482ab716b22a90b7872f2837bccd484b765e979c2c67a394f82de688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qcmb5\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:00Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.898460 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.898524 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.898536 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.898558 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.898571 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:00Z","lastTransitionTime":"2025-11-29T07:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.900372 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:00Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.916283 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:00Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.931529 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:00Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.942795 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d047b5f2e979d7eebec46b45ef363e94e99bd7311214872c66fcce39cfd411db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:00Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.953706 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s9x86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcb60df4-ad48-4830-b8c7-c63621b96707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6d9cdb665b3bed924577ab173dcf48968a0b7217ca81b1f8708d733d27d3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s9x86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:00Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.967205 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6472ebbc-939b-4dd0-8b03-110cb9811484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad977e9d70f50688539c9d0c9cf08789584d08f77792d3dde1a619e9d958cff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2497113482ab716b22a90b7872f2837bccd4
84b765e979c2c67a394f82de688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qcmb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:00Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.982985 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"262bf41d-eaf0-42b7-946d-3223857ca705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbfc45764b29055dcafdee2e3b77893c35b23cfa3eaa9125f01850baef2fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://508f7cb88a5b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9f1344bd7c3a1bc53870f7e5f5f2ca4993df112a24aa474655cc07670e22c48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06202234f21aa27a3dc67978cc9f82738d7526100bce3fd640be08131413f69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:00Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:00 crc kubenswrapper[4795]: I1129 07:40:00.996907 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:00Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.001160 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.001378 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.001430 4795 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.001460 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.001476 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:01Z","lastTransitionTime":"2025-11-29T07:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.011024 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.027708 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b771b13a7bbce9270d919a830c700a1a04bd1d82141a1487e7abb6b1def634cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:40:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79e6
f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.041352 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.054292 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.067202 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.079219 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4d032ee6e86c2b61a59ea1477aa6d97a454c754888c37db24bc49d921eacd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.090233 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://148610b20c1dfe43c3e5443cda520fa95ed2823afcaa880dbe879b6d93dc8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3aac368767d0534a658bc0223f5cda0a951940f2d6ff3f1c89a6ac401a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2025-11-29T07:40:01Z is after 2025-08-24T17:21:41Z"
Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.104545 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.104603 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.104616 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.104634 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.104645 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:01Z","lastTransitionTime":"2025-11-29T07:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.109547 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceed8db447ae237cae16d2aa37b1651eade557ea7aaab5bc7b55678ce6e36f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce9ccf35e72987d01428b7573aafacfe90fc973009657b077f40b6fe6ddecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248e5291402287c1a589a168527135bd1df15e293a7614681fc896fcbdeb9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8112226f945ad27e0c5af4d588800322977e17efc3eac9acec2ce591f30f46fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feacad8666527640ec9d6bee04a743e99e0cc9f9e9c6432c1a70bc02e85fa039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25239d7c14973e35144f9ac1b909564749220fc32b1851b68e08434a5aca7def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://178c4829fb6723552ece37b1b2a9db689e2acb44c46d095d72a2ad03af1d2460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63ac8cc3766762113c8fdfdc459f77570fc533bf636011f1e64a5c0a55b3ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-km2g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.126884 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"res
ource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T0
7:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.142983 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bvmzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9be66670-47c2-4d05-bf3d-59ae6f4ff53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bvmzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:01 crc 
kubenswrapper[4795]: I1129 07:40:01.207311 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.207409 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.207422 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.207437 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.207449 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:01Z","lastTransitionTime":"2025-11-29T07:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.310317 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.310399 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.310435 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.310474 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.310491 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:01Z","lastTransitionTime":"2025-11-29T07:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.413318 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.413372 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.413383 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.413399 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.413411 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:01Z","lastTransitionTime":"2025-11-29T07:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.516652 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.516688 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.516700 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.516717 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.516731 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:01Z","lastTransitionTime":"2025-11-29T07:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.619899 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.619982 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.620007 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.620041 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.620068 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:01Z","lastTransitionTime":"2025-11-29T07:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.659151 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-km2g9_3d3ff2b2-cbaa-4309-805a-2b044f867d3a/ovnkube-controller/0.log" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.662262 4795 generic.go:334] "Generic (PLEG): container finished" podID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerID="178c4829fb6723552ece37b1b2a9db689e2acb44c46d095d72a2ad03af1d2460" exitCode=1 Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.662318 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" event={"ID":"3d3ff2b2-cbaa-4309-805a-2b044f867d3a","Type":"ContainerDied","Data":"178c4829fb6723552ece37b1b2a9db689e2acb44c46d095d72a2ad03af1d2460"} Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.663273 4795 scope.go:117] "RemoveContainer" containerID="178c4829fb6723552ece37b1b2a9db689e2acb44c46d095d72a2ad03af1d2460" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.679332 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"262bf41d-eaf0-42b7-946d-3223857ca705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbfc45764b29055dcafdee2e3b77893c35b23cfa3eaa9125f01850baef2fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://508f7cb88a5b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9f1344bd7c3a1bc53870f7e5f5f2ca4993df112a24aa474655cc07670e22c48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06202234f21aa27a3dc67978cc9f82738d7526100bce3fd640be08131413f69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.694946 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.709052 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.723420 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.723465 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.723483 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.723505 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.723518 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:01Z","lastTransitionTime":"2025-11-29T07:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.728639 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b771b13a7bbce9270d919a830c700a1a04bd1d82141a1487e7abb6b1def634cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:40:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.743763 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s9x86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcb60df4-ad48-4830-b8c7-c63621b96707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6d9cdb665b3bed924577ab173dcf48968a0b7217ca81b1f8708d733d27d3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9b
d4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s9x86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.757452 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6472ebbc-939b-4dd0-8b03-110cb9811484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad977e9d70f50688539c9d0c9cf08789584d08f77792d3dde1a619e9d958cff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2497113482ab716b22a90b7872f2837bccd4
84b765e979c2c67a394f82de688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qcmb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.772731 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.787657 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.801552 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://148610b20c1dfe43c3e5443cda520fa95ed2823afcaa880dbe879b6d93dc8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3aac368767d0534a658bc0223f5cda0a95194
0f2d6ff3f1c89a6ac401a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.821848 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceed8db447ae237cae16d2aa37b1651eade557ea7aaab5bc7b55678ce6e36f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce9ccf35e72987d01428b7573aafacfe90fc973009657b077f40b6fe6ddecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248e5291402287c1a589a168527135bd1df15e293a7614681fc896fcbdeb9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8112226f945ad27e0c5af4d588800322977e17efc3eac9acec2ce591f30f46fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feacad8666527640ec9d6bee04a743e99e0cc9f9e9c6432c1a70bc02e85fa039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25239d7c14973e35144f9ac1b909564749220fc32b1851b68e08434a5aca7def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://178c4829fb6723552ece37b1b2a9db689e2acb44c46d095d72a2ad03af1d2460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://178c4829fb6723552ece37b1b2a9db689e2acb44c46d095d72a2ad03af1d2460\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"message\\\":\\\"e event handler 1\\\\nI1129 07:40:00.611646 6086 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1129 07:40:00.611652 6086 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1129 07:40:00.611679 
6086 factory.go:656] Stopping watch factory\\\\nI1129 07:40:00.611695 6086 handler.go:208] Removed *v1.Node event handler 7\\\\nI1129 07:40:00.611703 6086 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1129 07:40:00.611711 6086 handler.go:208] Removed *v1.Node event handler 2\\\\nI1129 07:40:00.611742 6086 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:00.612160 6086 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:00.612495 6086 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:00.612534 6086 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:00.612726 6086 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:40:00.612802 6086 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63ac8cc3766762113c8fdfdc459f77570fc533bf636011f1e64a5c0a55b3ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae
8561ac532a3dfc894415fb406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-km2g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.827262 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.827320 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.827332 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.827352 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.827366 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:01Z","lastTransitionTime":"2025-11-29T07:40:01Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.840381 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4
f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34
720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.860636 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.877102 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4d032ee6e86c2b61a59ea1477aa6d97a454c754888c37db24bc49d921eacd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.891528 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bvmzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9be66670-47c2-4d05-bf3d-59ae6f4ff53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bvmzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:01 crc 
kubenswrapper[4795]: I1129 07:40:01.909328 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:01 crc 
kubenswrapper[4795]: I1129 07:40:01.922870 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d047b5f2e979d7eebec46b45ef363e94e99bd7311214872c66fcce39cfd411db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:01Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.930302 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.930354 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.930371 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.930398 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:01 crc kubenswrapper[4795]: I1129 07:40:01.930413 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:01Z","lastTransitionTime":"2025-11-29T07:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.032898 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.032959 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.032975 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.032998 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.033015 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:02Z","lastTransitionTime":"2025-11-29T07:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.135313 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.135358 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.135369 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.135382 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.135393 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:02Z","lastTransitionTime":"2025-11-29T07:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.238679 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.238727 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.238739 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.238757 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.238766 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:02Z","lastTransitionTime":"2025-11-29T07:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.275784 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.275832 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.275880 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:40:02 crc kubenswrapper[4795]: E1129 07:40:02.275967 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.275904 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:40:02 crc kubenswrapper[4795]: E1129 07:40:02.276158 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:40:02 crc kubenswrapper[4795]: E1129 07:40:02.276080 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:40:02 crc kubenswrapper[4795]: E1129 07:40:02.276443 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.341338 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.341378 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.341387 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.341405 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.341415 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:02Z","lastTransitionTime":"2025-11-29T07:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.443314 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.443360 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.443370 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.443385 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.443395 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:02Z","lastTransitionTime":"2025-11-29T07:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.545474 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.545504 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.545512 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.545525 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.545534 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:02Z","lastTransitionTime":"2025-11-29T07:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.548875 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9be66670-47c2-4d05-bf3d-59ae6f4ff53b-metrics-certs\") pod \"network-metrics-daemon-bvmzq\" (UID: \"9be66670-47c2-4d05-bf3d-59ae6f4ff53b\") " pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:40:02 crc kubenswrapper[4795]: E1129 07:40:02.548995 4795 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:40:02 crc kubenswrapper[4795]: E1129 07:40:02.549039 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9be66670-47c2-4d05-bf3d-59ae6f4ff53b-metrics-certs podName:9be66670-47c2-4d05-bf3d-59ae6f4ff53b nodeName:}" failed. No retries permitted until 2025-11-29 07:40:10.549026561 +0000 UTC m=+56.524602351 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9be66670-47c2-4d05-bf3d-59ae6f4ff53b-metrics-certs") pod "network-metrics-daemon-bvmzq" (UID: "9be66670-47c2-4d05-bf3d-59ae6f4ff53b") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.647338 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.647405 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.647414 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.647430 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.647440 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:02Z","lastTransitionTime":"2025-11-29T07:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.668765 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-km2g9_3d3ff2b2-cbaa-4309-805a-2b044f867d3a/ovnkube-controller/0.log" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.672709 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" event={"ID":"3d3ff2b2-cbaa-4309-805a-2b044f867d3a","Type":"ContainerStarted","Data":"8863f06c350a821d91544fd372bed53f0ae71b14571eb9b9e189228b3eb48e53"} Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.673331 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.686631 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa388
11c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:02Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.699307 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4d032ee6e86c2b61a59ea1477aa6d97a454c754888c37db24bc49d921eacd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:40:02Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.710888 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://148610b20c1dfe43c3e5443cda520fa95ed2823afcaa880dbe879b6d93dc8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3aac368767d0534a658bc0223f5cda0a951940f2d6ff3f1c89a6ac401a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:02Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.730643 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceed8db447ae237cae16d2aa37b1651eade557ea7aaab5bc7b55678ce6e36f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce9ccf35e72987d01428b7573aafacfe90fc973009657b077f40b6fe6ddecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248e5291402287c1a589a168527135bd1df15e293a7614681fc896fcbdeb9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8112226f945ad27e0c5af4d588800322977e17efc3eac9acec2ce591f30f46fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feacad8666527640ec9d6bee04a743e99e0cc9f9e9c6432c1a70bc02e85fa039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25239d7c14973e35144f9ac1b909564749220fc32b1851b68e08434a5aca7def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8863f06c350a821d91544fd372bed53f0ae71b14571eb9b9e189228b3eb48e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://178c4829fb6723552ece37b1b2a9db689e2acb44c46d095d72a2ad03af1d2460\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"message\\\":\\\"e event handler 1\\\\nI1129 07:40:00.611646 6086 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1129 07:40:00.611652 6086 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1129 07:40:00.611679 6086 factory.go:656] Stopping watch factory\\\\nI1129 07:40:00.611695 6086 handler.go:208] Removed *v1.Node event handler 
7\\\\nI1129 07:40:00.611703 6086 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1129 07:40:00.611711 6086 handler.go:208] Removed *v1.Node event handler 2\\\\nI1129 07:40:00.611742 6086 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:00.612160 6086 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:00.612495 6086 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:00.612534 6086 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:00.612726 6086 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:40:00.612802 6086 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnl
y\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63ac8cc3766762113c8fdfdc459f77570fc533bf636011f1e64a5c0a55b3ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-km2g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:02Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.749741 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.749778 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.749788 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.749804 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.749814 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:02Z","lastTransitionTime":"2025-11-29T07:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.751796 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:02Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.766737 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bvmzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9be66670-47c2-4d05-bf3d-59ae6f4ff53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bvmzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:02Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:02 crc 
kubenswrapper[4795]: I1129 07:40:02.784941 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:02Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:02 crc 
kubenswrapper[4795]: I1129 07:40:02.797657 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d047b5f2e979d7eebec46b45ef363e94e99bd7311214872c66fcce39cfd411db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:02Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.810057 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s9x86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcb60df4-ad48-4830-b8c7-c63621b96707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6d9cdb665b3bed924577ab173dcf48968a0b7217ca81b1f8708d733d27d3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18
e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s9x86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:02Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.823780 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6472ebbc-939b-4dd0-8b03-110cb9811484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad977e9d70f50688539c9d0c9cf08789584d08f77792d3dde1a619e9d958cff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2497113482ab716b22a90b7872f2837bccd4
84b765e979c2c67a394f82de688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qcmb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:02Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.839540 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"262bf41d-eaf0-42b7-946d-3223857ca705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbfc45764b29055dcafdee2e3b77893c35b23cfa3eaa9125f01850baef2fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://508f7cb88a5b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9f1344bd7c3a1bc53870f7e5f5f2ca4993df112a24aa474655cc07670e22c48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06202234f21aa27a3dc67978cc9f82738d7526100bce3fd640be08131413f69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:02Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.852051 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.852111 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.852126 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.852146 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.852159 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:02Z","lastTransitionTime":"2025-11-29T07:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.853204 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:02Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.866055 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:02Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.880846 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b771b13a7bbce9270d919a830c700a1a04bd1d82141a1487e7abb6b1def634cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:40:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79e6
f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:02Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.892268 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:02Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.903297 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:02Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.954222 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.954258 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.954267 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.954284 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:02 crc kubenswrapper[4795]: I1129 07:40:02.954294 4795 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:02Z","lastTransitionTime":"2025-11-29T07:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.056817 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.056868 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.056882 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.056901 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.056916 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:03Z","lastTransitionTime":"2025-11-29T07:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.159921 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.159995 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.160023 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.160054 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.160079 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:03Z","lastTransitionTime":"2025-11-29T07:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.263562 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.263666 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.263769 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.263802 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.263826 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:03Z","lastTransitionTime":"2025-11-29T07:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.326807 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.338362 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.362745 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceed8db447ae237cae16d2aa37b1651eade557ea7aaab5bc7b55678ce6e36f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce9ccf35e72987d01428b7573aafacfe90fc973009657b077f40b6fe6ddecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248e5291402287c1a589a168527135bd1df15e293a7614681fc896fcbdeb9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8112226f945ad27e0c5af4d588800322977e17efc3eac9acec2ce591f30f46fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feacad8666527640ec9d6bee04a743e99e0cc9f9e9c6432c1a70bc02e85fa039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25239d7c14973e35144f9ac1b909564749220fc32b1851b68e08434a5aca7def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8863f06c350a821d91544fd372bed53f0ae71b14571eb9b9e189228b3eb48e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://178c4829fb6723552ece37b1b2a9db689e2acb44c46d095d72a2ad03af1d2460\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"message\\\":\\\"e event handler 1\\\\nI1129 07:40:00.611646 6086 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1129 07:40:00.611652 6086 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1129 07:40:00.611679 6086 factory.go:656] Stopping watch factory\\\\nI1129 07:40:00.611695 6086 handler.go:208] Removed *v1.Node event handler 
7\\\\nI1129 07:40:00.611703 6086 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1129 07:40:00.611711 6086 handler.go:208] Removed *v1.Node event handler 2\\\\nI1129 07:40:00.611742 6086 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:00.612160 6086 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:00.612495 6086 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:00.612534 6086 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:00.612726 6086 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:40:00.612802 6086 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnl
y\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63ac8cc3766762113c8fdfdc459f77570fc533bf636011f1e64a5c0a55b3ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-km2g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:03Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.366680 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.366748 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.366767 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.366794 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.366814 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:03Z","lastTransitionTime":"2025-11-29T07:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.385555 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:03Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.403881 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:03Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.421564 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4d032ee6e86c2b61a59ea1477aa6d97a454c754888c37db24bc49d921eacd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:03Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.435782 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://148610b20c1dfe43c3e5443cda520fa95ed2823afcaa880dbe879b6d93dc8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3aac368767d0534a658bc0223f5cda0a951940f2d6ff3f1c89a6ac401a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2025-11-29T07:40:03Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.449463 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bvmzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9be66670-47c2-4d05-bf3d-59ae6f4ff53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bvmzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:03Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:03 crc 
kubenswrapper[4795]: I1129 07:40:03.469186 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.469222 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.469254 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.469272 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.469283 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:03Z","lastTransitionTime":"2025-11-29T07:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.469295 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:03Z 
is after 2025-08-24T17:21:41Z" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.480692 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d047b5f2e979d7eebec46b45ef363e94e99bd7311214872c66fcce39cfd411db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:03Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.494290 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"262bf41d-eaf0-42b7-946d-3223857ca705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbfc45764b29055dcafdee2e3b77893c35b23cfa3eaa9125f01850baef2fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491
cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://508f7cb88a5b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9f1344bd7c3a1bc53870f7e5f5f2ca4993df112a24aa474655cc07670e22c48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06202234f21aa27a3dc67978cc9f82738d7526100bce3fd640be08131413f69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:03Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.509640 4795 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:03Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.524698 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:03Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.542273 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b771b13a7bbce9270d919a830c700a1a04bd1d82141a1487e7abb6b1def634cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:40:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79e6
f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:03Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.556986 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s9x86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcb60df4-ad48-4830-b8c7-c63621b96707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6d9cdb665b3bed924577ab173dcf48968a0b7217ca81b1f8708d733d27d3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s9x86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:03Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.571534 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6472ebbc-939b-4dd0-8b03-110cb9811484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad977e9d70f50688539c9d0c9cf08789584d08f77792d3dde1a619e9d958cff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2497113482ab716b22a90b7872f2837bccd4
84b765e979c2c67a394f82de688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qcmb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:03Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.572305 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.572342 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.572356 4795 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.572375 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.572386 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:03Z","lastTransitionTime":"2025-11-29T07:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.585761 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:03Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.598987 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:03Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.674139 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.674218 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.674234 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.674255 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.674270 4795 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:03Z","lastTransitionTime":"2025-11-29T07:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.777142 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.777195 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.777214 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.777231 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.777240 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:03Z","lastTransitionTime":"2025-11-29T07:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.880061 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.880144 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.880183 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.880215 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.880240 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:03Z","lastTransitionTime":"2025-11-29T07:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.984993 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.985063 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.985092 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.985140 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:03 crc kubenswrapper[4795]: I1129 07:40:03.985162 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:03Z","lastTransitionTime":"2025-11-29T07:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.089587 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.090001 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.090131 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.090296 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.090432 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:04Z","lastTransitionTime":"2025-11-29T07:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.193432 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.193786 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.193916 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.194058 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.194170 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:04Z","lastTransitionTime":"2025-11-29T07:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.238388 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.238462 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.238470 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.238483 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.238493 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:04Z","lastTransitionTime":"2025-11-29T07:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:04 crc kubenswrapper[4795]: E1129 07:40:04.250798 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8cd0c036-22eb-405e-b57e-ec0c4424780e\\\",\\\"systemUUID\\\":\\\"bd085386-a70e-485f-9f18-00b3aef4bcca\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:04Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.254900 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.254938 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.254969 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.254987 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.255000 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:04Z","lastTransitionTime":"2025-11-29T07:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:04 crc kubenswrapper[4795]: E1129 07:40:04.271239 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8cd0c036-22eb-405e-b57e-ec0c4424780e\\\",\\\"systemUUID\\\":\\\"bd085386-a70e-485f-9f18-00b3aef4bcca\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:04Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.275113 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.276652 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.276676 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.275418 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.276702 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.276717 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:04Z","lastTransitionTime":"2025-11-29T07:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.275459 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.275505 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:40:04 crc kubenswrapper[4795]: E1129 07:40:04.276809 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.275410 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:40:04 crc kubenswrapper[4795]: E1129 07:40:04.276930 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:40:04 crc kubenswrapper[4795]: E1129 07:40:04.276994 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:40:04 crc kubenswrapper[4795]: E1129 07:40:04.277147 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:40:04 crc kubenswrapper[4795]: E1129 07:40:04.288490 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8cd0c036-22eb-405e-b57e-ec0c4424780e\\\",\\\"systemUUID\\\":\\\"bd085386-a70e-485f-9f18-00b3aef4bcca\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:04Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.291477 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.291512 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.291520 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.291533 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.291543 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:04Z","lastTransitionTime":"2025-11-29T07:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.297288 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceed8db447ae237cae16d2aa37b1651eade557ea7aaab5bc7b55678ce6e36f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce9ccf35e72987d01428b7573aafacfe90fc973009657b077f40b6fe6ddecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248e5291402287c1a589a168527135bd1df15e293a7614681fc896fcbdeb9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8112226f945ad27e0c5af4d588800322977e17efc3eac9acec2ce591f30f46fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feacad8666527640ec9d6bee04a743e99e0cc9f9e9c6432c1a70bc02e85fa039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25239d7c14973e35144f9ac1b909564749220fc32b1851b68e08434a5aca7def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8863f06c350a821d91544fd372bed53f0ae71b14571eb9b9e189228b3eb48e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://178c4829fb6723552ece37b1b2a9db689e2acb44c46d095d72a2ad03af1d2460\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"message\\\":\\\"e event handler 1\\\\nI1129 07:40:00.611646 6086 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1129 07:40:00.611652 6086 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1129 07:40:00.611679 6086 factory.go:656] Stopping watch factory\\\\nI1129 07:40:00.611695 6086 handler.go:208] Removed *v1.Node event handler 
7\\\\nI1129 07:40:00.611703 6086 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1129 07:40:00.611711 6086 handler.go:208] Removed *v1.Node event handler 2\\\\nI1129 07:40:00.611742 6086 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:00.612160 6086 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:00.612495 6086 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:00.612534 6086 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:00.612726 6086 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:40:00.612802 6086 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnl
y\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63ac8cc3766762113c8fdfdc459f77570fc533bf636011f1e64a5c0a55b3ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-km2g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:04Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:04 crc kubenswrapper[4795]: E1129 07:40:04.306879 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8cd0c036-22eb-405e-b57e-ec0c4424780e\\\",\\\"systemUUID\\\":\\\"bd085386-a70e-485f-9f18-00b3aef4bcca\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:04Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.309562 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"
}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1
ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 
07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf994638
75959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:04Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.311237 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.311291 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.311307 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.311349 4795 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.311366 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:04Z","lastTransitionTime":"2025-11-29T07:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.321816 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c05a965f-51fb-403d-bbac-b34d4d6b2bbd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98152568d086a30f06fcd1535110482b356c587d906973de9baedd65fc85b6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771a
ee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34ebb0f2c455d2133d2fcf00d19e74d539e80523f2ed0b2ccc1f8eabcf2cb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9d7937efcaeb684eb229e396c7896a805f1bf205bda1cd1b8113aeda2bea5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resour
ce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcaeb06b249f6fa81deaf9f997ca41c9dd7f3568d458072208901c04543e272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bcaeb06b249f6fa81deaf9f997ca41c9dd7f3568d458072208901c04543e272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:04Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:04 crc kubenswrapper[4795]: E1129 07:40:04.326283 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8cd0c036-22eb-405e-b57e-ec0c4424780e\\\",\\\"systemUUID\\\":\\\"bd085386-a70e-485f-9f18-00b3aef4bcca\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:04Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:04 crc kubenswrapper[4795]: E1129 07:40:04.326384 4795 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.328099 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.328128 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.328144 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.328162 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.328174 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:04Z","lastTransitionTime":"2025-11-29T07:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.334280 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:04Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.344680 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4d032ee6e86c2b61a59ea1477aa6d97a454c754888c37db24bc49d921eacd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:04Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.354463 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://148610b20c1dfe43c3e5443cda520fa95ed2823afcaa880dbe879b6d93dc8ce3\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3aac368767d0534a658bc0223f5cda0a951940f2d6ff3f1c89a6ac401a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:04Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.365373 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bvmzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9be66670-47c2-4d05-bf3d-59ae6f4ff53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bvmzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:04Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:04 crc 
kubenswrapper[4795]: I1129 07:40:04.380894 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:04Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:04 crc 
kubenswrapper[4795]: I1129 07:40:04.390492 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d047b5f2e979d7eebec46b45ef363e94e99bd7311214872c66fcce39cfd411db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:04Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.403669 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"262bf41d-eaf0-42b7-946d-3223857ca705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbfc45764b29055dcafdee2e3b77893c35b23cfa3eaa9125f01850baef2fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://508f7cb88a5b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9f1344bd7c3a1bc53870f7e5f5f2ca4993df112a24aa474655cc07670e22c48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-
29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06202234f21aa27a3dc67978cc9f82738d7526100bce3fd640be08131413f69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:04Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.414953 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:04Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.428436 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:04Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.429716 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.429759 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:04 crc 
kubenswrapper[4795]: I1129 07:40:04.429772 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.429788 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.429799 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:04Z","lastTransitionTime":"2025-11-29T07:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.445088 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b771b13a7bbce9270d919a830c700a1a04bd1d82141a1487e7abb6b1def634cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:40:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79e6
f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:04Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.456283 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s9x86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcb60df4-ad48-4830-b8c7-c63621b96707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6d9cdb665b3bed924577ab173dcf48968a0b7217ca81b1f8708d733d27d3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s9x86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:04Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.469793 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6472ebbc-939b-4dd0-8b03-110cb9811484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad977e9d70f50688539c9d0c9cf08789584d08f77792d3dde1a619e9d958cff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2497113482ab716b22a90b7872f2837bccd4
84b765e979c2c67a394f82de688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qcmb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:04Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.488974 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:04Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.502932 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:04Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.531475 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.531510 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.531519 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.531531 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.531540 4795 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:04Z","lastTransitionTime":"2025-11-29T07:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.633881 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.633922 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.633934 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.633953 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.633963 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:04Z","lastTransitionTime":"2025-11-29T07:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.736508 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.736548 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.736560 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.736576 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.736611 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:04Z","lastTransitionTime":"2025-11-29T07:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.841548 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.841623 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.841636 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.841656 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.841672 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:04Z","lastTransitionTime":"2025-11-29T07:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.944939 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.944984 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.944994 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.945011 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:04 crc kubenswrapper[4795]: I1129 07:40:04.945022 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:04Z","lastTransitionTime":"2025-11-29T07:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.047727 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.047773 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.047781 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.047800 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.047809 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:05Z","lastTransitionTime":"2025-11-29T07:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.149986 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.150034 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.150043 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.150057 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.150067 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:05Z","lastTransitionTime":"2025-11-29T07:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.244139 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-km2g9_3d3ff2b2-cbaa-4309-805a-2b044f867d3a/ovnkube-controller/1.log" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.245096 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-km2g9_3d3ff2b2-cbaa-4309-805a-2b044f867d3a/ovnkube-controller/0.log" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.248490 4795 generic.go:334] "Generic (PLEG): container finished" podID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerID="8863f06c350a821d91544fd372bed53f0ae71b14571eb9b9e189228b3eb48e53" exitCode=1 Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.248544 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" event={"ID":"3d3ff2b2-cbaa-4309-805a-2b044f867d3a","Type":"ContainerDied","Data":"8863f06c350a821d91544fd372bed53f0ae71b14571eb9b9e189228b3eb48e53"} Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.248635 4795 scope.go:117] "RemoveContainer" containerID="178c4829fb6723552ece37b1b2a9db689e2acb44c46d095d72a2ad03af1d2460" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.250495 4795 scope.go:117] "RemoveContainer" containerID="8863f06c350a821d91544fd372bed53f0ae71b14571eb9b9e189228b3eb48e53" Nov 29 07:40:05 crc kubenswrapper[4795]: E1129 07:40:05.250876 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-km2g9_openshift-ovn-kubernetes(3d3ff2b2-cbaa-4309-805a-2b044f867d3a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.252586 4795 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.252680 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.252739 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.252765 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.252831 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:05Z","lastTransitionTime":"2025-11-29T07:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.275084 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.292142 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b771b13a7bbce9270d919a830c700a1a04bd1d82141a1487e7abb6b1def634cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:40:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79e6
f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.308775 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s9x86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcb60df4-ad48-4830-b8c7-c63621b96707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6d9cdb665b3bed924577ab173dcf48968a0b7217ca81b1f8708d733d27d3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s9x86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.328049 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6472ebbc-939b-4dd0-8b03-110cb9811484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad977e9d70f50688539c9d0c9cf08789584d08f77792d3dde1a619e9d958cff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2497113482ab716b22a90b7872f2837bccd4
84b765e979c2c67a394f82de688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qcmb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.345554 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"262bf41d-eaf0-42b7-946d-3223857ca705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbfc45764b29055dcafdee2e3b77893c35b23cfa3eaa9125f01850baef2fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://508f7cb88a5b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9f1344bd7c3a1bc53870f7e5f5f2ca4993df112a24aa474655cc07670e22c48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06202234f21aa27a3dc67978cc9f82738d7526100bce3fd640be08131413f69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.358647 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.359034 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.359222 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.359409 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.359649 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:05Z","lastTransitionTime":"2025-11-29T07:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.368562 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.387863 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.405479 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.425687 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959
e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.440544 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c05a965f-51fb-403d-bbac-b34d4d6b2bbd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98152568d086a30f06fcd1535110482b356c587d906973de9baedd65fc85b6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34ebb0f2c455d2133d2fcf00d19e74d539e80523f2ed0b2ccc1f8eabcf2cb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9d7937efcaeb684eb229e396c7896a805f1bf205bda1cd1b8113aeda2bea5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcaeb06b249f6fa81deaf9f997ca41c9dd7f3568d458072208901c04543e272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3bcaeb06b249f6fa81deaf9f997ca41c9dd7f3568d458072208901c04543e272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.457741 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.463249 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.463307 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.463318 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.463336 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.463347 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:05Z","lastTransitionTime":"2025-11-29T07:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.478414 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4d032ee6e86c2b61a59ea1477aa6d97a454c754888c37db24bc49d921eacd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.494382 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://148610b20c1dfe43c3e5443cda520fa95ed2823afcaa880dbe879b6d93dc8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3aac368767d0534a658bc0223f5cda0a951940f2d6ff3f1c89a6ac401a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-29T07:40:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.516727 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceed8db447ae237cae16d2aa37b1651eade557ea7aaab5bc7b55678ce6e36f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce9ccf35e72987d01428b7573aafacfe90fc973009657b077f40b6fe6ddecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248e5291402287c1a589a168527135bd1df15e293a7614681fc896fcbdeb9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8112226f945ad27e0c5af4d588800322977e17efc3eac9acec2ce591f30f46fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feacad8666527640ec9d6bee04a743e99e0cc9f9e9c6432c1a70bc02e85fa039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25239d7c14973e35144f9ac1b909564749220fc32b1851b68e08434a5aca7def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8863f06c350a821d91544fd372bed53f0ae71b14571eb9b9e189228b3eb48e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://178c4829fb6723552ece37b1b2a9db689e2acb44c46d095d72a2ad03af1d2460\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"message\\\":\\\"e event handler 1\\\\nI1129 07:40:00.611646 6086 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1129 07:40:00.611652 6086 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1129 07:40:00.611679 6086 factory.go:656] Stopping watch factory\\\\nI1129 07:40:00.611695 6086 handler.go:208] Removed *v1.Node event handler 
7\\\\nI1129 07:40:00.611703 6086 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1129 07:40:00.611711 6086 handler.go:208] Removed *v1.Node event handler 2\\\\nI1129 07:40:00.611742 6086 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:00.612160 6086 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:00.612495 6086 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:00.612534 6086 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:00.612726 6086 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:40:00.612802 6086 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8863f06c350a821d91544fd372bed53f0ae71b14571eb9b9e189228b3eb48e53\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"message\\\":\\\"twork-check-source-55646444c4-trplf\\\\nI1129 07:40:02.480885 6335 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1129 07:40:02.480888 6335 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1129 07:40:02.480892 6335 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1129 07:40:02.480897 6335 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1129 07:40:02.480905 6335 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1129 07:40:02.480911 6335 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1129 07:40:02.480918 6335 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF1129 07:40:02.480921 6335 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/
lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63ac8cc3766762113c8fdfdc459f77570fc533bf636011f1e64a5c0a55b3ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-km2g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.532272 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bvmzq" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9be66670-47c2-4d05-bf3d-59ae6f4ff53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bvmzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:05 crc 
kubenswrapper[4795]: I1129 07:40:05.544626 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:05 crc 
kubenswrapper[4795]: I1129 07:40:05.555818 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d047b5f2e979d7eebec46b45ef363e94e99bd7311214872c66fcce39cfd411db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.565663 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.565725 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.565738 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.565752 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.565762 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:05Z","lastTransitionTime":"2025-11-29T07:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.668896 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.669006 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.669019 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.669032 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.669042 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:05Z","lastTransitionTime":"2025-11-29T07:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.772342 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.772439 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.772465 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.772501 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.772527 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:05Z","lastTransitionTime":"2025-11-29T07:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.874684 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.874759 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.874777 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.874811 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.874835 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:05Z","lastTransitionTime":"2025-11-29T07:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.977562 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.977623 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.977632 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.977648 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:05 crc kubenswrapper[4795]: I1129 07:40:05.977659 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:05Z","lastTransitionTime":"2025-11-29T07:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.081078 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.081183 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.081207 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.081237 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.081261 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:06Z","lastTransitionTime":"2025-11-29T07:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.184935 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.184983 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.184999 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.185019 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.185032 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:06Z","lastTransitionTime":"2025-11-29T07:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.255312 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-km2g9_3d3ff2b2-cbaa-4309-805a-2b044f867d3a/ovnkube-controller/1.log" Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.275960 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.276043 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.275981 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:40:06 crc kubenswrapper[4795]: E1129 07:40:06.276213 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.276239 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:40:06 crc kubenswrapper[4795]: E1129 07:40:06.276357 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:40:06 crc kubenswrapper[4795]: E1129 07:40:06.276507 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:40:06 crc kubenswrapper[4795]: E1129 07:40:06.276655 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.294368 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.294431 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.294442 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.294461 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.294473 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:06Z","lastTransitionTime":"2025-11-29T07:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.397634 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.397684 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.397697 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.397715 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.397728 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:06Z","lastTransitionTime":"2025-11-29T07:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.500004 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.500070 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.500085 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.500107 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.500137 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:06Z","lastTransitionTime":"2025-11-29T07:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.602171 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.602210 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.602219 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.602234 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.602244 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:06Z","lastTransitionTime":"2025-11-29T07:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.704624 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.704972 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.705051 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.705153 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.705237 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:06Z","lastTransitionTime":"2025-11-29T07:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.807852 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.807927 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.807946 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.807974 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.807993 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:06Z","lastTransitionTime":"2025-11-29T07:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.912568 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.912653 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.912666 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.912686 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:06 crc kubenswrapper[4795]: I1129 07:40:06.912699 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:06Z","lastTransitionTime":"2025-11-29T07:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.015568 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.016241 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.016438 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.016651 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.016824 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:07Z","lastTransitionTime":"2025-11-29T07:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.119845 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.119900 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.119916 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.119939 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.119951 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:07Z","lastTransitionTime":"2025-11-29T07:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.222483 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.222528 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.222553 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.222573 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.222585 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:07Z","lastTransitionTime":"2025-11-29T07:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.324686 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.324736 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.324747 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.324761 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.324772 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:07Z","lastTransitionTime":"2025-11-29T07:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.426884 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.426933 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.426947 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.426965 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.426979 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:07Z","lastTransitionTime":"2025-11-29T07:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.529049 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.529084 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.529096 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.529115 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.529306 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:07Z","lastTransitionTime":"2025-11-29T07:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.632131 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.632183 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.632195 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.632214 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.632226 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:07Z","lastTransitionTime":"2025-11-29T07:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.734458 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.734724 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.734803 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.734876 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.734938 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:07Z","lastTransitionTime":"2025-11-29T07:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.837945 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.837999 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.838010 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.838027 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.838040 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:07Z","lastTransitionTime":"2025-11-29T07:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.940656 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.940692 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.940701 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.940715 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:07 crc kubenswrapper[4795]: I1129 07:40:07.940725 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:07Z","lastTransitionTime":"2025-11-29T07:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.042654 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.042687 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.042695 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.042709 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.042718 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:08Z","lastTransitionTime":"2025-11-29T07:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.145368 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.145419 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.145430 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.145449 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.145464 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:08Z","lastTransitionTime":"2025-11-29T07:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.248877 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.248919 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.248935 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.248955 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.248971 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:08Z","lastTransitionTime":"2025-11-29T07:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.275025 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.275062 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:40:08 crc kubenswrapper[4795]: E1129 07:40:08.275191 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.275242 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:40:08 crc kubenswrapper[4795]: E1129 07:40:08.275380 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:40:08 crc kubenswrapper[4795]: E1129 07:40:08.275508 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.275686 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:40:08 crc kubenswrapper[4795]: E1129 07:40:08.275857 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.356683 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.356812 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.356837 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.361017 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.361118 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:08Z","lastTransitionTime":"2025-11-29T07:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.466803 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.466900 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.466932 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.467087 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.467112 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:08Z","lastTransitionTime":"2025-11-29T07:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.570308 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.570425 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.570455 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.570488 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.570526 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:08Z","lastTransitionTime":"2025-11-29T07:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.674582 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.674691 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.674715 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.674748 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.674772 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:08Z","lastTransitionTime":"2025-11-29T07:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.778477 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.778529 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.778542 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.778564 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.778576 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:08Z","lastTransitionTime":"2025-11-29T07:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.881407 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.881771 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.881857 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.881948 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.882024 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:08Z","lastTransitionTime":"2025-11-29T07:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.985199 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.985275 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.985295 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.985326 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:08 crc kubenswrapper[4795]: I1129 07:40:08.985345 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:08Z","lastTransitionTime":"2025-11-29T07:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.088262 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.088311 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.088320 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.088339 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.088349 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:09Z","lastTransitionTime":"2025-11-29T07:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.191285 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.191341 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.191355 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.191377 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.191393 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:09Z","lastTransitionTime":"2025-11-29T07:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.294194 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.294270 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.294291 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.294323 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.294346 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:09Z","lastTransitionTime":"2025-11-29T07:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.397856 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.397915 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.397928 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.397947 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.397959 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:09Z","lastTransitionTime":"2025-11-29T07:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.500895 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.501346 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.501409 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.501490 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.501567 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:09Z","lastTransitionTime":"2025-11-29T07:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.605274 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.605333 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.605342 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.605359 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.605373 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:09Z","lastTransitionTime":"2025-11-29T07:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.709196 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.709274 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.709299 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.709325 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.709342 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:09Z","lastTransitionTime":"2025-11-29T07:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.812643 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.812972 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.813042 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.813133 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.813217 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:09Z","lastTransitionTime":"2025-11-29T07:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.916505 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.916560 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.916576 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.916627 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:09 crc kubenswrapper[4795]: I1129 07:40:09.916642 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:09Z","lastTransitionTime":"2025-11-29T07:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.019601 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.019882 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.020037 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.020146 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.020241 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:10Z","lastTransitionTime":"2025-11-29T07:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.124064 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.124110 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.124122 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.124143 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.124157 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:10Z","lastTransitionTime":"2025-11-29T07:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.227142 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.227193 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.227206 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.227223 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.227236 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:10Z","lastTransitionTime":"2025-11-29T07:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.275436 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.275457 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.275461 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.275499 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:40:10 crc kubenswrapper[4795]: E1129 07:40:10.276348 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:40:10 crc kubenswrapper[4795]: E1129 07:40:10.276511 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:40:10 crc kubenswrapper[4795]: E1129 07:40:10.276430 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:40:10 crc kubenswrapper[4795]: E1129 07:40:10.277145 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.330322 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.330352 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.330360 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.330375 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.330399 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:10Z","lastTransitionTime":"2025-11-29T07:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.432811 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.432842 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.432855 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.432869 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.432878 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:10Z","lastTransitionTime":"2025-11-29T07:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.535536 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.535568 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.535577 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.535780 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.535792 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:10Z","lastTransitionTime":"2025-11-29T07:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.543940 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:40:10 crc kubenswrapper[4795]: E1129 07:40:10.544066 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-29 07:40:42.544045293 +0000 UTC m=+88.519621083 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.544144 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.544170 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.544237 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.544261 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:40:10 crc kubenswrapper[4795]: E1129 07:40:10.544339 4795 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:40:10 crc kubenswrapper[4795]: E1129 07:40:10.544372 4795 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:40:10 crc kubenswrapper[4795]: E1129 07:40:10.544390 4795 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:40:10 crc kubenswrapper[4795]: E1129 07:40:10.544404 4795 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:40:10 crc kubenswrapper[4795]: E1129 07:40:10.544379 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:40:42.544370242 +0000 UTC m=+88.519946032 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:40:10 crc kubenswrapper[4795]: E1129 07:40:10.544465 4795 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:40:10 crc kubenswrapper[4795]: E1129 07:40:10.544474 4795 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:40:10 crc kubenswrapper[4795]: E1129 07:40:10.544482 4795 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:40:10 crc kubenswrapper[4795]: E1129 07:40:10.544519 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-29 07:40:42.544472174 +0000 UTC m=+88.520047964 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:40:10 crc kubenswrapper[4795]: E1129 07:40:10.544546 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-29 07:40:42.544536986 +0000 UTC m=+88.520112776 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:40:10 crc kubenswrapper[4795]: E1129 07:40:10.544546 4795 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:40:10 crc kubenswrapper[4795]: E1129 07:40:10.544634 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:40:42.544620698 +0000 UTC m=+88.520196548 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.638442 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.638480 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.638491 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.638506 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.638517 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:10Z","lastTransitionTime":"2025-11-29T07:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.645869 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9be66670-47c2-4d05-bf3d-59ae6f4ff53b-metrics-certs\") pod \"network-metrics-daemon-bvmzq\" (UID: \"9be66670-47c2-4d05-bf3d-59ae6f4ff53b\") " pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:40:10 crc kubenswrapper[4795]: E1129 07:40:10.646047 4795 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:40:10 crc kubenswrapper[4795]: E1129 07:40:10.646096 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9be66670-47c2-4d05-bf3d-59ae6f4ff53b-metrics-certs podName:9be66670-47c2-4d05-bf3d-59ae6f4ff53b nodeName:}" failed. No retries permitted until 2025-11-29 07:40:26.64608246 +0000 UTC m=+72.621658250 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9be66670-47c2-4d05-bf3d-59ae6f4ff53b-metrics-certs") pod "network-metrics-daemon-bvmzq" (UID: "9be66670-47c2-4d05-bf3d-59ae6f4ff53b") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.742208 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.742249 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.742259 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.742274 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.742284 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:10Z","lastTransitionTime":"2025-11-29T07:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.844824 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.844874 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.844885 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.844902 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.844912 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:10Z","lastTransitionTime":"2025-11-29T07:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.947042 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.947090 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.947102 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.947120 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:10 crc kubenswrapper[4795]: I1129 07:40:10.947131 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:10Z","lastTransitionTime":"2025-11-29T07:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.049863 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.049890 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.049899 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.049912 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.049921 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:11Z","lastTransitionTime":"2025-11-29T07:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.151813 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.151879 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.151892 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.151909 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.151921 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:11Z","lastTransitionTime":"2025-11-29T07:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.254353 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.254392 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.254403 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.254420 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.254432 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:11Z","lastTransitionTime":"2025-11-29T07:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.356466 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.356506 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.356514 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.356527 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.356536 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:11Z","lastTransitionTime":"2025-11-29T07:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.458474 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.458515 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.458527 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.458543 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.458554 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:11Z","lastTransitionTime":"2025-11-29T07:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.561437 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.561470 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.561480 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.561495 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.561505 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:11Z","lastTransitionTime":"2025-11-29T07:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.664996 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.665041 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.665050 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.665066 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.665075 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:11Z","lastTransitionTime":"2025-11-29T07:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.768193 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.768246 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.768259 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.768276 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.768288 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:11Z","lastTransitionTime":"2025-11-29T07:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.870743 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.870787 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.870797 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.870810 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.870821 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:11Z","lastTransitionTime":"2025-11-29T07:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.972832 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.972872 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.972885 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.972907 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:11 crc kubenswrapper[4795]: I1129 07:40:11.972918 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:11Z","lastTransitionTime":"2025-11-29T07:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.075018 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.075053 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.075063 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.075079 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.075089 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:12Z","lastTransitionTime":"2025-11-29T07:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.177163 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.177204 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.177213 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.177231 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.177241 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:12Z","lastTransitionTime":"2025-11-29T07:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.274702 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.274814 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.274974 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.275032 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:40:12 crc kubenswrapper[4795]: E1129 07:40:12.275072 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:40:12 crc kubenswrapper[4795]: E1129 07:40:12.275207 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:40:12 crc kubenswrapper[4795]: E1129 07:40:12.275335 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:40:12 crc kubenswrapper[4795]: E1129 07:40:12.275575 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.278865 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.278916 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.278934 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.278957 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.278973 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:12Z","lastTransitionTime":"2025-11-29T07:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.382155 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.382198 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.382207 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.382221 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.382241 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:12Z","lastTransitionTime":"2025-11-29T07:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.484658 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.484699 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.484715 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.484730 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.484741 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:12Z","lastTransitionTime":"2025-11-29T07:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.588021 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.588055 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.588063 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.588078 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.588088 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:12Z","lastTransitionTime":"2025-11-29T07:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.691604 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.691672 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.691689 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.691711 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.691727 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:12Z","lastTransitionTime":"2025-11-29T07:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.798562 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.798651 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.798661 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.798680 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.798692 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:12Z","lastTransitionTime":"2025-11-29T07:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.901683 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.901742 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.901761 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.901780 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:12 crc kubenswrapper[4795]: I1129 07:40:12.901792 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:12Z","lastTransitionTime":"2025-11-29T07:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.005107 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.005191 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.005212 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.005240 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.005262 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:13Z","lastTransitionTime":"2025-11-29T07:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.108243 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.108286 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.108294 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.108309 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.108319 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:13Z","lastTransitionTime":"2025-11-29T07:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.211947 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.212034 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.212058 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.212091 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.212114 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:13Z","lastTransitionTime":"2025-11-29T07:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.315379 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.315431 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.315445 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.315463 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.315475 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:13Z","lastTransitionTime":"2025-11-29T07:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.418160 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.418194 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.418202 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.418220 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.418246 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:13Z","lastTransitionTime":"2025-11-29T07:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.521528 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.521584 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.521615 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.521640 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.521655 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:13Z","lastTransitionTime":"2025-11-29T07:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.625440 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.625523 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.625550 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.625643 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.625675 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:13Z","lastTransitionTime":"2025-11-29T07:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.728904 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.728968 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.728989 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.729018 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.729039 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:13Z","lastTransitionTime":"2025-11-29T07:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.832005 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.832052 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.832065 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.832087 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.832099 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:13Z","lastTransitionTime":"2025-11-29T07:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.935243 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.935299 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.935315 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.935335 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:13 crc kubenswrapper[4795]: I1129 07:40:13.935355 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:13Z","lastTransitionTime":"2025-11-29T07:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.038475 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.038551 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.038571 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.038658 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.038689 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:14Z","lastTransitionTime":"2025-11-29T07:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.141123 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.141207 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.141253 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.141269 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.141279 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:14Z","lastTransitionTime":"2025-11-29T07:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.243898 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.243948 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.243964 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.243987 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.244002 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:14Z","lastTransitionTime":"2025-11-29T07:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.275222 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.275258 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:40:14 crc kubenswrapper[4795]: E1129 07:40:14.275450 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.275521 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.275615 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:40:14 crc kubenswrapper[4795]: E1129 07:40:14.275745 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:40:14 crc kubenswrapper[4795]: E1129 07:40:14.276035 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:40:14 crc kubenswrapper[4795]: E1129 07:40:14.276155 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.298962 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://148610b20c1dfe43c3e5443cda520fa95ed2823afcaa880dbe879b6d93dc8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3aac368767d0534a658bc0223f5cda0a951940f2d6ff3f1c89a6ac401a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:14Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.321902 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceed8db447ae237cae16d2aa37b1651eade557ea7aaab5bc7b55678ce6e36f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce9ccf35e72987d01428b7573aafacfe90fc973009657b077f40b6fe6ddecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248e5291402287c1a589a168527135bd1df15e293a7614681fc896fcbdeb9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8112226f945ad27e0c5af4d588800322977e17efc3eac9acec2ce591f30f46fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feacad8666527640ec9d6bee04a743e99e0cc9f9e9c6432c1a70bc02e85fa039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25239d7c14973e35144f9ac1b909564749220fc32b1851b68e08434a5aca7def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8863f06c350a821d91544fd372bed53f0ae71b14571eb9b9e189228b3eb48e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://178c4829fb6723552ece37b1b2a9db689e2acb44c46d095d72a2ad03af1d2460\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"message\\\":\\\"e event handler 1\\\\nI1129 07:40:00.611646 6086 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1129 07:40:00.611652 6086 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1129 07:40:00.611679 6086 factory.go:656] Stopping watch factory\\\\nI1129 07:40:00.611695 6086 handler.go:208] Removed *v1.Node event handler 
7\\\\nI1129 07:40:00.611703 6086 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1129 07:40:00.611711 6086 handler.go:208] Removed *v1.Node event handler 2\\\\nI1129 07:40:00.611742 6086 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:00.612160 6086 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:00.612495 6086 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:00.612534 6086 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:00.612726 6086 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:40:00.612802 6086 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8863f06c350a821d91544fd372bed53f0ae71b14571eb9b9e189228b3eb48e53\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"message\\\":\\\"twork-check-source-55646444c4-trplf\\\\nI1129 07:40:02.480885 6335 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1129 07:40:02.480888 6335 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1129 07:40:02.480892 6335 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1129 07:40:02.480897 6335 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1129 07:40:02.480905 6335 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1129 07:40:02.480911 6335 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1129 07:40:02.480918 6335 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF1129 07:40:02.480921 6335 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/
lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63ac8cc3766762113c8fdfdc459f77570fc533bf636011f1e64a5c0a55b3ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-km2g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:14Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.337282 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f781
4a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T
07:39:37Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5
ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:14Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.348615 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.348660 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.348670 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.348688 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.348716 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:14Z","lastTransitionTime":"2025-11-29T07:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.349045 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c05a965f-51fb-403d-bbac-b34d4d6b2bbd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98152568d086a30f06fcd1535110482b356c587d906973de9baedd65fc85b6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3
4ebb0f2c455d2133d2fcf00d19e74d539e80523f2ed0b2ccc1f8eabcf2cb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9d7937efcaeb684eb229e396c7896a805f1bf205bda1cd1b8113aeda2bea5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcaeb06b249f6fa81deaf9f997ca41c9dd7f3568d458072208901c04543e272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bcaeb06b249f6fa81deaf9f997ca41c9dd7f3568d458072208901c04543e272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:14Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.365166 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:14Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.378746 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4d032ee6e86c2b61a59ea1477aa6d97a454c754888c37db24bc49d921eacd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:14Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.392990 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bvmzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9be66670-47c2-4d05-bf3d-59ae6f4ff53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bvmzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:14Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:14 crc 
kubenswrapper[4795]: I1129 07:40:14.406671 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:14Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:14 crc 
kubenswrapper[4795]: I1129 07:40:14.415340 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d047b5f2e979d7eebec46b45ef363e94e99bd7311214872c66fcce39cfd411db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:14Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.425671 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"262bf41d-eaf0-42b7-946d-3223857ca705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbfc45764b29055dcafdee2e3b77893c35b23cfa3eaa9125f01850baef2fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://508f7cb88a5b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9f1344bd7c3a1bc53870f7e5f5f2ca4993df112a24aa474655cc07670e22c48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-
29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06202234f21aa27a3dc67978cc9f82738d7526100bce3fd640be08131413f69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:14Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.435264 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:14Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.444900 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:14Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.450440 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.450476 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:14 crc 
kubenswrapper[4795]: I1129 07:40:14.450485 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.450507 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.450516 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:14Z","lastTransitionTime":"2025-11-29T07:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.458732 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b771b13a7bbce9270d919a830c700a1a04bd1d82141a1487e7abb6b1def634cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:40:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79e6
f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:14Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.467787 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s9x86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcb60df4-ad48-4830-b8c7-c63621b96707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6d9cdb665b3bed924577ab173dcf48968a0b7217ca81b1f8708d733d27d3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s9x86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:14Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.479625 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6472ebbc-939b-4dd0-8b03-110cb9811484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad977e9d70f50688539c9d0c9cf08789584d08f77792d3dde1a619e9d958cff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2497113482ab716b22a90b7872f2837bccd4
84b765e979c2c67a394f82de688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qcmb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:14Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.491638 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:14Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.504227 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:14Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.519764 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.519798 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.519808 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.519822 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.519830 4795 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:14Z","lastTransitionTime":"2025-11-29T07:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:14 crc kubenswrapper[4795]: E1129 07:40:14.531706 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8cd0c036-22eb-405e-b57e-ec0c4424780e\\\",\\\"systemUUID\\\":\\\"bd085386-a70e-485f-9f18-00b3aef4bcca\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:14Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.535010 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.535049 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.535060 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.535083 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.535097 4795 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:14Z","lastTransitionTime":"2025-11-29T07:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:14 crc kubenswrapper[4795]: E1129 07:40:14.549496 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8cd0c036-22eb-405e-b57e-ec0c4424780e\\\",\\\"systemUUID\\\":\\\"bd085386-a70e-485f-9f18-00b3aef4bcca\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:14Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.553335 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.553377 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.553389 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.553407 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.553419 4795 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:14Z","lastTransitionTime":"2025-11-29T07:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:14 crc kubenswrapper[4795]: E1129 07:40:14.564002 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8cd0c036-22eb-405e-b57e-ec0c4424780e\\\",\\\"systemUUID\\\":\\\"bd085386-a70e-485f-9f18-00b3aef4bcca\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:14Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.567048 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.567090 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.567102 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.567118 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.567130 4795 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:14Z","lastTransitionTime":"2025-11-29T07:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:14 crc kubenswrapper[4795]: E1129 07:40:14.577877 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8cd0c036-22eb-405e-b57e-ec0c4424780e\\\",\\\"systemUUID\\\":\\\"bd085386-a70e-485f-9f18-00b3aef4bcca\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:14Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.580816 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.580847 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.580856 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.580872 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.580880 4795 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:14Z","lastTransitionTime":"2025-11-29T07:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:14 crc kubenswrapper[4795]: E1129 07:40:14.590523 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8cd0c036-22eb-405e-b57e-ec0c4424780e\\\",\\\"systemUUID\\\":\\\"bd085386-a70e-485f-9f18-00b3aef4bcca\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:14Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:14 crc kubenswrapper[4795]: E1129 07:40:14.590649 4795 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.591951 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.592041 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.592117 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.592202 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:14 crc 
kubenswrapper[4795]: I1129 07:40:14.592269 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:14Z","lastTransitionTime":"2025-11-29T07:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.694708 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.694766 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.694777 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.694793 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.694806 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:14Z","lastTransitionTime":"2025-11-29T07:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.796939 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.796975 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.796986 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.797002 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.797011 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:14Z","lastTransitionTime":"2025-11-29T07:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.899188 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.899222 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.899233 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.899251 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:14 crc kubenswrapper[4795]: I1129 07:40:14.899262 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:14Z","lastTransitionTime":"2025-11-29T07:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.001880 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.001932 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.001942 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.001959 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.001968 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:15Z","lastTransitionTime":"2025-11-29T07:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.104612 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.104655 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.104666 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.104684 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.104699 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:15Z","lastTransitionTime":"2025-11-29T07:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.207674 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.207726 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.207736 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.207750 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.207760 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:15Z","lastTransitionTime":"2025-11-29T07:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.310724 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.310813 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.310830 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.310854 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.310869 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:15Z","lastTransitionTime":"2025-11-29T07:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.414844 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.415482 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.415550 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.415652 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.415760 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:15Z","lastTransitionTime":"2025-11-29T07:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.519422 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.519473 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.519487 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.519505 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.519517 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:15Z","lastTransitionTime":"2025-11-29T07:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.622825 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.623625 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.623811 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.624010 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.624213 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:15Z","lastTransitionTime":"2025-11-29T07:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.726648 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.726688 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.726697 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.726710 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.726720 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:15Z","lastTransitionTime":"2025-11-29T07:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.830190 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.830500 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.830566 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.830666 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.830737 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:15Z","lastTransitionTime":"2025-11-29T07:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.933984 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.934029 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.934038 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.934056 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:15 crc kubenswrapper[4795]: I1129 07:40:15.934068 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:15Z","lastTransitionTime":"2025-11-29T07:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.038056 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.038103 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.038130 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.038151 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.038160 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:16Z","lastTransitionTime":"2025-11-29T07:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.140883 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.140971 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.140990 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.141020 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.141045 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:16Z","lastTransitionTime":"2025-11-29T07:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.244579 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.244640 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.244650 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.244670 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.244681 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:16Z","lastTransitionTime":"2025-11-29T07:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.275474 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.275553 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.275567 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:40:16 crc kubenswrapper[4795]: E1129 07:40:16.278585 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.278893 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:40:16 crc kubenswrapper[4795]: E1129 07:40:16.279106 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:40:16 crc kubenswrapper[4795]: E1129 07:40:16.282532 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:40:16 crc kubenswrapper[4795]: E1129 07:40:16.282344 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.347296 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.347344 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.347352 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.347368 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.347382 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:16Z","lastTransitionTime":"2025-11-29T07:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.451188 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.451245 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.451255 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.451269 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.451278 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:16Z","lastTransitionTime":"2025-11-29T07:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.553737 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.553788 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.553800 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.553821 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.553833 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:16Z","lastTransitionTime":"2025-11-29T07:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.656891 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.656938 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.656950 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.656970 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.656990 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:16Z","lastTransitionTime":"2025-11-29T07:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.759957 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.760008 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.760021 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.760045 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.760059 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:16Z","lastTransitionTime":"2025-11-29T07:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.863906 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.863951 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.863959 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.863973 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.863986 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:16Z","lastTransitionTime":"2025-11-29T07:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.966841 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.966879 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.966889 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.966907 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:16 crc kubenswrapper[4795]: I1129 07:40:16.966917 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:16Z","lastTransitionTime":"2025-11-29T07:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.069726 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.069759 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.069769 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.069784 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.069794 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:17Z","lastTransitionTime":"2025-11-29T07:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.172705 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.172747 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.172759 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.172774 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.172785 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:17Z","lastTransitionTime":"2025-11-29T07:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.275699 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.275732 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.275741 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.275755 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.275766 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:17Z","lastTransitionTime":"2025-11-29T07:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.378776 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.378812 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.378820 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.378834 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.378842 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:17Z","lastTransitionTime":"2025-11-29T07:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.482073 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.482145 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.482163 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.482189 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.482207 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:17Z","lastTransitionTime":"2025-11-29T07:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.585839 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.586300 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.586521 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.586726 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.586868 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:17Z","lastTransitionTime":"2025-11-29T07:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.689293 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.689321 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.689330 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.689345 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.689357 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:17Z","lastTransitionTime":"2025-11-29T07:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.791475 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.791510 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.791518 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.791532 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.791541 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:17Z","lastTransitionTime":"2025-11-29T07:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.894347 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.894377 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.894387 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.894400 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.894410 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:17Z","lastTransitionTime":"2025-11-29T07:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.996856 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.996897 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.996907 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.996923 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:17 crc kubenswrapper[4795]: I1129 07:40:17.996935 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:17Z","lastTransitionTime":"2025-11-29T07:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.099673 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.099727 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.099739 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.099763 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.099773 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:18Z","lastTransitionTime":"2025-11-29T07:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.202526 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.202579 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.202608 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.202654 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.202667 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:18Z","lastTransitionTime":"2025-11-29T07:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.275308 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.275361 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.275409 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.275476 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:40:18 crc kubenswrapper[4795]: E1129 07:40:18.275500 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:40:18 crc kubenswrapper[4795]: E1129 07:40:18.275632 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:40:18 crc kubenswrapper[4795]: E1129 07:40:18.275723 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:40:18 crc kubenswrapper[4795]: E1129 07:40:18.275797 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.305577 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.305686 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.305699 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.305722 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.305738 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:18Z","lastTransitionTime":"2025-11-29T07:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.408664 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.408715 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.408729 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.408748 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.408759 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:18Z","lastTransitionTime":"2025-11-29T07:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.512617 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.512673 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.512688 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.512707 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.512720 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:18Z","lastTransitionTime":"2025-11-29T07:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.617231 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.617295 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.617309 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.617329 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.617351 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:18Z","lastTransitionTime":"2025-11-29T07:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.719865 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.719913 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.719925 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.719942 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.719954 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:18Z","lastTransitionTime":"2025-11-29T07:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.822488 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.822536 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.822553 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.822578 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.822617 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:18Z","lastTransitionTime":"2025-11-29T07:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.924851 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.924909 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.924917 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.924930 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:18 crc kubenswrapper[4795]: I1129 07:40:18.924939 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:18Z","lastTransitionTime":"2025-11-29T07:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.028129 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.028191 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.028209 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.028234 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.028252 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:19Z","lastTransitionTime":"2025-11-29T07:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.130987 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.131019 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.131027 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.131041 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.131052 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:19Z","lastTransitionTime":"2025-11-29T07:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.234509 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.234555 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.234565 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.234578 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.234600 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:19Z","lastTransitionTime":"2025-11-29T07:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.276092 4795 scope.go:117] "RemoveContainer" containerID="8863f06c350a821d91544fd372bed53f0ae71b14571eb9b9e189228b3eb48e53" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.290818 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resou
rce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753
fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 
07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.303069 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c05a965f-51fb-403d-bbac-b34d4d6b2bbd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98152568d086a30f06fcd1535110482b356c587d906973de9baedd65fc85b6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34ebb0f2c455d2133d2fcf00d19e74d539e80523f2ed0b2ccc1f8eabcf2cb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9d7937efcaeb684eb229e396c7896a805f1bf205bda1cd1b8113aeda2bea5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcaeb06b249f6fa81deaf9f997ca41c9dd7f3568d458072208901c04543e272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3bcaeb06b249f6fa81deaf9f997ca41c9dd7f3568d458072208901c04543e272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.317316 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.329070 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4d032ee6e86c2b61a59ea1477aa6d97a454c754888c37db24bc49d921eacd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.337268 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.337329 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.337343 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.337399 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.337417 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:19Z","lastTransitionTime":"2025-11-29T07:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.341491 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://148610b20c1dfe43c3e5443cda520fa95ed2823afcaa880dbe879b6d93dc8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3aac368767d0534a658bc0223f5cda0a951940f2d6ff3f1c89a6ac401a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.359430 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceed8db447ae237cae16d2aa37b1651eade557ea7aaab5bc7b55678ce6e36f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce9ccf35e72987d01428b7573aafacfe90fc973009657b077f40b6fe6ddecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248e5291402287c1a589a168527135bd1df15e293a7614681fc896fcbdeb9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8112226f945ad27e0c5af4d588800322977e17efc3eac9acec2ce591f30f46fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feacad8666527640ec9d6bee04a743e99e0cc9f9e9c6432c1a70bc02e85fa039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25239d7c14973e35144f9ac1b909564749220fc32b1851b68e08434a5aca7def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8863f06c350a821d91544fd372bed53f0ae71b14571eb9b9e189228b3eb48e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8863f06c350a821d91544fd372bed53f0ae71b14571eb9b9e189228b3eb48e53\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"message\\\":\\\"twork-check-source-55646444c4-trplf\\\\nI1129 07:40:02.480885 6335 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1129 07:40:02.480888 6335 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1129 07:40:02.480892 6335 
ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1129 07:40:02.480897 6335 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1129 07:40:02.480905 6335 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1129 07:40:02.480911 6335 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1129 07:40:02.480918 6335 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF1129 07:40:02.480921 6335 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:40:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-km2g9_openshift-ovn-kubernetes(3d3ff2b2-cbaa-4309-805a-2b044f867d3a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63ac8cc3766762113c8fdfdc459f77570fc533bf636011f1e64a5c0a55b3ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb
5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-km2g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.373167 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bvmzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9be66670-47c2-4d05-bf3d-59ae6f4ff53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bvmzq\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.385294 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d047b5f2e979d7eebec46b45ef363e94e99bd7311214872c66fcce39cfd411db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{
\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.401384 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.420030 4795 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.434951 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.440054 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.440116 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:19 crc 
kubenswrapper[4795]: I1129 07:40:19.440125 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.440148 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.440167 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:19Z","lastTransitionTime":"2025-11-29T07:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.454694 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b771b13a7bbce9270d919a830c700a1a04bd1d82141a1487e7abb6b1def634cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:40:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79e6
f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.466324 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s9x86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcb60df4-ad48-4830-b8c7-c63621b96707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6d9cdb665b3bed924577ab173dcf48968a0b7217ca81b1f8708d733d27d3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s9x86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.485661 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6472ebbc-939b-4dd0-8b03-110cb9811484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad977e9d70f50688539c9d0c9cf08789584d08f77792d3dde1a619e9d958cff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2497113482ab716b22a90b7872f2837bccd4
84b765e979c2c67a394f82de688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qcmb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.517239 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"262bf41d-eaf0-42b7-946d-3223857ca705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbfc45764b29055dcafdee2e3b77893c35b23cfa3eaa9125f01850baef2fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://508f7cb88a5b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9f1344bd7c3a1bc53870f7e5f5f2ca4993df112a24aa474655cc07670e22c48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06202234f21aa27a3dc67978cc9f82738d7526100bce3fd640be08131413f69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.534060 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.541924 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.541967 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.541979 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.541995 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.542006 4795 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:19Z","lastTransitionTime":"2025-11-29T07:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.555900 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.644115 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.644178 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.644188 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.644204 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.644222 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:19Z","lastTransitionTime":"2025-11-29T07:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.747093 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.747144 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.747158 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.747180 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.747196 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:19Z","lastTransitionTime":"2025-11-29T07:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.849907 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.849942 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.849950 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.849965 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.849974 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:19Z","lastTransitionTime":"2025-11-29T07:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.952142 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.952234 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.952246 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.952263 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:19 crc kubenswrapper[4795]: I1129 07:40:19.952273 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:19Z","lastTransitionTime":"2025-11-29T07:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.054825 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.054900 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.054919 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.054949 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.054967 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:20Z","lastTransitionTime":"2025-11-29T07:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.157491 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.157558 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.157571 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.157626 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.157643 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:20Z","lastTransitionTime":"2025-11-29T07:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.260755 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.260798 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.260808 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.260823 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.260833 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:20Z","lastTransitionTime":"2025-11-29T07:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.275534 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.275606 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.275606 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:40:20 crc kubenswrapper[4795]: E1129 07:40:20.275726 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.275767 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:40:20 crc kubenswrapper[4795]: E1129 07:40:20.275871 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:40:20 crc kubenswrapper[4795]: E1129 07:40:20.276031 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:40:20 crc kubenswrapper[4795]: E1129 07:40:20.276076 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.306919 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-km2g9_3d3ff2b2-cbaa-4309-805a-2b044f867d3a/ovnkube-controller/1.log" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.310224 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" event={"ID":"3d3ff2b2-cbaa-4309-805a-2b044f867d3a","Type":"ContainerStarted","Data":"4a0056c2a13ab4a4482c049c3fb5f6fe9938979d96f9837f40fa1dc45719a09b"} Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.310732 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.325083 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:20Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.339494 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:20Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.356063 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959
e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:20Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.364235 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.364287 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.364300 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.364322 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.364335 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:20Z","lastTransitionTime":"2025-11-29T07:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.389201 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c05a965f-51fb-403d-bbac-b34d4d6b2bbd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98152568d086a30f06fcd1535110482b356c587d906973de9baedd65fc85b6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34ebb0f2c455d2133d2fcf00d19e7
4d539e80523f2ed0b2ccc1f8eabcf2cb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9d7937efcaeb684eb229e396c7896a805f1bf205bda1cd1b8113aeda2bea5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcaeb06b249f6fa81deaf9f997ca41c9dd7f3568d458072208901c04543e272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bcaeb06b249f6fa81deaf9f997ca41c9dd7f3568d458072208901c04543e272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:20Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.409613 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:20Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.423418 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4d032ee6e86c2b61a59ea1477aa6d97a454c754888c37db24bc49d921eacd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:20Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.438737 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://148610b20c1dfe43c3e5443cda520fa95ed2823afcaa880dbe879b6d93dc8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3aac368767d0534a658bc0223f5cda0a951940f2d6ff3f1c89a6ac401a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2025-11-29T07:40:20Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.458866 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceed8db447ae237cae16d2aa37b1651eade557ea7aaab5bc7b55678ce6e36f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce9ccf35e72987d01428b7573aafacfe90fc973009657b077f40b6fe6ddecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248e5291402287c1a589a168527135bd1df15e293a7614681fc896fcbdeb9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8112226f945ad27e0c5af4d588800322977e17efc3eac9acec2ce591f30f46fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feacad8666527640ec9d6bee04a743e99e0cc9f9e9c6432c1a70bc02e85fa039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25239d7c14973e35144f9ac1b909564749220fc32b1851b68e08434a5aca7def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a0056c2a13ab4a4482c049c3fb5f6fe9938979d96f9837f40fa1dc45719a09b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8863f06c350a821d91544fd372bed53f0ae71b14571eb9b9e189228b3eb48e53\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"message\\\":\\\"twork-check-source-55646444c4-trplf\\\\nI1129 07:40:02.480885 6335 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1129 07:40:02.480888 6335 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1129 07:40:02.480892 6335 
ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1129 07:40:02.480897 6335 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1129 07:40:02.480905 6335 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1129 07:40:02.480911 6335 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1129 07:40:02.480918 6335 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF1129 07:40:02.480921 6335 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:40:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63ac8cc3766762113c8fdfdc459f77570fc533bf636011f1e64a5c0a55b3ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-km2g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:20Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.466556 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.466611 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.466624 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.466645 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.466658 4795 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:20Z","lastTransitionTime":"2025-11-29T07:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.470914 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bvmzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9be66670-47c2-4d05-bf3d-59ae6f4ff53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bvmzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:20Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:20 crc 
kubenswrapper[4795]: I1129 07:40:20.483674 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:20Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:20 crc 
kubenswrapper[4795]: I1129 07:40:20.493899 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d047b5f2e979d7eebec46b45ef363e94e99bd7311214872c66fcce39cfd411db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:20Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.508147 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"262bf41d-eaf0-42b7-946d-3223857ca705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbfc45764b29055dcafdee2e3b77893c35b23cfa3eaa9125f01850baef2fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://508f7cb88a5b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9f1344bd7c3a1bc53870f7e5f5f2ca4993df112a24aa474655cc07670e22c48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-
29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06202234f21aa27a3dc67978cc9f82738d7526100bce3fd640be08131413f69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:20Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.521111 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:20Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.535216 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:20Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.550176 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b771b13a7bbce9270d919a830c700a1a04bd1d82141a1487e7abb6b1def634cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:40:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79e6
f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:20Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.561584 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s9x86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcb60df4-ad48-4830-b8c7-c63621b96707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6d9cdb665b3bed924577ab173dcf48968a0b7217ca81b1f8708d733d27d3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s9x86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:20Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.569250 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.569279 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.569289 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.569303 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.569314 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:20Z","lastTransitionTime":"2025-11-29T07:40:20Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.573680 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6472ebbc-939b-4dd0-8b03-110cb9811484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad977e9d70f50688539c9d0c9cf08789584d08f77792d3dde1a619e9d958cff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2497113482ab716b22a90b7872f2837bccd484b765e979c2c67a394f82de688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qcmb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:20Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:20 crc 
kubenswrapper[4795]: I1129 07:40:20.671360 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.671584 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.671614 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.671633 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.671643 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:20Z","lastTransitionTime":"2025-11-29T07:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.774079 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.774113 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.774122 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.774136 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.774168 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:20Z","lastTransitionTime":"2025-11-29T07:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.877438 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.877515 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.877532 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.877554 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.877573 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:20Z","lastTransitionTime":"2025-11-29T07:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.980105 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.980178 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.980193 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.980225 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:20 crc kubenswrapper[4795]: I1129 07:40:20.980243 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:20Z","lastTransitionTime":"2025-11-29T07:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.083123 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.083212 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.083230 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.083256 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.083291 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:21Z","lastTransitionTime":"2025-11-29T07:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.185965 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.186031 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.186049 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.186070 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.186086 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:21Z","lastTransitionTime":"2025-11-29T07:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.291792 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.291840 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.291849 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.291865 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.291875 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:21Z","lastTransitionTime":"2025-11-29T07:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.314935 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-km2g9_3d3ff2b2-cbaa-4309-805a-2b044f867d3a/ovnkube-controller/2.log" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.315503 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-km2g9_3d3ff2b2-cbaa-4309-805a-2b044f867d3a/ovnkube-controller/1.log" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.317908 4795 generic.go:334] "Generic (PLEG): container finished" podID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerID="4a0056c2a13ab4a4482c049c3fb5f6fe9938979d96f9837f40fa1dc45719a09b" exitCode=1 Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.317960 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" event={"ID":"3d3ff2b2-cbaa-4309-805a-2b044f867d3a","Type":"ContainerDied","Data":"4a0056c2a13ab4a4482c049c3fb5f6fe9938979d96f9837f40fa1dc45719a09b"} Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.318006 4795 scope.go:117] "RemoveContainer" containerID="8863f06c350a821d91544fd372bed53f0ae71b14571eb9b9e189228b3eb48e53" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.318740 4795 scope.go:117] "RemoveContainer" containerID="4a0056c2a13ab4a4482c049c3fb5f6fe9938979d96f9837f40fa1dc45719a09b" Nov 29 07:40:21 crc kubenswrapper[4795]: E1129 07:40:21.318951 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-km2g9_openshift-ovn-kubernetes(3d3ff2b2-cbaa-4309-805a-2b044f867d3a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.333096 4795 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.341736 4795 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d047b5f2e979d7eebec46b45ef363e94e99bd7311214872c66fcce39cfd411db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.353932 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.369499 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b771b13a7bbce9270d919a830c700a1a04bd1d82141a1487e7abb6b1def634cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:40:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79e6
f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.379090 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s9x86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcb60df4-ad48-4830-b8c7-c63621b96707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6d9cdb665b3bed924577ab173dcf48968a0b7217ca81b1f8708d733d27d3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s9x86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.389367 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6472ebbc-939b-4dd0-8b03-110cb9811484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad977e9d70f50688539c9d0c9cf08789584d08f77792d3dde1a619e9d958cff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2497113482ab716b22a90b7872f2837bccd4
84b765e979c2c67a394f82de688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qcmb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.398274 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.398320 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.398332 4795 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.398349 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.398359 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:21Z","lastTransitionTime":"2025-11-29T07:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.400153 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"262bf41d-eaf0-42b7-946d-3223857ca705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbfc45764b29055dcafdee2e3b77893c35b23cfa3eaa9125f01850baef2fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://508f7cb88a5b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9f1344bd7c3a1bc53870f7e5f5f2ca4993df112a24aa474655cc07670e22c48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06202234f21aa27a3dc67978cc9f82738d7526100bce3fd640be08131413f69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.412483 4795 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.423820 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.435385 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.448137 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959
e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.458411 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c05a965f-51fb-403d-bbac-b34d4d6b2bbd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98152568d086a30f06fcd1535110482b356c587d906973de9baedd65fc85b6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34ebb0f2c455d2133d2fcf00d19e74d539e80523f2ed0b2ccc1f8eabcf2cb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9d7937efcaeb684eb229e396c7896a805f1bf205bda1cd1b8113aeda2bea5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcaeb06b249f6fa81deaf9f997ca41c9dd7f3568d458072208901c04543e272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3bcaeb06b249f6fa81deaf9f997ca41c9dd7f3568d458072208901c04543e272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.471524 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.482899 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4d032ee6e86c2b61a59ea1477aa6d97a454c754888c37db24bc49d921eacd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.494984 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://148610b20c1dfe43c3e5443cda520fa95ed2823afcaa880dbe879b6d93dc8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3aac368767d0534a658bc0223f5cda0a951940f2d6ff3f1c89a6ac401a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2025-11-29T07:40:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.499866 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.499910 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.499923 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.499938 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.500275 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:21Z","lastTransitionTime":"2025-11-29T07:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.513643 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceed8db447ae237cae16d2aa37b1651eade557ea7aaab5bc7b55678ce6e36f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce9ccf35e72987d01428b7573aafacfe90fc973009657b077f40b6fe6ddecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248e5291402287c1a589a168527135bd1df15e293a7614681fc896fcbdeb9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8112226f945ad27e0c5af4d588800322977e17efc3eac9acec2ce591f30f46fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feacad8666527640ec9d6bee04a743e99e0cc9f9e9c6432c1a70bc02e85fa039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25239d7c14973e35144f9ac1b909564749220fc32b1851b68e08434a5aca7def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a0056c2a13ab4a4482c049c3fb5f6fe9938979d96f9837f40fa1dc45719a09b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8863f06c350a821d91544fd372bed53f0ae71b14571eb9b9e189228b3eb48e53\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:40:04Z\\\",\\\"message\\\":\\\"twork-check-source-55646444c4-trplf\\\\nI1129 07:40:02.480885 6335 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1129 07:40:02.480888 6335 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1129 07:40:02.480892 6335 
ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1129 07:40:02.480897 6335 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1129 07:40:02.480905 6335 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1129 07:40:02.480911 6335 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1129 07:40:02.480918 6335 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF1129 07:40:02.480921 6335 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:40:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a0056c2a13ab4a4482c049c3fb5f6fe9938979d96f9837f40fa1dc45719a09b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:40:20Z\\\",\\\"message\\\":\\\"Policy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:20.066807 6580 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:20.066708 6580 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:20.067272 6580 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 
07:40:20.067564 6580 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:40:20.067747 6580 factory.go:656] Stopping watch factory\\\\nI1129 07:40:20.067868 6580 services_controller.go:214] Setting up event handlers for endpoint slices for network=default\\\\nI1129 07:40:20.067909 6580 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:40:20.068426 6580 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:40:20.068484 6580 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:40:20.068569 6580 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\
\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63ac8cc3766762113c8fdfdc459f77570fc533bf636011f1e64a5c0a55b3ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-km2g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.524454 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bvmzq" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9be66670-47c2-4d05-bf3d-59ae6f4ff53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bvmzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:21Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:21 crc 
kubenswrapper[4795]: I1129 07:40:21.602804 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.602839 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.602848 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.602864 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.602874 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:21Z","lastTransitionTime":"2025-11-29T07:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.704635 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.704688 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.704703 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.704719 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.704731 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:21Z","lastTransitionTime":"2025-11-29T07:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.806893 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.806938 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.806949 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.806965 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.806976 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:21Z","lastTransitionTime":"2025-11-29T07:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.908848 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.908904 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.908923 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.908940 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:21 crc kubenswrapper[4795]: I1129 07:40:21.908952 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:21Z","lastTransitionTime":"2025-11-29T07:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.011750 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.011924 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.011957 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.011989 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.012015 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:22Z","lastTransitionTime":"2025-11-29T07:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.113886 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.113974 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.113983 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.113996 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.114005 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:22Z","lastTransitionTime":"2025-11-29T07:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.215973 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.216047 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.216066 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.216092 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.216110 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:22Z","lastTransitionTime":"2025-11-29T07:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.274936 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.274979 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.274942 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:40:22 crc kubenswrapper[4795]: E1129 07:40:22.275063 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.275077 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:40:22 crc kubenswrapper[4795]: E1129 07:40:22.275162 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:40:22 crc kubenswrapper[4795]: E1129 07:40:22.275230 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:40:22 crc kubenswrapper[4795]: E1129 07:40:22.275322 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.318314 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.318348 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.318357 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.318370 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.318396 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:22Z","lastTransitionTime":"2025-11-29T07:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.320989 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-km2g9_3d3ff2b2-cbaa-4309-805a-2b044f867d3a/ovnkube-controller/2.log" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.323486 4795 scope.go:117] "RemoveContainer" containerID="4a0056c2a13ab4a4482c049c3fb5f6fe9938979d96f9837f40fa1dc45719a09b" Nov 29 07:40:22 crc kubenswrapper[4795]: E1129 07:40:22.323657 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-km2g9_openshift-ovn-kubernetes(3d3ff2b2-cbaa-4309-805a-2b044f867d3a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.337372 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"262bf41d-eaf0-42b7-946d-3223857ca705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbfc45764b29055dcafdee2e3b77893c35b23cfa3eaa9125f01850baef2fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://508f7cb88a5b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9f1344bd7c3a1bc53870f7e5f5f2ca4993df112a24aa474655cc07670e22c48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06202234f21aa27a3dc67978cc9f82738d7526100bce3fd640be08131413f69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:22Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.349974 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:22Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.361455 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:22Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.379570 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b771b13a7bbce9270d919a830c700a1a04bd1d82141a1487e7abb6b1def634cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:40:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79e6
f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:22Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.391386 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s9x86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcb60df4-ad48-4830-b8c7-c63621b96707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6d9cdb665b3bed924577ab173dcf48968a0b7217ca81b1f8708d733d27d3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s9x86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:22Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.403989 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6472ebbc-939b-4dd0-8b03-110cb9811484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad977e9d70f50688539c9d0c9cf08789584d08f77792d3dde1a619e9d958cff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2497113482ab716b22a90b7872f2837bccd4
84b765e979c2c67a394f82de688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qcmb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:22Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.415795 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:22Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.420441 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.420485 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.420497 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.420515 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.420528 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:22Z","lastTransitionTime":"2025-11-29T07:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.427286 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:22Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.437467 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://148610b20c1dfe43c3e5443cda520fa95ed2823afcaa880dbe879b6d93dc8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3aac368767d0534a658bc0223f5cda0a95194
0f2d6ff3f1c89a6ac401a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:22Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.453341 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceed8db447ae237cae16d2aa37b1651eade557ea7aaab5bc7b55678ce6e36f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce9ccf35e72987d01428b7573aafacfe90fc973009657b077f40b6fe6ddecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248e5291402287c1a589a168527135bd1df15e293a7614681fc896fcbdeb9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8112226f945ad27e0c5af4d588800322977e17efc3eac9acec2ce591f30f46fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feacad8666527640ec9d6bee04a743e99e0cc9f9e9c6432c1a70bc02e85fa039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25239d7c14973e35144f9ac1b909564749220fc32b1851b68e08434a5aca7def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a0056c2a13ab4a4482c049c3fb5f6fe9938979d96f9837f40fa1dc45719a09b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a0056c2a13ab4a4482c049c3fb5f6fe9938979d96f9837f40fa1dc45719a09b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:40:20Z\\\",\\\"message\\\":\\\"Policy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:20.066807 6580 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:20.066708 6580 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 
07:40:20.067272 6580 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:40:20.067564 6580 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:40:20.067747 6580 factory.go:656] Stopping watch factory\\\\nI1129 07:40:20.067868 6580 services_controller.go:214] Setting up event handlers for endpoint slices for network=default\\\\nI1129 07:40:20.067909 6580 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:40:20.068426 6580 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:40:20.068484 6580 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:40:20.068569 6580 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:40:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-km2g9_openshift-ovn-kubernetes(3d3ff2b2-cbaa-4309-805a-2b044f867d3a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63ac8cc3766762113c8fdfdc459f77570fc533bf636011f1e64a5c0a55b3ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb
5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-km2g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:22Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.468194 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959
e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:22Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.481634 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c05a965f-51fb-403d-bbac-b34d4d6b2bbd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98152568d086a30f06fcd1535110482b356c587d906973de9baedd65fc85b6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34ebb0f2c455d2133d2fcf00d19e74d539e80523f2ed0b2ccc1f8eabcf2cb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9d7937efcaeb684eb229e396c7896a805f1bf205bda1cd1b8113aeda2bea5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcaeb06b249f6fa81deaf9f997ca41c9dd7f3568d458072208901c04543e272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3bcaeb06b249f6fa81deaf9f997ca41c9dd7f3568d458072208901c04543e272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:22Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.494217 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:22Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.504342 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4d032ee6e86c2b61a59ea1477aa6d97a454c754888c37db24bc49d921eacd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:22Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.514722 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bvmzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9be66670-47c2-4d05-bf3d-59ae6f4ff53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bvmzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:22Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:22 crc 
kubenswrapper[4795]: I1129 07:40:22.522843 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.522879 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.522891 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.522906 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.522916 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:22Z","lastTransitionTime":"2025-11-29T07:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.525369 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:22Z 
is after 2025-08-24T17:21:41Z" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.534504 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d047b5f2e979d7eebec46b45ef363e94e99bd7311214872c66fcce39cfd411db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:22Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.624557 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.624581 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.624616 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.624629 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.624637 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:22Z","lastTransitionTime":"2025-11-29T07:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.727164 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.727201 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.727228 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.727241 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.727249 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:22Z","lastTransitionTime":"2025-11-29T07:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.829427 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.829463 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.829472 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.829485 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.829494 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:22Z","lastTransitionTime":"2025-11-29T07:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.931638 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.931669 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.931677 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.931690 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:22 crc kubenswrapper[4795]: I1129 07:40:22.931699 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:22Z","lastTransitionTime":"2025-11-29T07:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.033855 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.033887 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.033897 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.033911 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.033921 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:23Z","lastTransitionTime":"2025-11-29T07:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.136181 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.136325 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.136341 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.136358 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.136372 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:23Z","lastTransitionTime":"2025-11-29T07:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.238450 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.238487 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.238497 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.238528 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.238538 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:23Z","lastTransitionTime":"2025-11-29T07:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.340553 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.340608 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.340619 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.340635 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.340643 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:23Z","lastTransitionTime":"2025-11-29T07:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.443248 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.443301 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.443311 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.443328 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.443338 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:23Z","lastTransitionTime":"2025-11-29T07:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.545422 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.545485 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.545498 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.545518 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.545533 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:23Z","lastTransitionTime":"2025-11-29T07:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.648255 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.648288 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.648296 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.648309 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.648318 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:23Z","lastTransitionTime":"2025-11-29T07:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.750713 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.750754 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.750761 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.750776 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.750785 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:23Z","lastTransitionTime":"2025-11-29T07:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.853400 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.853448 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.853458 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.853472 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.853481 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:23Z","lastTransitionTime":"2025-11-29T07:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.956904 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.956950 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.956960 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.956974 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:23 crc kubenswrapper[4795]: I1129 07:40:23.956987 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:23Z","lastTransitionTime":"2025-11-29T07:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.059187 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.059216 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.059241 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.059257 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.059267 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:24Z","lastTransitionTime":"2025-11-29T07:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.161881 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.161920 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.161930 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.161945 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.161954 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:24Z","lastTransitionTime":"2025-11-29T07:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.264377 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.264421 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.264433 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.264450 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.264464 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:24Z","lastTransitionTime":"2025-11-29T07:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.274898 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:40:24 crc kubenswrapper[4795]: E1129 07:40:24.275085 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.275199 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.275294 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:40:24 crc kubenswrapper[4795]: E1129 07:40:24.275433 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:40:24 crc kubenswrapper[4795]: E1129 07:40:24.275648 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.275833 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:40:24 crc kubenswrapper[4795]: E1129 07:40:24.275989 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.296180 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b771b13a7bbce9270d919a830c700a1a04bd1d82141a1487e7abb6b1def634cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:40:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountP
ath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-
xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:24Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.308829 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s9x86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcb60df4-ad48-4830-b8c7-c63621b96707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6d9cdb665b3bed924577ab173dcf48968a0b7217ca81b1f8708d733d27d3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18
e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s9x86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:24Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.324853 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6472ebbc-939b-4dd0-8b03-110cb9811484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad977e9d70f50688539c9d0c9cf08789584d08f77792d3dde1a619e9d958cff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2497113482ab716b22a90b7872f2837bccd4
84b765e979c2c67a394f82de688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qcmb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:24Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.341121 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"262bf41d-eaf0-42b7-946d-3223857ca705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbfc45764b29055dcafdee2e3b77893c35b23cfa3eaa9125f01850baef2fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://508f7cb88a5b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9f1344bd7c3a1bc53870f7e5f5f2ca4993df112a24aa474655cc07670e22c48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06202234f21aa27a3dc67978cc9f82738d7526100bce3fd640be08131413f69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:24Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.356284 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:24Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.367978 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.368026 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.368039 4795 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.368058 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.368071 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:24Z","lastTransitionTime":"2025-11-29T07:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.370864 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:24Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.385550 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:24Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.403419 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:24Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.418221 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c05a965f-51fb-403d-bbac-b34d4d6b2bbd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98152568d086a30f06fcd1535110482b356c587d906973de9baedd65fc85b6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34ebb0f2c455d2133d2fcf00d19e74d539e80523f2ed0b2ccc1f8eabcf2cb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9d7937efcaeb684eb229e396c7896a805f1bf205bda1cd1b8113aeda2bea5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcaeb06b249f6fa81deaf9f997ca41c9dd7f3568d458072208901c04543e272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3bcaeb06b249f6fa81deaf9f997ca41c9dd7f3568d458072208901c04543e272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:24Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.429464 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:24Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.440193 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4d032ee6e86c2b61a59ea1477aa6d97a454c754888c37db24bc49d921eacd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:24Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.449768 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://148610b20c1dfe43c3e5443cda520fa95ed2823afcaa880dbe879b6d93dc8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3aac368767d0534a658bc0223f5cda0a951940f2d6ff3f1c89a6ac401a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2025-11-29T07:40:24Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.470413 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.470490 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.470509 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.470539 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.470553 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:24Z","lastTransitionTime":"2025-11-29T07:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.471283 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceed8db447ae237cae16d2aa37b1651eade557ea7aaab5bc7b55678ce6e36f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce9ccf35e72987d01428b7573aafacfe90fc973009657b077f40b6fe6ddecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248e5291402287c1a589a168527135bd1df15e293a7614681fc896fcbdeb9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8112226f945ad27e0c5af4d588800322977e17efc3eac9acec2ce591f30f46fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feacad8666527640ec9d6bee04a743e99e0cc9f9e9c6432c1a70bc02e85fa039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25239d7c14973e35144f9ac1b909564749220fc32b1851b68e08434a5aca7def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a0056c2a13ab4a4482c049c3fb5f6fe9938979d96f9837f40fa1dc45719a09b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a0056c2a13ab4a4482c049c3fb5f6fe9938979d96f9837f40fa1dc45719a09b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:40:20Z\\\",\\\"message\\\":\\\"Policy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:20.066807 6580 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:20.066708 6580 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 
07:40:20.067272 6580 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:40:20.067564 6580 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:40:20.067747 6580 factory.go:656] Stopping watch factory\\\\nI1129 07:40:20.067868 6580 services_controller.go:214] Setting up event handlers for endpoint slices for network=default\\\\nI1129 07:40:20.067909 6580 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:40:20.068426 6580 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:40:20.068484 6580 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:40:20.068569 6580 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:40:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-km2g9_openshift-ovn-kubernetes(3d3ff2b2-cbaa-4309-805a-2b044f867d3a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63ac8cc3766762113c8fdfdc459f77570fc533bf636011f1e64a5c0a55b3ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb
5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-km2g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:24Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.485106 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959
e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:24Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.496325 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bvmzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9be66670-47c2-4d05-bf3d-59ae6f4ff53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bvmzq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:24Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.509499 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:24Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.519119 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d047b5f2e979d7eebec46b45ef363e94e99bd7311214872c66fcce39cfd411db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T
07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:24Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.573816 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.573869 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.573882 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.573898 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.573909 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:24Z","lastTransitionTime":"2025-11-29T07:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.636386 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.636432 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.636441 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.636456 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.636467 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:24Z","lastTransitionTime":"2025-11-29T07:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:24 crc kubenswrapper[4795]: E1129 07:40:24.647666 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8cd0c036-22eb-405e-b57e-ec0c4424780e\\\",\\\"systemUUID\\\":\\\"bd085386-a70e-485f-9f18-00b3aef4bcca\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:24Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.652092 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.652139 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.652255 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.652283 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.652296 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:24Z","lastTransitionTime":"2025-11-29T07:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:24 crc kubenswrapper[4795]: E1129 07:40:24.667194 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8cd0c036-22eb-405e-b57e-ec0c4424780e\\\",\\\"systemUUID\\\":\\\"bd085386-a70e-485f-9f18-00b3aef4bcca\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:24Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.670834 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.670855 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.670864 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.670880 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.670889 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:24Z","lastTransitionTime":"2025-11-29T07:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:24 crc kubenswrapper[4795]: E1129 07:40:24.682844 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8cd0c036-22eb-405e-b57e-ec0c4424780e\\\",\\\"systemUUID\\\":\\\"bd085386-a70e-485f-9f18-00b3aef4bcca\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:24Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.686423 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.686449 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.686462 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.686477 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.686488 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:24Z","lastTransitionTime":"2025-11-29T07:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:24 crc kubenswrapper[4795]: E1129 07:40:24.698006 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8cd0c036-22eb-405e-b57e-ec0c4424780e\\\",\\\"systemUUID\\\":\\\"bd085386-a70e-485f-9f18-00b3aef4bcca\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:24Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.701314 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.701336 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.701346 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.701357 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.701365 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:24Z","lastTransitionTime":"2025-11-29T07:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:24 crc kubenswrapper[4795]: E1129 07:40:24.713915 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8cd0c036-22eb-405e-b57e-ec0c4424780e\\\",\\\"systemUUID\\\":\\\"bd085386-a70e-485f-9f18-00b3aef4bcca\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:24Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:24 crc kubenswrapper[4795]: E1129 07:40:24.714252 4795 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.719849 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.719884 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.719894 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.719908 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.719919 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:24Z","lastTransitionTime":"2025-11-29T07:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.822274 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.822556 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.822686 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.822774 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.822871 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:24Z","lastTransitionTime":"2025-11-29T07:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.925367 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.925402 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.925413 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.925428 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:24 crc kubenswrapper[4795]: I1129 07:40:24.925439 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:24Z","lastTransitionTime":"2025-11-29T07:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.030843 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.030876 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.030886 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.030901 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.030911 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:25Z","lastTransitionTime":"2025-11-29T07:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.133633 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.133690 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.133702 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.133720 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.133733 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:25Z","lastTransitionTime":"2025-11-29T07:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.235433 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.235476 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.235487 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.235502 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.235513 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:25Z","lastTransitionTime":"2025-11-29T07:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.337065 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.337094 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.337105 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.337118 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.337127 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:25Z","lastTransitionTime":"2025-11-29T07:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.439528 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.439558 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.439567 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.439581 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.439610 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:25Z","lastTransitionTime":"2025-11-29T07:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.541883 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.542136 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.542240 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.542334 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.542437 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:25Z","lastTransitionTime":"2025-11-29T07:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.644408 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.644954 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.645041 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.645119 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.645194 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:25Z","lastTransitionTime":"2025-11-29T07:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.747563 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.747624 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.747638 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.747654 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.747666 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:25Z","lastTransitionTime":"2025-11-29T07:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.849963 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.850252 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.850332 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.850411 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.850490 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:25Z","lastTransitionTime":"2025-11-29T07:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.953460 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.953763 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.953853 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.954159 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:25 crc kubenswrapper[4795]: I1129 07:40:25.954234 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:25Z","lastTransitionTime":"2025-11-29T07:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.056461 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.056524 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.056536 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.056551 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.056562 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:26Z","lastTransitionTime":"2025-11-29T07:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.158371 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.158764 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.158886 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.158999 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.159079 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:26Z","lastTransitionTime":"2025-11-29T07:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.261332 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.261362 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.261370 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.261383 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.261391 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:26Z","lastTransitionTime":"2025-11-29T07:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.275114 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:40:26 crc kubenswrapper[4795]: E1129 07:40:26.275195 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.275263 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.275412 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:40:26 crc kubenswrapper[4795]: E1129 07:40:26.275555 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.275662 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:40:26 crc kubenswrapper[4795]: E1129 07:40:26.275841 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:40:26 crc kubenswrapper[4795]: E1129 07:40:26.276126 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.365008 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.365063 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.365080 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.365101 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.365119 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:26Z","lastTransitionTime":"2025-11-29T07:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.468938 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.468987 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.469003 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.469030 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.469047 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:26Z","lastTransitionTime":"2025-11-29T07:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.571676 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.571738 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.571750 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.571774 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.571788 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:26Z","lastTransitionTime":"2025-11-29T07:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.674751 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.674813 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.674825 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.674844 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.674856 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:26Z","lastTransitionTime":"2025-11-29T07:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.714427 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9be66670-47c2-4d05-bf3d-59ae6f4ff53b-metrics-certs\") pod \"network-metrics-daemon-bvmzq\" (UID: \"9be66670-47c2-4d05-bf3d-59ae6f4ff53b\") " pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:40:26 crc kubenswrapper[4795]: E1129 07:40:26.714567 4795 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:40:26 crc kubenswrapper[4795]: E1129 07:40:26.714653 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9be66670-47c2-4d05-bf3d-59ae6f4ff53b-metrics-certs podName:9be66670-47c2-4d05-bf3d-59ae6f4ff53b nodeName:}" failed. No retries permitted until 2025-11-29 07:40:58.714633229 +0000 UTC m=+104.690209009 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9be66670-47c2-4d05-bf3d-59ae6f4ff53b-metrics-certs") pod "network-metrics-daemon-bvmzq" (UID: "9be66670-47c2-4d05-bf3d-59ae6f4ff53b") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.777166 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.777227 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.777239 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.777254 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.777287 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:26Z","lastTransitionTime":"2025-11-29T07:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.880865 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.881216 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.881231 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.881274 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.881299 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:26Z","lastTransitionTime":"2025-11-29T07:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.983696 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.983729 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.983740 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.983754 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:26 crc kubenswrapper[4795]: I1129 07:40:26.983762 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:26Z","lastTransitionTime":"2025-11-29T07:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.085838 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.086227 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.086360 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.086489 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.086656 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:27Z","lastTransitionTime":"2025-11-29T07:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.188815 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.189069 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.189153 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.189225 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.189287 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:27Z","lastTransitionTime":"2025-11-29T07:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.292864 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.292917 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.292928 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.292946 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.292964 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:27Z","lastTransitionTime":"2025-11-29T07:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.394890 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.394940 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.394952 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.394970 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.394982 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:27Z","lastTransitionTime":"2025-11-29T07:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.497111 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.497182 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.497193 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.497210 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.497222 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:27Z","lastTransitionTime":"2025-11-29T07:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.599720 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.600026 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.600122 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.600204 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.600272 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:27Z","lastTransitionTime":"2025-11-29T07:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.702355 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.702963 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.703067 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.703154 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.703254 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:27Z","lastTransitionTime":"2025-11-29T07:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.806455 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.806992 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.807150 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.807329 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.807484 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:27Z","lastTransitionTime":"2025-11-29T07:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.909517 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.909555 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.909568 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.909625 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:27 crc kubenswrapper[4795]: I1129 07:40:27.909640 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:27Z","lastTransitionTime":"2025-11-29T07:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.012134 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.012178 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.012192 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.012211 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.012225 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:28Z","lastTransitionTime":"2025-11-29T07:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.114317 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.114349 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.114356 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.114370 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.114378 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:28Z","lastTransitionTime":"2025-11-29T07:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.216707 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.216752 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.216764 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.216782 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.216796 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:28Z","lastTransitionTime":"2025-11-29T07:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.275210 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.275245 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.275254 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:40:28 crc kubenswrapper[4795]: E1129 07:40:28.275338 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:40:28 crc kubenswrapper[4795]: E1129 07:40:28.275504 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:40:28 crc kubenswrapper[4795]: E1129 07:40:28.275626 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.275817 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:40:28 crc kubenswrapper[4795]: E1129 07:40:28.276033 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.319226 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.319271 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.319279 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.319294 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.319307 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:28Z","lastTransitionTime":"2025-11-29T07:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.421143 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.421185 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.421196 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.421210 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.421222 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:28Z","lastTransitionTime":"2025-11-29T07:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.523491 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.523537 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.523548 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.523564 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.523576 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:28Z","lastTransitionTime":"2025-11-29T07:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.627145 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.627195 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.627208 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.627225 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.627237 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:28Z","lastTransitionTime":"2025-11-29T07:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.730346 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.730385 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.730396 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.730411 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.730420 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:28Z","lastTransitionTime":"2025-11-29T07:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.832754 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.833211 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.833451 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.833749 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.834075 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:28Z","lastTransitionTime":"2025-11-29T07:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.936434 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.936504 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.936523 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.936546 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:28 crc kubenswrapper[4795]: I1129 07:40:28.936564 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:28Z","lastTransitionTime":"2025-11-29T07:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.039127 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.039162 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.039173 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.039189 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.039204 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:29Z","lastTransitionTime":"2025-11-29T07:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.141847 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.141906 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.141925 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.141950 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.141969 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:29Z","lastTransitionTime":"2025-11-29T07:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.244750 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.244786 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.244797 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.244812 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.244827 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:29Z","lastTransitionTime":"2025-11-29T07:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.346872 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.347225 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.347416 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.347578 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.347774 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:29Z","lastTransitionTime":"2025-11-29T07:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.451405 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.451463 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.451475 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.451490 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.451501 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:29Z","lastTransitionTime":"2025-11-29T07:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.553245 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.553482 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.553551 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.553641 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.553699 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:29Z","lastTransitionTime":"2025-11-29T07:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.656291 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.656361 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.656380 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.656404 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.656420 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:29Z","lastTransitionTime":"2025-11-29T07:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.759123 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.759155 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.759165 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.759178 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.759187 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:29Z","lastTransitionTime":"2025-11-29T07:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.861905 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.861966 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.861978 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.862000 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.862013 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:29Z","lastTransitionTime":"2025-11-29T07:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.965364 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.965436 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.965447 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.965467 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:29 crc kubenswrapper[4795]: I1129 07:40:29.965482 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:29Z","lastTransitionTime":"2025-11-29T07:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.068785 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.068840 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.068850 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.068866 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.068877 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:30Z","lastTransitionTime":"2025-11-29T07:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.172058 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.172101 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.172114 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.172137 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.172152 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:30Z","lastTransitionTime":"2025-11-29T07:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.275571 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.275742 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.275893 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:40:30 crc kubenswrapper[4795]: E1129 07:40:30.276042 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.276167 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.276257 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.276284 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:30 crc kubenswrapper[4795]: E1129 07:40:30.276285 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.276322 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.276378 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:30Z","lastTransitionTime":"2025-11-29T07:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:30 crc kubenswrapper[4795]: E1129 07:40:30.276426 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.276756 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:40:30 crc kubenswrapper[4795]: E1129 07:40:30.276976 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.381843 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.381908 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.381928 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.381951 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.381970 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:30Z","lastTransitionTime":"2025-11-29T07:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.484795 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.484846 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.484857 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.484875 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.484886 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:30Z","lastTransitionTime":"2025-11-29T07:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.587475 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.587562 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.587574 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.587635 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.587651 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:30Z","lastTransitionTime":"2025-11-29T07:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.690583 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.690694 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.690722 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.690754 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.690777 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:30Z","lastTransitionTime":"2025-11-29T07:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.793476 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.793529 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.793548 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.793570 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.793608 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:30Z","lastTransitionTime":"2025-11-29T07:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.896259 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.896321 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.896337 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.896359 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.896375 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:30Z","lastTransitionTime":"2025-11-29T07:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.998532 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.998582 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.998626 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.998643 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:30 crc kubenswrapper[4795]: I1129 07:40:30.998655 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:30Z","lastTransitionTime":"2025-11-29T07:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.100267 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.100303 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.100329 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.100346 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.100356 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:31Z","lastTransitionTime":"2025-11-29T07:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.202299 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.202334 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.202344 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.202445 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.202461 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:31Z","lastTransitionTime":"2025-11-29T07:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.304977 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.305022 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.305034 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.305050 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.305062 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:31Z","lastTransitionTime":"2025-11-29T07:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.406727 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.406772 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.406789 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.406805 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.406815 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:31Z","lastTransitionTime":"2025-11-29T07:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.509260 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.509320 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.509334 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.509355 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.509374 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:31Z","lastTransitionTime":"2025-11-29T07:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.612003 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.612092 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.612119 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.612151 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.612178 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:31Z","lastTransitionTime":"2025-11-29T07:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.715935 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.716001 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.716019 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.716045 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.716064 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:31Z","lastTransitionTime":"2025-11-29T07:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.820060 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.820143 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.820158 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.820212 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.820230 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:31Z","lastTransitionTime":"2025-11-29T07:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.922864 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.922917 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.922930 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.922957 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:31 crc kubenswrapper[4795]: I1129 07:40:31.922982 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:31Z","lastTransitionTime":"2025-11-29T07:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.026534 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.026643 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.026662 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.026686 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.026703 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:32Z","lastTransitionTime":"2025-11-29T07:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.130679 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.130724 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.130736 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.130753 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.130766 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:32Z","lastTransitionTime":"2025-11-29T07:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.234110 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.234186 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.234203 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.234235 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.234256 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:32Z","lastTransitionTime":"2025-11-29T07:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.274826 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:40:32 crc kubenswrapper[4795]: E1129 07:40:32.275185 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.275537 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:40:32 crc kubenswrapper[4795]: E1129 07:40:32.275735 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.276009 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.276050 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:40:32 crc kubenswrapper[4795]: E1129 07:40:32.276240 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:40:32 crc kubenswrapper[4795]: E1129 07:40:32.276381 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.336505 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.336548 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.336556 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.336573 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.336582 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:32Z","lastTransitionTime":"2025-11-29T07:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.357879 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hbg2m_50b9c3ea-4ff5-434f-803c-2365a0938f9a/kube-multus/0.log" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.357968 4795 generic.go:334] "Generic (PLEG): container finished" podID="50b9c3ea-4ff5-434f-803c-2365a0938f9a" containerID="46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89" exitCode=1 Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.358026 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hbg2m" event={"ID":"50b9c3ea-4ff5-434f-803c-2365a0938f9a","Type":"ContainerDied","Data":"46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89"} Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.358760 4795 scope.go:117] "RemoveContainer" containerID="46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.379633 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"262bf41d-eaf0-42b7-946d-3223857ca705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbfc45764b29055dcafdee2e3b77893c35b23cfa3eaa9125f01850baef2fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://508f7cb88a5b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9f1344bd7c3a1bc53870f7e5f5f2ca4993df112a24aa474655cc07670e22c48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06202234f21aa27a3dc67978cc9f82738d7526100bce3fd640be08131413f69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.399839 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.414779 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.436458 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b771b13a7bbce9270d919a830c700a1a04bd1d82141a1487e7abb6b1def634cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:40:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79e6
f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.440205 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.440242 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.440256 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.440278 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.440295 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:32Z","lastTransitionTime":"2025-11-29T07:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.453697 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s9x86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcb60df4-ad48-4830-b8c7-c63621b96707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6d9cdb665b3bed924577ab173dcf48968a0b7217ca81b1f8708d733d27d3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s9x86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.468104 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6472ebbc-939b-4dd0-8b03-110cb9811484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad977e9d7
0f50688539c9d0c9cf08789584d08f77792d3dde1a619e9d958cff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2497113482ab716b22a90b7872f2837bccd484b765e979c2c67a394f82de688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qcmb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.480332 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.496255 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.518671 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceed8db447ae237cae16d2aa37b1651eade557ea7aaab5bc7b55678ce6e36f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce9ccf35e72987d01428b7573aafacfe90fc973009657b077f40b6fe6ddecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248e5291402287c1a589a168527135bd1df15e293a7614681fc896fcbdeb9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8112226f945ad27e0c5af4d588800322977e17efc3eac9acec2ce591f30f46fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feacad8666527640ec9d6bee04a743e99e0cc9f9e9c6432c1a70bc02e85fa039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25239d7c14973e35144f9ac1b909564749220fc32b1851b68e08434a5aca7def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a0056c2a13ab4a4482c049c3fb5f6fe9938979d96f9837f40fa1dc45719a09b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a0056c2a13ab4a4482c049c3fb5f6fe9938979d96f9837f40fa1dc45719a09b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:40:20Z\\\",\\\"message\\\":\\\"Policy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:20.066807 6580 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:20.066708 6580 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 
07:40:20.067272 6580 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:40:20.067564 6580 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:40:20.067747 6580 factory.go:656] Stopping watch factory\\\\nI1129 07:40:20.067868 6580 services_controller.go:214] Setting up event handlers for endpoint slices for network=default\\\\nI1129 07:40:20.067909 6580 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:40:20.068426 6580 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:40:20.068484 6580 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:40:20.068569 6580 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:40:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-km2g9_openshift-ovn-kubernetes(3d3ff2b2-cbaa-4309-805a-2b044f867d3a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63ac8cc3766762113c8fdfdc459f77570fc533bf636011f1e64a5c0a55b3ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb
5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-km2g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.537127 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959
e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.546372 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.546409 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.546426 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.546448 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.546462 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:32Z","lastTransitionTime":"2025-11-29T07:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.552012 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c05a965f-51fb-403d-bbac-b34d4d6b2bbd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98152568d086a30f06fcd1535110482b356c587d906973de9baedd65fc85b6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34ebb0f2c455d2133d2fcf00d19e7
4d539e80523f2ed0b2ccc1f8eabcf2cb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9d7937efcaeb684eb229e396c7896a805f1bf205bda1cd1b8113aeda2bea5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcaeb06b249f6fa81deaf9f997ca41c9dd7f3568d458072208901c04543e272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bcaeb06b249f6fa81deaf9f997ca41c9dd7f3568d458072208901c04543e272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.582431 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.599354 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4d032ee6e86c2b61a59ea1477aa6d97a454c754888c37db24bc49d921eacd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.614537 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://148610b20c1dfe43c3e5443cda520fa95ed2823afcaa880dbe879b6d93dc8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3aac368767d0534a658bc0223f5cda0a951940f2d6ff3f1c89a6ac401a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2025-11-29T07:40:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.625278 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bvmzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9be66670-47c2-4d05-bf3d-59ae6f4ff53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bvmzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:32 crc 
kubenswrapper[4795]: I1129 07:40:32.645565 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:40:31Z\\\",\\\"message\\\":\\\"2025-11-29T07:39:45+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7afd0fe0-7b98-4b72-a478-6a6cb3a1287c\\\\n2025-11-29T07:39:45+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7afd0fe0-7b98-4b72-a478-6a6cb3a1287c to /host/opt/cni/bin/\\\\n2025-11-29T07:39:45Z [verbose] multus-daemon started\\\\n2025-11-29T07:39:45Z [verbose] Readiness Indicator file check\\\\n2025-11-29T07:40:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.648798 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.648865 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.648880 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.648904 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.648919 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:32Z","lastTransitionTime":"2025-11-29T07:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.662564 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d047b5f2e979d7eebec46b45ef363e94e99bd7311214872c66fcce39cfd411db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.751470 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.751510 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.751523 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.751539 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.751552 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:32Z","lastTransitionTime":"2025-11-29T07:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.857923 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.857966 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.857977 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.857992 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.858009 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:32Z","lastTransitionTime":"2025-11-29T07:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.960361 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.960408 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.960419 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.960434 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:32 crc kubenswrapper[4795]: I1129 07:40:32.960443 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:32Z","lastTransitionTime":"2025-11-29T07:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.062355 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.062425 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.062435 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.062450 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.062462 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:33Z","lastTransitionTime":"2025-11-29T07:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.164434 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.164471 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.164499 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.164514 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.164522 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:33Z","lastTransitionTime":"2025-11-29T07:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.267300 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.267340 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.267363 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.267382 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.267398 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:33Z","lastTransitionTime":"2025-11-29T07:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.363791 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hbg2m_50b9c3ea-4ff5-434f-803c-2365a0938f9a/kube-multus/0.log" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.363870 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hbg2m" event={"ID":"50b9c3ea-4ff5-434f-803c-2365a0938f9a","Type":"ContainerStarted","Data":"00a5c9d1a3e5371aa4899c05c4c4eecdbbbe3666777446a5fa82e13837a23050"} Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.369144 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.369179 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.369194 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.369214 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.369229 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:33Z","lastTransitionTime":"2025-11-29T07:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.385055 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.397994 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4d032ee6e86c2b61a59ea1477aa6d97a454c754888c37db24bc49d921eacd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.411268 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://148610b20c1dfe43c3e5443cda520fa95ed2823afcaa880dbe879b6d93dc8ce3\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3aac368767d0534a658bc0223f5cda0a951940f2d6ff3f1c89a6ac401a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.428321 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceed8db447ae237cae16d2aa37b1651eade557ea7aaab5bc7b55678ce6e36f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce9ccf35e72987d01428b7573aafacfe90fc973009657b077f40b6fe6ddecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248e5291402287c1a589a168527135bd1df15e293a7614681fc896fcbdeb9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8112226f945ad27e0c5af4d588800322977e17efc3eac9acec2ce591f30f46fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feacad8666527640ec9d6bee04a743e99e0cc9f9e9c6432c1a70bc02e85fa039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25239d7c14973e35144f9ac1b909564749220fc32b1851b68e08434a5aca7def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a0056c2a13ab4a4482c049c3fb5f6fe9938979d96f9837f40fa1dc45719a09b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a0056c2a13ab4a4482c049c3fb5f6fe9938979d96f9837f40fa1dc45719a09b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:40:20Z\\\",\\\"message\\\":\\\"Policy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:20.066807 6580 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:20.066708 6580 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 
07:40:20.067272 6580 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:40:20.067564 6580 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:40:20.067747 6580 factory.go:656] Stopping watch factory\\\\nI1129 07:40:20.067868 6580 services_controller.go:214] Setting up event handlers for endpoint slices for network=default\\\\nI1129 07:40:20.067909 6580 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:40:20.068426 6580 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:40:20.068484 6580 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:40:20.068569 6580 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:40:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-km2g9_openshift-ovn-kubernetes(3d3ff2b2-cbaa-4309-805a-2b044f867d3a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63ac8cc3766762113c8fdfdc459f77570fc533bf636011f1e64a5c0a55b3ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb
5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-km2g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.447341 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959
e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.459205 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c05a965f-51fb-403d-bbac-b34d4d6b2bbd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98152568d086a30f06fcd1535110482b356c587d906973de9baedd65fc85b6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34ebb0f2c455d2133d2fcf00d19e74d539e80523f2ed0b2ccc1f8eabcf2cb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9d7937efcaeb684eb229e396c7896a805f1bf205bda1cd1b8113aeda2bea5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcaeb06b249f6fa81deaf9f997ca41c9dd7f3568d458072208901c04543e272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3bcaeb06b249f6fa81deaf9f997ca41c9dd7f3568d458072208901c04543e272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.469465 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bvmzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9be66670-47c2-4d05-bf3d-59ae6f4ff53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bvmzq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.470676 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.470715 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.470724 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.470738 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.470747 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:33Z","lastTransitionTime":"2025-11-29T07:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.482578 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00a5c9d1a3e5371aa4899c05c4c4eecdbbbe3666777446a5fa82e13837a23050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:40:31Z\\\",\\\"message\\\":\\\"2025-11-29T07:39:45+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7afd0fe0-7b98-4b72-a478-6a6cb3a1287c\\\\n2025-11-29T07:39:45+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7afd0fe0-7b98-4b72-a478-6a6cb3a1287c to /host/opt/cni/bin/\\\\n2025-11-29T07:39:45Z [verbose] multus-daemon started\\\\n2025-11-29T07:39:45Z [verbose] Readiness Indicator file check\\\\n2025-11-29T07:40:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.493328 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d047b5f2e979d7eebec46b45ef363e94e99bd7311214872c66fcce39cfd411db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.506386 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s9x86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcb60df4-ad48-4830-b8c7-c63621b96707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6d9cdb665b3bed924577ab173dcf48968a0b7217ca81b1f8708d733d27d3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s9x86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.516801 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6472ebbc-939b-4dd0-8b03-110cb9811484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad977e9d70f50688539c9d0c9cf08789584d08f77792d3dde1a619e9d958cff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2497113482ab716b22a90b7872f2837bccd4
84b765e979c2c67a394f82de688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qcmb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.528201 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"262bf41d-eaf0-42b7-946d-3223857ca705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbfc45764b29055dcafdee2e3b77893c35b23cfa3eaa9125f01850baef2fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://508f7cb88a5b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9f1344bd7c3a1bc53870f7e5f5f2ca4993df112a24aa474655cc07670e22c48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06202234f21aa27a3dc67978cc9f82738d7526100bce3fd640be08131413f69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.539230 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.549353 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.561073 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b771b13a7bbce9270d919a830c700a1a04bd1d82141a1487e7abb6b1def634cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:40:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79e6
f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.572728 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.572761 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.572770 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.572783 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.572792 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:33Z","lastTransitionTime":"2025-11-29T07:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.573460 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.586617 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.676419 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.676473 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.676488 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.676509 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.676523 4795 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:33Z","lastTransitionTime":"2025-11-29T07:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.779374 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.779430 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.779441 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.779459 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.779471 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:33Z","lastTransitionTime":"2025-11-29T07:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.881751 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.881816 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.881835 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.881859 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.881877 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:33Z","lastTransitionTime":"2025-11-29T07:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.984367 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.984422 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.984458 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.984480 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:33 crc kubenswrapper[4795]: I1129 07:40:33.984494 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:33Z","lastTransitionTime":"2025-11-29T07:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.088094 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.088151 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.088167 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.088185 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.088199 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:34Z","lastTransitionTime":"2025-11-29T07:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.191114 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.191181 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.191198 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.191220 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.191236 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:34Z","lastTransitionTime":"2025-11-29T07:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.275888 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.276033 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:40:34 crc kubenswrapper[4795]: E1129 07:40:34.276156 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.276376 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:40:34 crc kubenswrapper[4795]: E1129 07:40:34.276487 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:40:34 crc kubenswrapper[4795]: E1129 07:40:34.276661 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.276909 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:40:34 crc kubenswrapper[4795]: E1129 07:40:34.277093 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.294004 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.294051 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.294073 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.294101 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.294124 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:34Z","lastTransitionTime":"2025-11-29T07:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.298262 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6472ebbc-939b-4dd0-8b03-110cb9811484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad977e9d70f50688539c9d0c9cf08789584d08f77792d3dde1a619e9d958cff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2497113482ab716b22a90b7872f2837bccd484b765e979c2c67a394f82de688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qcmb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.320432 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"262bf41d-eaf0-42b7-946d-3223857ca705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbfc45764b29055dcafdee2e3b77893c35b23cfa3eaa9125f01850baef2fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://508f7cb88a5b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9f1344bd7c3a1bc53870f7e5f5f2ca4993df112a24aa474655cc07670e22c48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06202234f21aa27a3dc67978cc9f82738d7526100bce3fd640be08131413f69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.343204 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.365194 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.388430 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b771b13a7bbce9270d919a830c700a1a04bd1d82141a1487e7abb6b1def634cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:40:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79e6
f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.396471 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.396558 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.396584 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.396698 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.396725 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:34Z","lastTransitionTime":"2025-11-29T07:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.401804 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s9x86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcb60df4-ad48-4830-b8c7-c63621b96707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6d9cdb665b3bed924577ab173dcf48968a0b7217ca81b1f8708d733d27d3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s9x86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.415940 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.431825 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.442346 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4d032ee6e86c2b61a59ea1477aa6d97a454c754888c37db24bc49d921eacd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:40:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.459898 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://148610b20c1dfe43c3e5443cda520fa95ed2823afcaa880dbe879b6d93dc8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3aac368767d0534a658bc0223f5cda0a951940f2d6ff3f1c89a6ac401a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.481803 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceed8db447ae237cae16d2aa37b1651eade557ea7aaab5bc7b55678ce6e36f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce9ccf35e72987d01428b7573aafacfe90fc973009657b077f40b6fe6ddecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248e5291402287c1a589a168527135bd1df15e293a7614681fc896fcbdeb9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8112226f945ad27e0c5af4d588800322977e17efc3eac9acec2ce591f30f46fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feacad8666527640ec9d6bee04a743e99e0cc9f9e9c6432c1a70bc02e85fa039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25239d7c14973e35144f9ac1b909564749220fc32b1851b68e08434a5aca7def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a0056c2a13ab4a4482c049c3fb5f6fe9938979d96f9837f40fa1dc45719a09b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a0056c2a13ab4a4482c049c3fb5f6fe9938979d96f9837f40fa1dc45719a09b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:40:20Z\\\",\\\"message\\\":\\\"Policy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:20.066807 6580 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:20.066708 6580 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 
07:40:20.067272 6580 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:40:20.067564 6580 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:40:20.067747 6580 factory.go:656] Stopping watch factory\\\\nI1129 07:40:20.067868 6580 services_controller.go:214] Setting up event handlers for endpoint slices for network=default\\\\nI1129 07:40:20.067909 6580 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:40:20.068426 6580 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:40:20.068484 6580 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:40:20.068569 6580 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:40:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-km2g9_openshift-ovn-kubernetes(3d3ff2b2-cbaa-4309-805a-2b044f867d3a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63ac8cc3766762113c8fdfdc459f77570fc533bf636011f1e64a5c0a55b3ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb
5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-km2g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.499902 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.499938 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.499952 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.499971 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.499987 4795 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:34Z","lastTransitionTime":"2025-11-29T07:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.504700 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6
319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.519929 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c05a965f-51fb-403d-bbac-b34d4d6b2bbd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98152568d086a30f06fcd1535110482b356c587d906973de9baedd65fc85b6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34ebb0f2c455d2133d2fcf00d19e74d539e80523f2ed0b2ccc1f8eabcf2cb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9d7937efcaeb684eb229e396c7896a805f1bf205bda1cd1b8113aeda2bea5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcaeb06b249f6fa81deaf9f997ca41c9dd7f3568d458072208901c04543e272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3bcaeb06b249f6fa81deaf9f997ca41c9dd7f3568d458072208901c04543e272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.534333 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.544527 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bvmzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9be66670-47c2-4d05-bf3d-59ae6f4ff53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bvmzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:34 crc 
kubenswrapper[4795]: I1129 07:40:34.565114 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00a5c9d1a3e5371aa4899c05c4c4eecdbbbe3666777446a5fa82e13837a23050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:40:31Z\\\",\\\"message\\\":\\\"2025-11-29T07:39:45+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7afd0fe0-7b98-4b72-a478-6a6cb3a1287c\\\\n2025-11-29T07:39:45+00:00 [cnibincopy] Successfully moved files in 
/host/opt/cni/bin/upgrade_7afd0fe0-7b98-4b72-a478-6a6cb3a1287c to /host/opt/cni/bin/\\\\n2025-11-29T07:39:45Z [verbose] multus-daemon started\\\\n2025-11-29T07:39:45Z [verbose] Readiness Indicator file check\\\\n2025-11-29T07:40:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\
\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.576689 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d047b5f2e979d7eebec46b45ef363e94e99bd7311214872c66fcce39cfd411db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.601873 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.601918 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.601929 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.601946 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.601958 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:34Z","lastTransitionTime":"2025-11-29T07:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.704700 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.704763 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.704781 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.704808 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.704825 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:34Z","lastTransitionTime":"2025-11-29T07:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.756284 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.756336 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.756353 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.756373 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.756433 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:34Z","lastTransitionTime":"2025-11-29T07:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:34 crc kubenswrapper[4795]: E1129 07:40:34.770365 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8cd0c036-22eb-405e-b57e-ec0c4424780e\\\",\\\"systemUUID\\\":\\\"bd085386-a70e-485f-9f18-00b3aef4bcca\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.774479 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.774532 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.774545 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.774559 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.774568 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:34Z","lastTransitionTime":"2025-11-29T07:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:34 crc kubenswrapper[4795]: E1129 07:40:34.787903 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8cd0c036-22eb-405e-b57e-ec0c4424780e\\\",\\\"systemUUID\\\":\\\"bd085386-a70e-485f-9f18-00b3aef4bcca\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.791391 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.791459 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.791475 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.791488 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.791497 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:34Z","lastTransitionTime":"2025-11-29T07:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:34 crc kubenswrapper[4795]: E1129 07:40:34.804361 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8cd0c036-22eb-405e-b57e-ec0c4424780e\\\",\\\"systemUUID\\\":\\\"bd085386-a70e-485f-9f18-00b3aef4bcca\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.807704 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.807750 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.807764 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.807781 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.807792 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:34Z","lastTransitionTime":"2025-11-29T07:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:34 crc kubenswrapper[4795]: E1129 07:40:34.821150 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8cd0c036-22eb-405e-b57e-ec0c4424780e\\\",\\\"systemUUID\\\":\\\"bd085386-a70e-485f-9f18-00b3aef4bcca\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.824198 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.824245 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.824257 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.824294 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.824303 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:34Z","lastTransitionTime":"2025-11-29T07:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:34 crc kubenswrapper[4795]: E1129 07:40:34.834833 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8cd0c036-22eb-405e-b57e-ec0c4424780e\\\",\\\"systemUUID\\\":\\\"bd085386-a70e-485f-9f18-00b3aef4bcca\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:34 crc kubenswrapper[4795]: E1129 07:40:34.835101 4795 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.836625 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.836657 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.836668 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.836685 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.836698 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:34Z","lastTransitionTime":"2025-11-29T07:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.938549 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.938633 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.938646 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.938669 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:34 crc kubenswrapper[4795]: I1129 07:40:34.938682 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:34Z","lastTransitionTime":"2025-11-29T07:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.041151 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.041191 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.041205 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.041220 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.041231 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:35Z","lastTransitionTime":"2025-11-29T07:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.143500 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.143542 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.143550 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.143563 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.143573 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:35Z","lastTransitionTime":"2025-11-29T07:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.245331 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.245389 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.245404 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.245424 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.245438 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:35Z","lastTransitionTime":"2025-11-29T07:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.346962 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.346998 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.347006 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.347018 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.347027 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:35Z","lastTransitionTime":"2025-11-29T07:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.449744 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.449812 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.449876 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.449901 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.449920 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:35Z","lastTransitionTime":"2025-11-29T07:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.552991 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.553037 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.553047 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.553071 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.553080 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:35Z","lastTransitionTime":"2025-11-29T07:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.655201 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.655235 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.655245 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.655259 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.655270 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:35Z","lastTransitionTime":"2025-11-29T07:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.757688 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.757737 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.757751 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.757770 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.757782 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:35Z","lastTransitionTime":"2025-11-29T07:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.860291 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.860325 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.860333 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.860344 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.860355 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:35Z","lastTransitionTime":"2025-11-29T07:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.962847 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.962895 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.962903 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.962917 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:35 crc kubenswrapper[4795]: I1129 07:40:35.962929 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:35Z","lastTransitionTime":"2025-11-29T07:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.069207 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.069266 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.069282 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.069303 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.069321 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:36Z","lastTransitionTime":"2025-11-29T07:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.171945 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.171983 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.171992 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.172006 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.172018 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:36Z","lastTransitionTime":"2025-11-29T07:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.274704 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.274717 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:40:36 crc kubenswrapper[4795]: E1129 07:40:36.274810 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.274884 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.274973 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:40:36 crc kubenswrapper[4795]: E1129 07:40:36.275082 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:40:36 crc kubenswrapper[4795]: E1129 07:40:36.275225 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.275250 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.275283 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.275296 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.275319 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:36 crc kubenswrapper[4795]: E1129 07:40:36.275333 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.275332 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:36Z","lastTransitionTime":"2025-11-29T07:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.275962 4795 scope.go:117] "RemoveContainer" containerID="4a0056c2a13ab4a4482c049c3fb5f6fe9938979d96f9837f40fa1dc45719a09b" Nov 29 07:40:36 crc kubenswrapper[4795]: E1129 07:40:36.276240 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-km2g9_openshift-ovn-kubernetes(3d3ff2b2-cbaa-4309-805a-2b044f867d3a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.377942 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.377989 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.378001 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.378016 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.378030 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:36Z","lastTransitionTime":"2025-11-29T07:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.481405 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.481444 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.481456 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.481473 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.481491 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:36Z","lastTransitionTime":"2025-11-29T07:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.585218 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.585264 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.585276 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.585295 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.585307 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:36Z","lastTransitionTime":"2025-11-29T07:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.688284 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.688358 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.688369 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.688411 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.688423 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:36Z","lastTransitionTime":"2025-11-29T07:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.791775 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.791822 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.791833 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.791848 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.791861 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:36Z","lastTransitionTime":"2025-11-29T07:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.894870 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.894923 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.894940 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.894961 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.894973 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:36Z","lastTransitionTime":"2025-11-29T07:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.998289 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.998359 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.998370 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.998390 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:36 crc kubenswrapper[4795]: I1129 07:40:36.998402 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:36Z","lastTransitionTime":"2025-11-29T07:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.100978 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.101325 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.101500 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.101752 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.104845 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:37Z","lastTransitionTime":"2025-11-29T07:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.207949 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.208234 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.208343 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.208437 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.208522 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:37Z","lastTransitionTime":"2025-11-29T07:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.310489 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.310531 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.310540 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.310555 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.310565 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:37Z","lastTransitionTime":"2025-11-29T07:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.413353 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.413425 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.413448 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.413478 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.413499 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:37Z","lastTransitionTime":"2025-11-29T07:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.516068 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.516112 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.516127 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.516148 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.516164 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:37Z","lastTransitionTime":"2025-11-29T07:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.620021 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.620081 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.620103 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.620127 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.620144 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:37Z","lastTransitionTime":"2025-11-29T07:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.722584 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.722640 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.722655 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.722674 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.722685 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:37Z","lastTransitionTime":"2025-11-29T07:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.825628 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.825668 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.825678 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.825692 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.825703 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:37Z","lastTransitionTime":"2025-11-29T07:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.927741 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.927834 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.927849 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.927888 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:37 crc kubenswrapper[4795]: I1129 07:40:37.927899 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:37Z","lastTransitionTime":"2025-11-29T07:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.030304 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.030342 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.030353 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.030371 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.030382 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:38Z","lastTransitionTime":"2025-11-29T07:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.133110 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.133399 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.133469 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.133540 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.133663 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:38Z","lastTransitionTime":"2025-11-29T07:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.236332 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.236380 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.236390 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.236404 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.236413 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:38Z","lastTransitionTime":"2025-11-29T07:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.275650 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.275735 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.275805 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:40:38 crc kubenswrapper[4795]: E1129 07:40:38.275828 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.275843 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:40:38 crc kubenswrapper[4795]: E1129 07:40:38.276036 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:40:38 crc kubenswrapper[4795]: E1129 07:40:38.276197 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:40:38 crc kubenswrapper[4795]: E1129 07:40:38.276578 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.339703 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.339767 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.339779 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.339793 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.339803 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:38Z","lastTransitionTime":"2025-11-29T07:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.442920 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.442991 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.443003 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.443021 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.443035 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:38Z","lastTransitionTime":"2025-11-29T07:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.544912 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.544996 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.545061 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.545093 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.545108 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:38Z","lastTransitionTime":"2025-11-29T07:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.649145 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.649238 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.649289 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.649332 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.649363 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:38Z","lastTransitionTime":"2025-11-29T07:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.754420 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.754473 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.754482 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.754497 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.754505 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:38Z","lastTransitionTime":"2025-11-29T07:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.857443 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.857488 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.857497 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.857515 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.857526 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:38Z","lastTransitionTime":"2025-11-29T07:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.959896 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.959934 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.959942 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.959954 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:38 crc kubenswrapper[4795]: I1129 07:40:38.959964 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:38Z","lastTransitionTime":"2025-11-29T07:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.061880 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.061935 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.061948 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.061966 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.062019 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:39Z","lastTransitionTime":"2025-11-29T07:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.165335 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.165432 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.165456 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.165480 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.165532 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:39Z","lastTransitionTime":"2025-11-29T07:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.268911 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.268979 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.268996 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.269023 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.269039 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:39Z","lastTransitionTime":"2025-11-29T07:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.372271 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.372330 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.372348 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.372371 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.372391 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:39Z","lastTransitionTime":"2025-11-29T07:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.475604 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.475685 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.475696 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.475712 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.475724 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:39Z","lastTransitionTime":"2025-11-29T07:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.577873 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.577902 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.577910 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.577922 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.577931 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:39Z","lastTransitionTime":"2025-11-29T07:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.681572 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.681636 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.681647 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.681663 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.681674 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:39Z","lastTransitionTime":"2025-11-29T07:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.784938 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.785021 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.785038 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.785063 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.785575 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:39Z","lastTransitionTime":"2025-11-29T07:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.889203 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.889330 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.889349 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.889369 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.889380 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:39Z","lastTransitionTime":"2025-11-29T07:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.991894 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.991951 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.991962 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.991980 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:39 crc kubenswrapper[4795]: I1129 07:40:39.991995 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:39Z","lastTransitionTime":"2025-11-29T07:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.095381 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.095449 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.095465 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.095491 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.095508 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:40Z","lastTransitionTime":"2025-11-29T07:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.198742 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.198808 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.198819 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.198853 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.198866 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:40Z","lastTransitionTime":"2025-11-29T07:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.275642 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.275689 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.275642 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:40:40 crc kubenswrapper[4795]: E1129 07:40:40.275768 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.275656 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:40:40 crc kubenswrapper[4795]: E1129 07:40:40.276264 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:40:40 crc kubenswrapper[4795]: E1129 07:40:40.276386 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:40:40 crc kubenswrapper[4795]: E1129 07:40:40.276488 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.301795 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.301839 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.301850 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.301869 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.301882 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:40Z","lastTransitionTime":"2025-11-29T07:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.403692 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.403737 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.403748 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.403763 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.403773 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:40Z","lastTransitionTime":"2025-11-29T07:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.506539 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.506583 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.506653 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.506668 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.506678 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:40Z","lastTransitionTime":"2025-11-29T07:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.614130 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.614176 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.614185 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.614202 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.614211 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:40Z","lastTransitionTime":"2025-11-29T07:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.716702 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.716763 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.716777 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.716796 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.716807 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:40Z","lastTransitionTime":"2025-11-29T07:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.819370 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.819418 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.819433 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.819454 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.819468 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:40Z","lastTransitionTime":"2025-11-29T07:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.921454 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.921499 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.921507 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.921521 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:40 crc kubenswrapper[4795]: I1129 07:40:40.921530 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:40Z","lastTransitionTime":"2025-11-29T07:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.038753 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.038799 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.038808 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.038827 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.038839 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:41Z","lastTransitionTime":"2025-11-29T07:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.141061 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.141101 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.141113 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.141131 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.141143 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:41Z","lastTransitionTime":"2025-11-29T07:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.243538 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.243642 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.243662 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.243690 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.243728 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:41Z","lastTransitionTime":"2025-11-29T07:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.346918 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.346960 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.346970 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.346984 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.346993 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:41Z","lastTransitionTime":"2025-11-29T07:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.449005 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.449123 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.449139 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.449160 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.449174 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:41Z","lastTransitionTime":"2025-11-29T07:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.551917 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.552294 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.552422 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.552890 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.553071 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:41Z","lastTransitionTime":"2025-11-29T07:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.656365 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.656421 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.656433 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.656452 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.656465 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:41Z","lastTransitionTime":"2025-11-29T07:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.758839 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.758913 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.758934 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.758963 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.758984 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:41Z","lastTransitionTime":"2025-11-29T07:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.861462 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.861531 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.861554 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.861583 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.861639 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:41Z","lastTransitionTime":"2025-11-29T07:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.964631 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.964679 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.964688 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.964703 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:41 crc kubenswrapper[4795]: I1129 07:40:41.964742 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:41Z","lastTransitionTime":"2025-11-29T07:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.067611 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.067646 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.067657 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.067672 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.067683 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:42Z","lastTransitionTime":"2025-11-29T07:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.170591 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.170677 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.170693 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.170718 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.170735 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:42Z","lastTransitionTime":"2025-11-29T07:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.272926 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.272972 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.272984 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.273002 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.273016 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:42Z","lastTransitionTime":"2025-11-29T07:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.275187 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.275236 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.275284 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:40:42 crc kubenswrapper[4795]: E1129 07:40:42.275284 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.275315 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:40:42 crc kubenswrapper[4795]: E1129 07:40:42.275403 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:40:42 crc kubenswrapper[4795]: E1129 07:40:42.275583 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:40:42 crc kubenswrapper[4795]: E1129 07:40:42.275758 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.374906 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.374957 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.374968 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.374983 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.374994 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:42Z","lastTransitionTime":"2025-11-29T07:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.478479 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.478622 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.478643 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.478670 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.478687 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:42Z","lastTransitionTime":"2025-11-29T07:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.581064 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.581128 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.581140 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.581159 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.581171 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:42Z","lastTransitionTime":"2025-11-29T07:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.593850 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:40:42 crc kubenswrapper[4795]: E1129 07:40:42.594004 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-29 07:41:46.593976057 +0000 UTC m=+152.569551847 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.594064 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.594116 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.594168 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.594195 4795 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:40:42 crc kubenswrapper[4795]: E1129 07:40:42.594274 4795 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:40:42 crc kubenswrapper[4795]: E1129 07:40:42.594289 4795 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:40:42 crc kubenswrapper[4795]: E1129 07:40:42.594317 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:41:46.594309356 +0000 UTC m=+152.569885146 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:40:42 crc kubenswrapper[4795]: E1129 07:40:42.594318 4795 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:40:42 crc kubenswrapper[4795]: E1129 07:40:42.594338 4795 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:40:42 crc kubenswrapper[4795]: E1129 07:40:42.594343 4795 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:40:42 crc kubenswrapper[4795]: E1129 07:40:42.594294 4795 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:40:42 crc kubenswrapper[4795]: E1129 07:40:42.594387 4795 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:40:42 crc kubenswrapper[4795]: E1129 07:40:42.594398 4795 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:40:42 crc kubenswrapper[4795]: E1129 07:40:42.594362 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-29 07:41:46.594357087 +0000 UTC m=+152.569932877 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:40:42 crc kubenswrapper[4795]: E1129 07:40:42.594438 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-29 07:41:46.594425529 +0000 UTC m=+152.570001319 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:40:42 crc kubenswrapper[4795]: E1129 07:40:42.594456 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-11-29 07:41:46.59444866 +0000 UTC m=+152.570024450 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.684004 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.684046 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.684057 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.684073 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.684084 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:42Z","lastTransitionTime":"2025-11-29T07:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.786213 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.786259 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.786269 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.786288 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.786300 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:42Z","lastTransitionTime":"2025-11-29T07:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.888927 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.888977 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.888998 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.889013 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.889023 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:42Z","lastTransitionTime":"2025-11-29T07:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.992354 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.992420 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.992433 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.992453 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:42 crc kubenswrapper[4795]: I1129 07:40:42.992465 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:42Z","lastTransitionTime":"2025-11-29T07:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.094870 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.094911 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.094919 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.094932 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.094940 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:43Z","lastTransitionTime":"2025-11-29T07:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.197246 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.197313 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.197331 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.197358 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.197381 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:43Z","lastTransitionTime":"2025-11-29T07:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.289654 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.301103 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.301185 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.301197 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.301217 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.301236 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:43Z","lastTransitionTime":"2025-11-29T07:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.403722 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.403783 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.403794 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.403812 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.403827 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:43Z","lastTransitionTime":"2025-11-29T07:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.506707 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.506785 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.506799 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.506820 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.506835 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:43Z","lastTransitionTime":"2025-11-29T07:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.610450 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.610517 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.610539 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.610578 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.610651 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:43Z","lastTransitionTime":"2025-11-29T07:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.713704 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.713751 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.713766 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.713782 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.713794 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:43Z","lastTransitionTime":"2025-11-29T07:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.816445 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.816522 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.816535 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.816549 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.816559 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:43Z","lastTransitionTime":"2025-11-29T07:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.919274 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.919329 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.919344 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.919364 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:43 crc kubenswrapper[4795]: I1129 07:40:43.919379 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:43Z","lastTransitionTime":"2025-11-29T07:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.021568 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.021655 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.021671 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.021693 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.021708 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:44Z","lastTransitionTime":"2025-11-29T07:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.125348 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.125387 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.125397 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.125413 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.125424 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:44Z","lastTransitionTime":"2025-11-29T07:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.227667 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.227710 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.227721 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.227736 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.227747 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:44Z","lastTransitionTime":"2025-11-29T07:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.275248 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.275246 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:40:44 crc kubenswrapper[4795]: E1129 07:40:44.275368 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.275411 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:40:44 crc kubenswrapper[4795]: E1129 07:40:44.275451 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.275525 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:40:44 crc kubenswrapper[4795]: E1129 07:40:44.275579 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:40:44 crc kubenswrapper[4795]: E1129 07:40:44.276864 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.290258 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.302815 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.314682 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4d032ee6e86c2b61a59ea1477aa6d97a454c754888c37db24bc49d921eacd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:40:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.327153 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://148610b20c1dfe43c3e5443cda520fa95ed2823afcaa880dbe879b6d93dc8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3aac368767d0534a658bc0223f5cda0a951940f2d6ff3f1c89a6ac401a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.330153 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 
07:40:44.330187 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.330196 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.330209 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.330218 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:44Z","lastTransitionTime":"2025-11-29T07:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.346354 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceed8db447ae237cae16d2aa37b1651eade557ea7aaab5bc7b55678ce6e36f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce9ccf35e72987d01428b7573aafacfe90fc973009657b077f40b6fe6ddecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248e5291402287c1a589a168527135bd1df15e293a7614681fc896fcbdeb9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8112226f945ad27e0c5af4d588800322977e17efc3eac9acec2ce591f30f46fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feacad8666527640ec9d6bee04a743e99e0cc9f9e9c6432c1a70bc02e85fa039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25239d7c14973e35144f9ac1b909564749220fc32b1851b68e08434a5aca7def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a0056c2a13ab4a4482c049c3fb5f6fe9938979d96f9837f40fa1dc45719a09b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a0056c2a13ab4a4482c049c3fb5f6fe9938979d96f9837f40fa1dc45719a09b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:40:20Z\\\",\\\"message\\\":\\\"Policy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:20.066807 6580 
reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:20.066708 6580 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:20.067272 6580 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:40:20.067564 6580 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:40:20.067747 6580 factory.go:656] Stopping watch factory\\\\nI1129 07:40:20.067868 6580 services_controller.go:214] Setting up event handlers for endpoint slices for network=default\\\\nI1129 07:40:20.067909 6580 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:40:20.068426 6580 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:40:20.068484 6580 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:40:20.068569 6580 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:40:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-km2g9_openshift-ovn-kubernetes(3d3ff2b2-cbaa-4309-805a-2b044f867d3a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63ac8cc3766762113c8fdfdc459f77570fc533bf636011f1e64a5c0a55b3ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb
5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-km2g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.359223 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"027ef0a7-d55c-4187-aff0-931ba50db57f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62dc6b7572b93a8e91f4dc1404de860af85b5e37d2c1a802e4a481b01bfd892c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c9db4ec27597862bfc36c6a62900a3aa639ebb33d7c7212d449cc85a321d60d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9db4ec27597862bfc36c6a62900a3aa639ebb33d7c7212d449cc85a321d60d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.374857 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959
e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.387969 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c05a965f-51fb-403d-bbac-b34d4d6b2bbd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98152568d086a30f06fcd1535110482b356c587d906973de9baedd65fc85b6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34ebb0f2c455d2133d2fcf00d19e74d539e80523f2ed0b2ccc1f8eabcf2cb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9d7937efcaeb684eb229e396c7896a805f1bf205bda1cd1b8113aeda2bea5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcaeb06b249f6fa81deaf9f997ca41c9dd7f3568d458072208901c04543e272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3bcaeb06b249f6fa81deaf9f997ca41c9dd7f3568d458072208901c04543e272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.400379 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.411944 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bvmzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9be66670-47c2-4d05-bf3d-59ae6f4ff53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bvmzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:44 crc 
kubenswrapper[4795]: I1129 07:40:44.426115 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00a5c9d1a3e5371aa4899c05c4c4eecdbbbe3666777446a5fa82e13837a23050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:40:31Z\\\",\\\"message\\\":\\\"2025-11-29T07:39:45+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7afd0fe0-7b98-4b72-a478-6a6cb3a1287c\\\\n2025-11-29T07:39:45+00:00 [cnibincopy] Successfully moved files in 
/host/opt/cni/bin/upgrade_7afd0fe0-7b98-4b72-a478-6a6cb3a1287c to /host/opt/cni/bin/\\\\n2025-11-29T07:39:45Z [verbose] multus-daemon started\\\\n2025-11-29T07:39:45Z [verbose] Readiness Indicator file check\\\\n2025-11-29T07:40:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\
\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.433729 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.433779 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.433788 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.433806 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.433820 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:44Z","lastTransitionTime":"2025-11-29T07:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.438191 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d047b5f2e979d7eebec46b45ef363e94e99bd7311214872c66fcce39cfd411db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"
},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.451248 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6472ebbc-939b-4dd0-8b03-110cb9811484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad977e9d70f50688539c9d0c9cf08789584d08f77792d3dde1a619e9d958cff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2497113482ab716b22a90b7872f2837bccd4
84b765e979c2c67a394f82de688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qcmb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.462022 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"262bf41d-eaf0-42b7-946d-3223857ca705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbfc45764b29055dcafdee2e3b77893c35b23cfa3eaa9125f01850baef2fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://508f7cb88a5b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9f1344bd7c3a1bc53870f7e5f5f2ca4993df112a24aa474655cc07670e22c48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06202234f21aa27a3dc67978cc9f82738d7526100bce3fd640be08131413f69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.473435 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.486146 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.504584 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b771b13a7bbce9270d919a830c700a1a04bd1d82141a1487e7abb6b1def634cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:40:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79e6
f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.515753 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s9x86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcb60df4-ad48-4830-b8c7-c63621b96707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6d9cdb665b3bed924577ab173dcf48968a0b7217ca81b1f8708d733d27d3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s9x86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.536500 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.536538 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.536550 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.536567 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.536580 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:44Z","lastTransitionTime":"2025-11-29T07:40:44Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.639794 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.639846 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.639859 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.639878 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.639892 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:44Z","lastTransitionTime":"2025-11-29T07:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.743805 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.743933 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.744411 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.744513 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.744819 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:44Z","lastTransitionTime":"2025-11-29T07:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.848613 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.849053 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.849075 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.851194 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.851235 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:44Z","lastTransitionTime":"2025-11-29T07:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.924336 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.924383 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.924393 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.924411 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.924420 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:44Z","lastTransitionTime":"2025-11-29T07:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:44 crc kubenswrapper[4795]: E1129 07:40:44.942260 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8cd0c036-22eb-405e-b57e-ec0c4424780e\\\",\\\"systemUUID\\\":\\\"bd085386-a70e-485f-9f18-00b3aef4bcca\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:44Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.947801 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.947862 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.947872 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.947887 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:44 crc kubenswrapper[4795]: I1129 07:40:44.947897 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:44Z","lastTransitionTime":"2025-11-29T07:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8cd0c036-22eb-405e-b57e-ec0c4424780e\\\",\\\"systemUUID\\\":\\\"bd085386-a70e-485f-9f18-00b3aef4bcca\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.014683 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.014716 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.014725 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.014739 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.014749 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:45Z","lastTransitionTime":"2025-11-29T07:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:45 crc kubenswrapper[4795]: E1129 07:40:45.032030 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8cd0c036-22eb-405e-b57e-ec0c4424780e\\\",\\\"systemUUID\\\":\\\"bd085386-a70e-485f-9f18-00b3aef4bcca\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:45 crc kubenswrapper[4795]: E1129 07:40:45.032181 4795 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.034319 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.034364 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.034385 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.034413 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.034437 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:45Z","lastTransitionTime":"2025-11-29T07:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.137149 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.137199 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.137210 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.137225 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.137237 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:45Z","lastTransitionTime":"2025-11-29T07:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.239956 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.240029 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.240047 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.240084 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.240121 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:45Z","lastTransitionTime":"2025-11-29T07:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.343734 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.343809 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.343837 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.343869 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.343895 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:45Z","lastTransitionTime":"2025-11-29T07:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.446234 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.446313 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.446323 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.446337 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.446348 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:45Z","lastTransitionTime":"2025-11-29T07:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.548290 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.548395 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.548415 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.548439 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.548456 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:45Z","lastTransitionTime":"2025-11-29T07:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.650848 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.650896 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.650905 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.650919 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.650928 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:45Z","lastTransitionTime":"2025-11-29T07:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.754670 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.754718 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.754727 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.754746 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.754760 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:45Z","lastTransitionTime":"2025-11-29T07:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.857937 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.857990 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.857999 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.858013 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.858022 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:45Z","lastTransitionTime":"2025-11-29T07:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.961092 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.961231 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.961259 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.961291 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:45 crc kubenswrapper[4795]: I1129 07:40:45.961315 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:45Z","lastTransitionTime":"2025-11-29T07:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.065491 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.065569 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.065652 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.065682 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.065700 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:46Z","lastTransitionTime":"2025-11-29T07:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.167648 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.167709 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.167722 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.167741 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.167751 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:46Z","lastTransitionTime":"2025-11-29T07:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.270492 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.270553 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.270571 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.270607 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.270620 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:46Z","lastTransitionTime":"2025-11-29T07:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.274915 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.274973 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.274981 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.274921 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:40:46 crc kubenswrapper[4795]: E1129 07:40:46.275086 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:40:46 crc kubenswrapper[4795]: E1129 07:40:46.275211 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:40:46 crc kubenswrapper[4795]: E1129 07:40:46.275324 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:40:46 crc kubenswrapper[4795]: E1129 07:40:46.275640 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.372805 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.372844 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.372852 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.372869 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.372879 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:46Z","lastTransitionTime":"2025-11-29T07:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.475805 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.475941 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.475952 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.475968 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.475981 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:46Z","lastTransitionTime":"2025-11-29T07:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.578912 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.578965 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.578978 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.578999 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.579015 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:46Z","lastTransitionTime":"2025-11-29T07:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.682393 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.682448 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.682460 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.682480 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.682492 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:46Z","lastTransitionTime":"2025-11-29T07:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.785119 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.785173 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.785183 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.785195 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.785203 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:46Z","lastTransitionTime":"2025-11-29T07:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.887559 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.887625 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.887640 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.887657 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.887668 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:46Z","lastTransitionTime":"2025-11-29T07:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.990389 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.990453 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.990467 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.990484 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:46 crc kubenswrapper[4795]: I1129 07:40:46.990496 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:46Z","lastTransitionTime":"2025-11-29T07:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.093254 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.093295 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.093307 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.093324 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.093336 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:47Z","lastTransitionTime":"2025-11-29T07:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.196381 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.196426 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.196439 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.196454 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.196466 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:47Z","lastTransitionTime":"2025-11-29T07:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.275917 4795 scope.go:117] "RemoveContainer" containerID="4a0056c2a13ab4a4482c049c3fb5f6fe9938979d96f9837f40fa1dc45719a09b" Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.298157 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.298202 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.298214 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.298231 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.298295 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:47Z","lastTransitionTime":"2025-11-29T07:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.401161 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.401187 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.401195 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.401208 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.401217 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:47Z","lastTransitionTime":"2025-11-29T07:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.506339 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.506396 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.506415 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.506442 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.506460 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:47Z","lastTransitionTime":"2025-11-29T07:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.610283 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.610345 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.610363 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.610388 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.610406 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:47Z","lastTransitionTime":"2025-11-29T07:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.712834 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.712878 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.712888 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.712904 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.712916 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:47Z","lastTransitionTime":"2025-11-29T07:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.815780 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.815879 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.815898 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.815923 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.815939 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:47Z","lastTransitionTime":"2025-11-29T07:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.918889 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.918939 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.918950 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.918968 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:47 crc kubenswrapper[4795]: I1129 07:40:47.918980 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:47Z","lastTransitionTime":"2025-11-29T07:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.022469 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.022553 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.022575 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.022649 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.022684 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:48Z","lastTransitionTime":"2025-11-29T07:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.126400 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.126450 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.126464 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.126486 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.126500 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:48Z","lastTransitionTime":"2025-11-29T07:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.246731 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.246796 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.246816 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.246839 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.246851 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:48Z","lastTransitionTime":"2025-11-29T07:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.275218 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.275285 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.275370 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:40:48 crc kubenswrapper[4795]: E1129 07:40:48.275531 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.275552 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:40:48 crc kubenswrapper[4795]: E1129 07:40:48.275644 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:40:48 crc kubenswrapper[4795]: E1129 07:40:48.275680 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:40:48 crc kubenswrapper[4795]: E1129 07:40:48.275744 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.348749 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.348787 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.348797 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.348813 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.348827 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:48Z","lastTransitionTime":"2025-11-29T07:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.417353 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-km2g9_3d3ff2b2-cbaa-4309-805a-2b044f867d3a/ovnkube-controller/2.log" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.419858 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" event={"ID":"3d3ff2b2-cbaa-4309-805a-2b044f867d3a","Type":"ContainerStarted","Data":"265a8143fe9383f13d2e5d972c3fdd716990d82b30245eeb740f8a0fdb4f089c"} Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.420181 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.432336 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"262bf41d-eaf0-42b7-946d-3223857ca705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbfc45764b29055dcafdee2e3b77893c35b23cfa3eaa9125f01850baef2fd5c\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://508f7cb88a5b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9f1344bd7c3a1bc53870f7e5f5f2ca4993df112a24aa474655cc07670e22c48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert
-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06202234f21aa27a3dc67978cc9f82738d7526100bce3fd640be08131413f69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.442058 4795 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.450654 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.450704 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.450714 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.450729 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.450739 4795 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:48Z","lastTransitionTime":"2025-11-29T07:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.453925 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.465512 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b771b13a7bbce9270d919a830c700a1a04bd1d82141a1487e7abb6b1def634cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:40:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79e6
f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.474325 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s9x86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcb60df4-ad48-4830-b8c7-c63621b96707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6d9cdb665b3bed924577ab173dcf48968a0b7217ca81b1f8708d733d27d3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s9x86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.482816 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6472ebbc-939b-4dd0-8b03-110cb9811484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad977e9d70f50688539c9d0c9cf08789584d08f77792d3dde1a619e9d958cff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2497113482ab716b22a90b7872f2837bccd4
84b765e979c2c67a394f82de688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qcmb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.493988 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.506719 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.519841 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"027ef0a7-d55c-4187-aff0-931ba50db57f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62dc6b7572b93a8e91f4dc1404de860af85b5e37d2c1a802e4a481b01bfd892c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c9db4ec27597862bfc36c6a62900a3aa639ebb33d7c7212d449cc85a321d60d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9db4ec27597862bfc36c6a62900a3aa639ebb33d7c7212d449cc85a321d60d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.533858 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959
e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.545585 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c05a965f-51fb-403d-bbac-b34d4d6b2bbd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98152568d086a30f06fcd1535110482b356c587d906973de9baedd65fc85b6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34ebb0f2c455d2133d2fcf00d19e74d539e80523f2ed0b2ccc1f8eabcf2cb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9d7937efcaeb684eb229e396c7896a805f1bf205bda1cd1b8113aeda2bea5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcaeb06b249f6fa81deaf9f997ca41c9dd7f3568d458072208901c04543e272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3bcaeb06b249f6fa81deaf9f997ca41c9dd7f3568d458072208901c04543e272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.552288 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.552320 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.552330 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.552345 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.552354 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:48Z","lastTransitionTime":"2025-11-29T07:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.560715 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.571115 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4d032ee6e86c2b61a59ea1477aa6d97a454c754888c37db24bc49d921eacd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.581164 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://148610b20c1dfe43c3e5443cda520fa95ed2823afcaa880dbe879b6d93dc8ce3\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3aac368767d0534a658bc0223f5cda0a951940f2d6ff3f1c89a6ac401a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.609635 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceed8db447ae237cae16d2aa37b1651eade557ea7aaab5bc7b55678ce6e36f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce9ccf35e72987d01428b7573aafacfe90fc973009657b077f40b6fe6ddecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248e5291402287c1a589a168527135bd1df15e293a7614681fc896fcbdeb9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8112226f945ad27e0c5af4d588800322977e17efc3eac9acec2ce591f30f46fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feacad8666527640ec9d6bee04a743e99e0cc9f9e9c6432c1a70bc02e85fa039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25239d7c14973e35144f9ac1b909564749220fc32b1851b68e08434a5aca7def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://265a8143fe9383f13d2e5d972c3fdd716990d82b30245eeb740f8a0fdb4f089c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a0056c2a13ab4a4482c049c3fb5f6fe9938979d96f9837f40fa1dc45719a09b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:40:20Z\\\",\\\"message\\\":\\\"Policy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:20.066807 6580 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:20.066708 6580 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 
07:40:20.067272 6580 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:40:20.067564 6580 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:40:20.067747 6580 factory.go:656] Stopping watch factory\\\\nI1129 07:40:20.067868 6580 services_controller.go:214] Setting up event handlers for endpoint slices for network=default\\\\nI1129 07:40:20.067909 6580 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:40:20.068426 6580 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:40:20.068484 6580 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:40:20.068569 6580 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:40:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63ac8cc3766762113c8fdfdc459f77570fc533bf636011f1e64a5c0a55b3ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-km2g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.630364 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bvmzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9be66670-47c2-4d05-bf3d-59ae6f4ff53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bvmzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:48 crc 
kubenswrapper[4795]: I1129 07:40:48.648963 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00a5c9d1a3e5371aa4899c05c4c4eecdbbbe3666777446a5fa82e13837a23050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:40:31Z\\\",\\\"message\\\":\\\"2025-11-29T07:39:45+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7afd0fe0-7b98-4b72-a478-6a6cb3a1287c\\\\n2025-11-29T07:39:45+00:00 [cnibincopy] Successfully moved files in 
/host/opt/cni/bin/upgrade_7afd0fe0-7b98-4b72-a478-6a6cb3a1287c to /host/opt/cni/bin/\\\\n2025-11-29T07:39:45Z [verbose] multus-daemon started\\\\n2025-11-29T07:39:45Z [verbose] Readiness Indicator file check\\\\n2025-11-29T07:40:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\
\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.653998 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.654043 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.654056 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.654074 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.654086 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:48Z","lastTransitionTime":"2025-11-29T07:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.663820 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d047b5f2e979d7eebec46b45ef363e94e99bd7311214872c66fcce39cfd411db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"
},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:48Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.756527 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.756563 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.756574 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.756601 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.756613 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:48Z","lastTransitionTime":"2025-11-29T07:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.858911 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.858953 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.858965 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.858981 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.858994 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:48Z","lastTransitionTime":"2025-11-29T07:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.961287 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.961322 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.961332 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.961348 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:48 crc kubenswrapper[4795]: I1129 07:40:48.961360 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:48Z","lastTransitionTime":"2025-11-29T07:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.063879 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.064120 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.064130 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.064142 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.064153 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:49Z","lastTransitionTime":"2025-11-29T07:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.166303 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.166341 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.166351 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.166367 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.166378 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:49Z","lastTransitionTime":"2025-11-29T07:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.268151 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.268193 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.268205 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.268221 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.268232 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:49Z","lastTransitionTime":"2025-11-29T07:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.370481 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.370578 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.370609 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.370628 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.370648 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:49Z","lastTransitionTime":"2025-11-29T07:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.425078 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-km2g9_3d3ff2b2-cbaa-4309-805a-2b044f867d3a/ovnkube-controller/3.log" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.425570 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-km2g9_3d3ff2b2-cbaa-4309-805a-2b044f867d3a/ovnkube-controller/2.log" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.428645 4795 generic.go:334] "Generic (PLEG): container finished" podID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerID="265a8143fe9383f13d2e5d972c3fdd716990d82b30245eeb740f8a0fdb4f089c" exitCode=1 Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.428705 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" event={"ID":"3d3ff2b2-cbaa-4309-805a-2b044f867d3a","Type":"ContainerDied","Data":"265a8143fe9383f13d2e5d972c3fdd716990d82b30245eeb740f8a0fdb4f089c"} Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.428744 4795 scope.go:117] "RemoveContainer" containerID="4a0056c2a13ab4a4482c049c3fb5f6fe9938979d96f9837f40fa1dc45719a09b" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.431339 4795 scope.go:117] "RemoveContainer" containerID="265a8143fe9383f13d2e5d972c3fdd716990d82b30245eeb740f8a0fdb4f089c" Nov 29 07:40:49 crc kubenswrapper[4795]: E1129 07:40:49.431578 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-km2g9_openshift-ovn-kubernetes(3d3ff2b2-cbaa-4309-805a-2b044f867d3a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.444715 4795 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.458365 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.473166 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.473199 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:49 crc 
kubenswrapper[4795]: I1129 07:40:49.473209 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.473223 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.473233 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:49Z","lastTransitionTime":"2025-11-29T07:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.473997 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b771b13a7bbce9270d919a830c700a1a04bd1d82141a1487e7abb6b1def634cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:40:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79e6
f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.484785 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s9x86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcb60df4-ad48-4830-b8c7-c63621b96707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6d9cdb665b3bed924577ab173dcf48968a0b7217ca81b1f8708d733d27d3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s9x86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.500962 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6472ebbc-939b-4dd0-8b03-110cb9811484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad977e9d70f50688539c9d0c9cf08789584d08f77792d3dde1a619e9d958cff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2497113482ab716b22a90b7872f2837bccd4
84b765e979c2c67a394f82de688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qcmb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.517037 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"262bf41d-eaf0-42b7-946d-3223857ca705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbfc45764b29055dcafdee2e3b77893c35b23cfa3eaa9125f01850baef2fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://508f7cb88a5b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9f1344bd7c3a1bc53870f7e5f5f2ca4993df112a24aa474655cc07670e22c48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06202234f21aa27a3dc67978cc9f82738d7526100bce3fd640be08131413f69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.535059 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.546859 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.558293 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959
e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.568522 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c05a965f-51fb-403d-bbac-b34d4d6b2bbd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98152568d086a30f06fcd1535110482b356c587d906973de9baedd65fc85b6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34ebb0f2c455d2133d2fcf00d19e74d539e80523f2ed0b2ccc1f8eabcf2cb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9d7937efcaeb684eb229e396c7896a805f1bf205bda1cd1b8113aeda2bea5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcaeb06b249f6fa81deaf9f997ca41c9dd7f3568d458072208901c04543e272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3bcaeb06b249f6fa81deaf9f997ca41c9dd7f3568d458072208901c04543e272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.575400 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.575429 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.575436 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.575449 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.575458 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:49Z","lastTransitionTime":"2025-11-29T07:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.581631 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.591504 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4d032ee6e86c2b61a59ea1477aa6d97a454c754888c37db24bc49d921eacd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.603224 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://148610b20c1dfe43c3e5443cda520fa95ed2823afcaa880dbe879b6d93dc8ce3\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3aac368767d0534a658bc0223f5cda0a951940f2d6ff3f1c89a6ac401a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.620573 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceed8db447ae237cae16d2aa37b1651eade557ea7aaab5bc7b55678ce6e36f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce9ccf35e72987d01428b7573aafacfe90fc973009657b077f40b6fe6ddecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248e5291402287c1a589a168527135bd1df15e293a7614681fc896fcbdeb9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8112226f945ad27e0c5af4d588800322977e17efc3eac9acec2ce591f30f46fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feacad8666527640ec9d6bee04a743e99e0cc9f9e9c6432c1a70bc02e85fa039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25239d7c14973e35144f9ac1b909564749220fc32b1851b68e08434a5aca7def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://265a8143fe9383f13d2e5d972c3fdd716990d82b30245eeb740f8a0fdb4f089c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a0056c2a13ab4a4482c049c3fb5f6fe9938979d96f9837f40fa1dc45719a09b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:40:20Z\\\",\\\"message\\\":\\\"Policy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:20.066807 6580 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:40:20.066708 6580 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 
07:40:20.067272 6580 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:40:20.067564 6580 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:40:20.067747 6580 factory.go:656] Stopping watch factory\\\\nI1129 07:40:20.067868 6580 services_controller.go:214] Setting up event handlers for endpoint slices for network=default\\\\nI1129 07:40:20.067909 6580 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:40:20.068426 6580 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:40:20.068484 6580 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:40:20.068569 6580 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:40:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://265a8143fe9383f13d2e5d972c3fdd716990d82b30245eeb740f8a0fdb4f089c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:40:48Z\\\",\\\"message\\\":\\\"le:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1129 07:40:48.655993 6816 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler-operator/metrics]} name:Service_openshift-kube-scheduler-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} 
selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.233:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1dc899db-4498-4b7a-8437-861940b962e7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1129 07:40:48.655207 6816 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-bvmzq before timer (time: 2025-11-29 07:40:50.085660412 +0000 UTC m=+1.991744270): skip\\\\nI1129 07:40:48.656054 6816 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 1.136331ms)\\\\nI1129 07:40:48.656174 6816 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:40:48.656202 6816 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:40:48.656253 6816 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\"
,\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63ac8cc3766762113c8fdfdc459f77570fc533bf636011f1e64a5c0a55b3ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":
\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-km2g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 
07:40:49.630545 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"027ef0a7-d55c-4187-aff0-931ba50db57f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62dc6b7572b93a8e91f4dc1404de860af85b5e37d2c1a802e4a481b01bfd892c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c9db4ec27597
862bfc36c6a62900a3aa639ebb33d7c7212d449cc85a321d60d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9db4ec27597862bfc36c6a62900a3aa639ebb33d7c7212d449cc85a321d60d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.640465 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bvmzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9be66670-47c2-4d05-bf3d-59ae6f4ff53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bvmzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:49 crc 
kubenswrapper[4795]: I1129 07:40:49.648833 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d047b5f2e979d7eebec46b45ef363e94e99bd7311214872c66fcce39cfd411db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.660080 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00a5c9d1a3e5371aa4899c05c4c4eecdbbbe3666777446a5fa82e13837a23050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e1
2f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:40:31Z\\\",\\\"message\\\":\\\"2025-11-29T07:39:45+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7afd0fe0-7b98-4b72-a478-6a6cb3a1287c\\\\n2025-11-29T07:39:45+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7afd0fe0-7b98-4b72-a478-6a6cb3a1287c to /host/opt/cni/bin/\\\\n2025-11-29T07:39:45Z [verbose] multus-daemon started\\\\n2025-11-29T07:39:45Z [verbose] Readiness Indicator file check\\\\n2025-11-29T07:40:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.677306 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.677349 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.677364 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.677384 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.677399 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:49Z","lastTransitionTime":"2025-11-29T07:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.779421 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.779472 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.779488 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.779509 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.779525 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:49Z","lastTransitionTime":"2025-11-29T07:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.881754 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.881792 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.881801 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.881815 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.881824 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:49Z","lastTransitionTime":"2025-11-29T07:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.984439 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.984478 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.984489 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.984506 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:49 crc kubenswrapper[4795]: I1129 07:40:49.984518 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:49Z","lastTransitionTime":"2025-11-29T07:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.087879 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.087985 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.088007 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.088051 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.088097 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:50Z","lastTransitionTime":"2025-11-29T07:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.190956 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.191049 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.191073 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.191105 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.191128 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:50Z","lastTransitionTime":"2025-11-29T07:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.275873 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.275914 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.275962 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.275875 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:40:50 crc kubenswrapper[4795]: E1129 07:40:50.276048 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:40:50 crc kubenswrapper[4795]: E1129 07:40:50.276152 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:40:50 crc kubenswrapper[4795]: E1129 07:40:50.276381 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:40:50 crc kubenswrapper[4795]: E1129 07:40:50.276708 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.294555 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.294629 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.294638 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.294670 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.294680 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:50Z","lastTransitionTime":"2025-11-29T07:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.398207 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.398264 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.398276 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.398294 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.398307 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:50Z","lastTransitionTime":"2025-11-29T07:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.432818 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-km2g9_3d3ff2b2-cbaa-4309-805a-2b044f867d3a/ovnkube-controller/3.log" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.437675 4795 scope.go:117] "RemoveContainer" containerID="265a8143fe9383f13d2e5d972c3fdd716990d82b30245eeb740f8a0fdb4f089c" Nov 29 07:40:50 crc kubenswrapper[4795]: E1129 07:40:50.437833 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-km2g9_openshift-ovn-kubernetes(3d3ff2b2-cbaa-4309-805a-2b044f867d3a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.451569 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00a5c9d1a3e5371aa4899c05c4c4eecdbbbe3666777446a5fa82e13837a23050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:40:31Z\\\",\\\"message\\\":\\\"2025-11-29T07:39:45+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7afd0fe0-7b98-4b72-a478-6a6cb3a1287c\\\\n2025-11-29T07:39:45+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7afd0fe0-7b98-4b72-a478-6a6cb3a1287c to /host/opt/cni/bin/\\\\n2025-11-29T07:39:45Z [verbose] multus-daemon started\\\\n2025-11-29T07:39:45Z [verbose] 
Readiness Indicator file check\\\\n2025-11-29T07:40:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.463578 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d047b5f2e979d7eebec46b
45ef363e94e99bd7311214872c66fcce39cfd411db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.474284 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s9x86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcb60df4-ad48-4830-b8c7-c63621b96707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6d9cdb665b3bed924577ab173dcf48968a0b7217ca81b1f8708d733d27d3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s9x86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.487576 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6472ebbc-939b-4dd0-8b03-110cb9811484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad977e9d70f50688539c9d0c9cf08789584d08f77792d3dde1a619e9d958cff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2497113482ab716b22a90b7872f2837bccd484b765e979c2c67a394f82de688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qcmb5\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.500611 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.500660 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.500673 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.500695 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.500710 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:50Z","lastTransitionTime":"2025-11-29T07:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.503339 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"262bf41d-eaf0-42b7-946d-3223857ca705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbfc45764b29055dcafdee2e3b77893c35b23cfa3eaa9125f01850baef2fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://508f7cb88a5
b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9f1344bd7c3a1bc53870f7e5f5f2ca4993df112a24aa474655cc07670e22c48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06202234f21aa27a3dc67978cc9f82738d7526100bce3fd640be08131413f69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.520222 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.534929 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.548655 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b771b13a7bbce9270d919a830c700a1a04bd1d82141a1487e7abb6b1def634cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:40:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79e6
f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.559878 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.573732 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.588436 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.600718 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4d032ee6e86c2b61a59ea1477aa6d97a454c754888c37db24bc49d921eacd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.602339 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.602384 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.602396 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.602412 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.602424 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:50Z","lastTransitionTime":"2025-11-29T07:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.613042 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://148610b20c1dfe43c3e5443cda520fa95ed2823afcaa880dbe879b6d93dc8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3aac368767d0534a658bc0223f5cda0a951940f2d6ff3f1c89a6ac401a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.629748 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceed8db447ae237cae16d2aa37b1651eade557ea7aaab5bc7b55678ce6e36f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce9ccf35e72987d01428b7573aafacfe90fc973009657b077f40b6fe6ddecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248e5291402287c1a589a168527135bd1df15e293a7614681fc896fcbdeb9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8112226f945ad27e0c5af4d588800322977e17efc3eac9acec2ce591f30f46fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feacad8666527640ec9d6bee04a743e99e0cc9f9e9c6432c1a70bc02e85fa039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25239d7c14973e35144f9ac1b909564749220fc32b1851b68e08434a5aca7def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://265a8143fe9383f13d2e5d972c3fdd716990d82b30245eeb740f8a0fdb4f089c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://265a8143fe9383f13d2e5d972c3fdd716990d82b30245eeb740f8a0fdb4f089c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:40:48Z\\\",\\\"message\\\":\\\"le:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1129 07:40:48.655993 6816 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler-operator/metrics]} 
name:Service_openshift-kube-scheduler-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.233:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1dc899db-4498-4b7a-8437-861940b962e7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1129 07:40:48.655207 6816 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-bvmzq before timer (time: 2025-11-29 07:40:50.085660412 +0000 UTC m=+1.991744270): skip\\\\nI1129 07:40:48.656054 6816 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 1.136331ms)\\\\nI1129 07:40:48.656174 6816 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:40:48.656202 6816 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:40:48.656253 6816 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:40:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-km2g9_openshift-ovn-kubernetes(3d3ff2b2-cbaa-4309-805a-2b044f867d3a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63ac8cc3766762113c8fdfdc459f77570fc533bf636011f1e64a5c0a55b3ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb
5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-km2g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.640838 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"027ef0a7-d55c-4187-aff0-931ba50db57f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62dc6b7572b93a8e91f4dc1404de860af85b5e37d2c1a802e4a481b01bfd892c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c9db4ec27597862bfc36c6a62900a3aa639ebb33d7c7212d449cc85a321d60d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9db4ec27597862bfc36c6a62900a3aa639ebb33d7c7212d449cc85a321d60d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.653707 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959
e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.667308 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c05a965f-51fb-403d-bbac-b34d4d6b2bbd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98152568d086a30f06fcd1535110482b356c587d906973de9baedd65fc85b6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34ebb0f2c455d2133d2fcf00d19e74d539e80523f2ed0b2ccc1f8eabcf2cb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9d7937efcaeb684eb229e396c7896a805f1bf205bda1cd1b8113aeda2bea5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcaeb06b249f6fa81deaf9f997ca41c9dd7f3568d458072208901c04543e272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3bcaeb06b249f6fa81deaf9f997ca41c9dd7f3568d458072208901c04543e272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.678269 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bvmzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9be66670-47c2-4d05-bf3d-59ae6f4ff53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bvmzq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.704935 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.705231 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.705336 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.705421 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.705500 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:50Z","lastTransitionTime":"2025-11-29T07:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.807305 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.807338 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.807347 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.807360 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.807368 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:50Z","lastTransitionTime":"2025-11-29T07:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.909804 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.909848 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.909857 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.909869 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:50 crc kubenswrapper[4795]: I1129 07:40:50.909878 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:50Z","lastTransitionTime":"2025-11-29T07:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.011870 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.011922 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.011938 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.011954 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.011967 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:51Z","lastTransitionTime":"2025-11-29T07:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.114471 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.114852 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.115120 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.115476 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.115699 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:51Z","lastTransitionTime":"2025-11-29T07:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.217899 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.218128 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.218187 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.218243 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.218296 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:51Z","lastTransitionTime":"2025-11-29T07:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.321251 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.321641 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.321792 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.321964 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.322091 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:51Z","lastTransitionTime":"2025-11-29T07:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.425390 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.425446 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.425463 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.425484 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.425500 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:51Z","lastTransitionTime":"2025-11-29T07:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.527463 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.527493 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.527503 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.527520 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.527532 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:51Z","lastTransitionTime":"2025-11-29T07:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.629853 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.630124 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.630244 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.630318 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.630377 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:51Z","lastTransitionTime":"2025-11-29T07:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.732729 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.732768 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.732777 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.732792 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.732801 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:51Z","lastTransitionTime":"2025-11-29T07:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.835111 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.835157 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.835166 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.835182 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.835196 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:51Z","lastTransitionTime":"2025-11-29T07:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.938265 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.938334 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.938351 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.938375 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:51 crc kubenswrapper[4795]: I1129 07:40:51.938399 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:51Z","lastTransitionTime":"2025-11-29T07:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.041675 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.041726 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.041745 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.041762 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.041774 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:52Z","lastTransitionTime":"2025-11-29T07:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.145629 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.145685 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.145704 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.145733 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.145753 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:52Z","lastTransitionTime":"2025-11-29T07:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.249611 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.249670 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.249683 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.249704 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.249720 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:52Z","lastTransitionTime":"2025-11-29T07:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.274954 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.275025 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:40:52 crc kubenswrapper[4795]: E1129 07:40:52.275095 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:40:52 crc kubenswrapper[4795]: E1129 07:40:52.275236 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.275349 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:40:52 crc kubenswrapper[4795]: E1129 07:40:52.275436 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.275797 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:40:52 crc kubenswrapper[4795]: E1129 07:40:52.275882 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.352924 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.352982 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.352994 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.353010 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.353021 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:52Z","lastTransitionTime":"2025-11-29T07:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.454817 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.454849 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.454861 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.454876 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.454888 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:52Z","lastTransitionTime":"2025-11-29T07:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.557100 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.557138 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.557150 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.557165 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.557177 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:52Z","lastTransitionTime":"2025-11-29T07:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.659447 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.659490 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.659500 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.659512 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.659523 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:52Z","lastTransitionTime":"2025-11-29T07:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.761241 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.761281 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.761290 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.761304 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.761313 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:52Z","lastTransitionTime":"2025-11-29T07:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.863532 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.863584 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.863624 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.863643 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.863654 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:52Z","lastTransitionTime":"2025-11-29T07:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.966362 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.966419 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.966435 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.966456 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:52 crc kubenswrapper[4795]: I1129 07:40:52.966474 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:52Z","lastTransitionTime":"2025-11-29T07:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.069308 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.069409 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.069424 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.069441 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.069455 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:53Z","lastTransitionTime":"2025-11-29T07:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.172046 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.172126 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.172154 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.172185 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.172210 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:53Z","lastTransitionTime":"2025-11-29T07:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.275009 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.275059 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.275073 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.275088 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.275098 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:53Z","lastTransitionTime":"2025-11-29T07:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.376750 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.376790 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.376805 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.376820 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.376830 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:53Z","lastTransitionTime":"2025-11-29T07:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.479057 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.479137 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.479152 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.479171 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.479186 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:53Z","lastTransitionTime":"2025-11-29T07:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.581797 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.582102 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.582204 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.582298 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.582396 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:53Z","lastTransitionTime":"2025-11-29T07:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.685221 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.685256 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.685267 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.685284 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.685296 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:53Z","lastTransitionTime":"2025-11-29T07:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.787584 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.787643 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.787657 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.787673 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.787683 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:53Z","lastTransitionTime":"2025-11-29T07:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.890511 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.890559 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.890572 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.890607 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.890619 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:53Z","lastTransitionTime":"2025-11-29T07:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.992506 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.992545 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.992562 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.992583 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:53 crc kubenswrapper[4795]: I1129 07:40:53.992631 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:53Z","lastTransitionTime":"2025-11-29T07:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.095280 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.095329 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.095338 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.095352 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.095362 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:54Z","lastTransitionTime":"2025-11-29T07:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.198318 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.198368 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.198376 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.198389 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.198398 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:54Z","lastTransitionTime":"2025-11-29T07:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.274997 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.275002 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:40:54 crc kubenswrapper[4795]: E1129 07:40:54.275178 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.275205 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:40:54 crc kubenswrapper[4795]: E1129 07:40:54.275242 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.275063 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:40:54 crc kubenswrapper[4795]: E1129 07:40:54.275296 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:40:54 crc kubenswrapper[4795]: E1129 07:40:54.275455 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.296815 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.300272 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.300305 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.300318 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.300335 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.300352 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:54Z","lastTransitionTime":"2025-11-29T07:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.310549 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2faa21199ef0e7bbf065823fff1e4b33636239d022db292bec68ff54e8c948c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://3c09cfeaabce351a2face8aa09a04f88aa9d50122718358d07301aeab84e4a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.335223 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceed8db447ae237cae16d2aa37b1651eade557ea7aaab5bc7b55678ce6e36f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce9ccf35e72987d01428b7573aafacfe90fc973009657b077f40b6fe6ddecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248e5291402287c1a589a168527135bd1df15e293a7614681fc896fcbdeb9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8112226f945ad27e0c5af4d588800322977e17efc3eac9acec2ce591f30f46fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feacad8666527640ec9d6bee04a743e99e0cc9f9e9c6432c1a70bc02e85fa039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25239d7c14973e35144f9ac1b909564749220fc32b1851b68e08434a5aca7def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://265a8143fe9383f13d2e5d972c3fdd716990d82b30245eeb740f8a0fdb4f089c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://265a8143fe9383f13d2e5d972c3fdd716990d82b30245eeb740f8a0fdb4f089c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:40:48Z\\\",\\\"message\\\":\\\"le:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1129 07:40:48.655993 6816 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler-operator/metrics]} 
name:Service_openshift-kube-scheduler-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.233:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1dc899db-4498-4b7a-8437-861940b962e7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1129 07:40:48.655207 6816 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-bvmzq before timer (time: 2025-11-29 07:40:50.085660412 +0000 UTC m=+1.991744270): skip\\\\nI1129 07:40:48.656054 6816 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 1.136331ms)\\\\nI1129 07:40:48.656174 6816 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:40:48.656202 6816 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:40:48.656253 6816 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:40:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-km2g9_openshift-ovn-kubernetes(3d3ff2b2-cbaa-4309-805a-2b044f867d3a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63ac8cc3766762113c8fdfdc459f77570fc533bf636011f1e64a5c0a55b3ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e28f6209bd7be6beb
5d86d316c3194ad3162ae8561ac532a3dfc894415fb406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psskj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-km2g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.344910 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"027ef0a7-d55c-4187-aff0-931ba50db57f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62dc6b7572b93a8e91f4dc1404de860af85b5e37d2c1a802e4a481b01bfd892c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c9db4ec27597862bfc36c6a62900a3aa639ebb33d7c7212d449cc85a321d60d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9db4ec27597862bfc36c6a62900a3aa639ebb33d7c7212d449cc85a321d60d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.356926 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84bbfda4-f166-406b-8459-d8cad5d21032\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T07:39:37Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 07:39:27.030389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 07:39:27.033381 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-823169215/tls.crt::/tmp/serving-cert-823169215/tls.key\\\\\\\"\\\\nI1129 07:39:37.392848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 07:39:37.396263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 07:39:37.396294 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 07:39:37.396669 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 07:39:37.396692 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 07:39:37.407579 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 07:39:37.407638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407648 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 07:39:37.407655 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 07:39:37.407660 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 07:39:37.407665 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 07:39:37.407670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 07:39:37.408043 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1129 07:39:37.411891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ab5a13e4f69a8311eb8bf99463875959
e0c2ca3a645511c98d703cc9b01f893\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.368044 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c05a965f-51fb-403d-bbac-b34d4d6b2bbd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98152568d086a30f06fcd1535110482b356c587d906973de9baedd65fc85b6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34ebb0f2c455d2133d2fcf00d19e74d539e80523f2ed0b2ccc1f8eabcf2cb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9d7937efcaeb684eb229e396c7896a805f1bf205bda1cd1b8113aeda2bea5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcaeb06b249f6fa81deaf9f997ca41c9dd7f3568d458072208901c04543e272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3bcaeb06b249f6fa81deaf9f997ca41c9dd7f3568d458072208901c04543e272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.387839 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30fbc9849348e3900b416f78f697f1d88eb5eb018f5ee420e711561423a79459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.399425 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4d032ee6e86c2b61a59ea1477aa6d97a454c754888c37db24bc49d921eacd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.403184 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.403220 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.403233 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.403251 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.403263 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:54Z","lastTransitionTime":"2025-11-29T07:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.412337 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://148610b20c1dfe43c3e5443cda520fa95ed2823afcaa880dbe879b6d93dc8ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f3aac368767d0534a658bc0223f5cda0a951940f2d6ff3f1c89a6ac401a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p5fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkmq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.422299 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bvmzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9be66670-47c2-4d05-bf3d-59ae6f4ff53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4h7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bvmzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:54 crc 
kubenswrapper[4795]: I1129 07:40:54.433356 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hbg2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50b9c3ea-4ff5-434f-803c-2365a0938f9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00a5c9d1a3e5371aa4899c05c4c4eecdbbbe3666777446a5fa82e13837a23050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:40:31Z\\\",\\\"message\\\":\\\"2025-11-29T07:39:45+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7afd0fe0-7b98-4b72-a478-6a6cb3a1287c\\\\n2025-11-29T07:39:45+00:00 [cnibincopy] Successfully moved files in 
/host/opt/cni/bin/upgrade_7afd0fe0-7b98-4b72-a478-6a6cb3a1287c to /host/opt/cni/bin/\\\\n2025-11-29T07:39:45Z [verbose] multus-daemon started\\\\n2025-11-29T07:39:45Z [verbose] Readiness Indicator file check\\\\n2025-11-29T07:40:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\
\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv7gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hbg2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.443192 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vcd5b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3fc3441-5d98-4323-8a78-cab492090c5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d047b5f2e979d7eebec46b45ef363e94e99bd7311214872c66fcce39cfd411db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vcd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.454765 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"262bf41d-eaf0-42b7-946d-3223857ca705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbfc45764b29055dcafdee2e3b77893c35b23cfa3eaa9125f01850baef2fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://508f7cb88a5b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9f1344bd7c3a1bc53870f7e5f5f2ca4993df112a24aa474655cc07670e22c48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06202234f21aa27a3dc67978cc9f82738d7526100bce3fd640be08131413f69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.466383 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.478327 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.491209 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27975" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceab872d-7a73-44d5-936e-3dd17facf399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b771b13a7bbce9270d919a830c700a1a04bd1d82141a1487e7abb6b1def634cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:40:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e24a1ba0b3bd47c8cff733e54ce43cf1641ba69df8a3d2083f47fcfdcd6640\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9e93605467f5dae805a8b5b55e1eebc3f046f4ab7fd3a409235b918d8b1aa88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:44Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1856815c6475b6c9db1d59694dfc249a9b254ca7f6b402570dbb791f60e038\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79e6
f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c79e6f292a540eaeab2781c31973da7e0213d08460452129dbb25075c05a9094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9ceaddb24d5dc8e3689b2e05482fd76883f4ce3475557c03419491b53e800d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2966e801bc03a51bece33fae1207ba6a432620fcbcb1c702abddbb11b40a1e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgsq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27975\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.500304 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s9x86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcb60df4-ad48-4830-b8c7-c63621b96707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6d9cdb665b3bed924577ab173dcf48968a0b7217ca81b1f8708d733d27d3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-29T07:39:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s9x86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.505679 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.505715 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.505728 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.505748 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.505801 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:54Z","lastTransitionTime":"2025-11-29T07:40:54Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.512176 4795 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6472ebbc-939b-4dd0-8b03-110cb9811484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad977e9d70f50688539c9d0c9cf08789584d08f77792d3dde1a619e9d958cff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:54Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2497113482ab716b22a90b7872f2837bccd484b765e979c2c67a394f82de688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:39:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qwxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:39:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qcmb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:54Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:54 crc 
kubenswrapper[4795]: I1129 07:40:54.607935 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.607964 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.607973 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.607985 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.607993 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:54Z","lastTransitionTime":"2025-11-29T07:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.710652 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.710699 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.710713 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.710733 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.710749 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:54Z","lastTransitionTime":"2025-11-29T07:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.813947 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.814002 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.814020 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.814043 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.814059 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:54Z","lastTransitionTime":"2025-11-29T07:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.916651 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.916715 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.916740 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.916768 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:54 crc kubenswrapper[4795]: I1129 07:40:54.916790 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:54Z","lastTransitionTime":"2025-11-29T07:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.018855 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.018916 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.018937 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.018966 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.018981 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:55Z","lastTransitionTime":"2025-11-29T07:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.063015 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.063079 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.063093 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.063110 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.063122 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:55Z","lastTransitionTime":"2025-11-29T07:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:55 crc kubenswrapper[4795]: E1129 07:40:55.077806 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8cd0c036-22eb-405e-b57e-ec0c4424780e\\\",\\\"systemUUID\\\":\\\"bd085386-a70e-485f-9f18-00b3aef4bcca\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:55Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.082238 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.082278 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.082294 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.082315 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.082330 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:55Z","lastTransitionTime":"2025-11-29T07:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:55 crc kubenswrapper[4795]: E1129 07:40:55.096636 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8cd0c036-22eb-405e-b57e-ec0c4424780e\\\",\\\"systemUUID\\\":\\\"bd085386-a70e-485f-9f18-00b3aef4bcca\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:55Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.100527 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.100564 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.100573 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.100599 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.100611 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:55Z","lastTransitionTime":"2025-11-29T07:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:55 crc kubenswrapper[4795]: E1129 07:40:55.118929 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8cd0c036-22eb-405e-b57e-ec0c4424780e\\\",\\\"systemUUID\\\":\\\"bd085386-a70e-485f-9f18-00b3aef4bcca\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:55Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.123016 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.123039 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.123047 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.123061 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.123070 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:55Z","lastTransitionTime":"2025-11-29T07:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:55 crc kubenswrapper[4795]: E1129 07:40:55.136509 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8cd0c036-22eb-405e-b57e-ec0c4424780e\\\",\\\"systemUUID\\\":\\\"bd085386-a70e-485f-9f18-00b3aef4bcca\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:55Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.140299 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.140328 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.140337 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.140350 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.140360 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:55Z","lastTransitionTime":"2025-11-29T07:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:55 crc kubenswrapper[4795]: E1129 07:40:55.153081 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:40:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:40:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8cd0c036-22eb-405e-b57e-ec0c4424780e\\\",\\\"systemUUID\\\":\\\"bd085386-a70e-485f-9f18-00b3aef4bcca\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:40:55Z is after 2025-08-24T17:21:41Z" Nov 29 07:40:55 crc kubenswrapper[4795]: E1129 07:40:55.153195 4795 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.154728 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.154754 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.154763 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.154779 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.154790 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:55Z","lastTransitionTime":"2025-11-29T07:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.256763 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.256803 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.256811 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.256824 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.256834 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:55Z","lastTransitionTime":"2025-11-29T07:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.359667 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.359718 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.359732 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.359750 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.359764 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:55Z","lastTransitionTime":"2025-11-29T07:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.463096 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.463165 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.463190 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.463221 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.463244 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:55Z","lastTransitionTime":"2025-11-29T07:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.566187 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.566250 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.566266 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.566290 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.566306 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:55Z","lastTransitionTime":"2025-11-29T07:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.668635 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.668691 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.668702 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.668723 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.668735 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:55Z","lastTransitionTime":"2025-11-29T07:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.772028 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.772105 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.772124 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.772149 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.772166 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:55Z","lastTransitionTime":"2025-11-29T07:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.874399 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.874441 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.874452 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.874469 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.874481 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:55Z","lastTransitionTime":"2025-11-29T07:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.976979 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.977023 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.977038 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.977059 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:55 crc kubenswrapper[4795]: I1129 07:40:55.977076 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:55Z","lastTransitionTime":"2025-11-29T07:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.079286 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.079325 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.079336 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.079352 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.079365 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:56Z","lastTransitionTime":"2025-11-29T07:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.181798 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.181857 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.181866 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.181883 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.181892 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:56Z","lastTransitionTime":"2025-11-29T07:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.275239 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.275271 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:40:56 crc kubenswrapper[4795]: E1129 07:40:56.275354 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.275371 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.275397 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:40:56 crc kubenswrapper[4795]: E1129 07:40:56.275463 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:40:56 crc kubenswrapper[4795]: E1129 07:40:56.275518 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:40:56 crc kubenswrapper[4795]: E1129 07:40:56.275570 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.283671 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.283709 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.283724 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.283740 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.283752 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:56Z","lastTransitionTime":"2025-11-29T07:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.386280 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.386322 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.386332 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.386348 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.386357 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:56Z","lastTransitionTime":"2025-11-29T07:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.489406 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.489473 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.489493 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.489516 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.489533 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:56Z","lastTransitionTime":"2025-11-29T07:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.592840 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.592892 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.592907 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.592924 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.592937 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:56Z","lastTransitionTime":"2025-11-29T07:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.695739 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.695804 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.695818 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.695836 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.695849 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:56Z","lastTransitionTime":"2025-11-29T07:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.799715 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.799766 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.799783 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.799803 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.799816 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:56Z","lastTransitionTime":"2025-11-29T07:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.903228 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.903273 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.903282 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.903295 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:56 crc kubenswrapper[4795]: I1129 07:40:56.903305 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:56Z","lastTransitionTime":"2025-11-29T07:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.006970 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.007088 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.007115 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.007149 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.007169 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:57Z","lastTransitionTime":"2025-11-29T07:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.110855 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.110952 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.110980 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.111019 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.111056 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:57Z","lastTransitionTime":"2025-11-29T07:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.213736 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.213790 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.213806 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.213827 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.213843 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:57Z","lastTransitionTime":"2025-11-29T07:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.315732 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.315771 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.315782 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.315797 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.315807 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:57Z","lastTransitionTime":"2025-11-29T07:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.418969 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.419013 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.419026 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.419042 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.419058 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:57Z","lastTransitionTime":"2025-11-29T07:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.522106 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.522162 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.522178 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.522199 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.522215 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:57Z","lastTransitionTime":"2025-11-29T07:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.624949 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.625020 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.625034 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.625076 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.625110 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:57Z","lastTransitionTime":"2025-11-29T07:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.728902 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.728968 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.728987 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.729016 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.729225 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:57Z","lastTransitionTime":"2025-11-29T07:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.833846 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.833905 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.833917 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.833935 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.833947 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:57Z","lastTransitionTime":"2025-11-29T07:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.938076 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.938130 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.938170 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.938189 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:57 crc kubenswrapper[4795]: I1129 07:40:57.938200 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:57Z","lastTransitionTime":"2025-11-29T07:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.040777 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.040840 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.040850 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.040870 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.040886 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:58Z","lastTransitionTime":"2025-11-29T07:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.145087 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.145135 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.145152 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.145172 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.145184 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:58Z","lastTransitionTime":"2025-11-29T07:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.248647 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.248706 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.248718 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.248740 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.248754 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:58Z","lastTransitionTime":"2025-11-29T07:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.275672 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.275808 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:40:58 crc kubenswrapper[4795]: E1129 07:40:58.276069 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.275886 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:40:58 crc kubenswrapper[4795]: E1129 07:40:58.276228 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:40:58 crc kubenswrapper[4795]: E1129 07:40:58.276390 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.276474 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:40:58 crc kubenswrapper[4795]: E1129 07:40:58.276573 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.352411 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.352452 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.352463 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.352478 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.352489 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:58Z","lastTransitionTime":"2025-11-29T07:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.455540 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.455631 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.455650 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.455677 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.455694 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:58Z","lastTransitionTime":"2025-11-29T07:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.558927 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.558981 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.558995 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.559012 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.559027 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:58Z","lastTransitionTime":"2025-11-29T07:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.662652 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.662718 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.662728 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.662743 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.662755 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:58Z","lastTransitionTime":"2025-11-29T07:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.759581 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9be66670-47c2-4d05-bf3d-59ae6f4ff53b-metrics-certs\") pod \"network-metrics-daemon-bvmzq\" (UID: \"9be66670-47c2-4d05-bf3d-59ae6f4ff53b\") " pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:40:58 crc kubenswrapper[4795]: E1129 07:40:58.759935 4795 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:40:58 crc kubenswrapper[4795]: E1129 07:40:58.760138 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9be66670-47c2-4d05-bf3d-59ae6f4ff53b-metrics-certs podName:9be66670-47c2-4d05-bf3d-59ae6f4ff53b nodeName:}" failed. No retries permitted until 2025-11-29 07:42:02.760089189 +0000 UTC m=+168.735665129 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9be66670-47c2-4d05-bf3d-59ae6f4ff53b-metrics-certs") pod "network-metrics-daemon-bvmzq" (UID: "9be66670-47c2-4d05-bf3d-59ae6f4ff53b") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.767368 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.767424 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.767436 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.767459 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.767471 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:58Z","lastTransitionTime":"2025-11-29T07:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.871571 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.871748 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.871772 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.871805 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.871826 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:58Z","lastTransitionTime":"2025-11-29T07:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.975259 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.975333 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.975348 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.975370 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:58 crc kubenswrapper[4795]: I1129 07:40:58.975386 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:58Z","lastTransitionTime":"2025-11-29T07:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.078450 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.078527 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.078538 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.078567 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.078585 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:59Z","lastTransitionTime":"2025-11-29T07:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.181623 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.181673 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.181683 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.181699 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.181711 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:59Z","lastTransitionTime":"2025-11-29T07:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.284895 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.284968 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.284992 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.285018 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.285038 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:59Z","lastTransitionTime":"2025-11-29T07:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.388887 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.388932 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.389005 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.389023 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.389037 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:59Z","lastTransitionTime":"2025-11-29T07:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.491547 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.491582 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.491611 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.491626 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.491638 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:59Z","lastTransitionTime":"2025-11-29T07:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.594421 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.594466 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.594476 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.594489 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.594498 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:59Z","lastTransitionTime":"2025-11-29T07:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.698536 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.698626 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.698643 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.698664 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.698676 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:59Z","lastTransitionTime":"2025-11-29T07:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.801759 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.801810 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.801822 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.801841 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.801853 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:59Z","lastTransitionTime":"2025-11-29T07:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.906424 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.906517 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.906556 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.906631 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:40:59 crc kubenswrapper[4795]: I1129 07:40:59.906665 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:40:59Z","lastTransitionTime":"2025-11-29T07:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.010641 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.010737 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.010764 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.010799 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.010825 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:00Z","lastTransitionTime":"2025-11-29T07:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.115049 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.115115 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.115135 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.115164 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.115185 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:00Z","lastTransitionTime":"2025-11-29T07:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.219078 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.219157 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.219181 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.219274 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.219303 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:00Z","lastTransitionTime":"2025-11-29T07:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.274885 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.274992 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.275083 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:41:00 crc kubenswrapper[4795]: E1129 07:41:00.275095 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:41:00 crc kubenswrapper[4795]: E1129 07:41:00.275250 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.275772 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:41:00 crc kubenswrapper[4795]: E1129 07:41:00.275925 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:41:00 crc kubenswrapper[4795]: E1129 07:41:00.276203 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.323242 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.323304 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.323323 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.323347 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.323366 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:00Z","lastTransitionTime":"2025-11-29T07:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.426407 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.426523 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.426558 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.426636 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.426664 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:00Z","lastTransitionTime":"2025-11-29T07:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.529497 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.529535 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.529544 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.529558 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.529568 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:00Z","lastTransitionTime":"2025-11-29T07:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.632531 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.632561 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.632569 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.632580 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.632612 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:00Z","lastTransitionTime":"2025-11-29T07:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.735580 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.735641 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.735650 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.735665 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.735674 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:00Z","lastTransitionTime":"2025-11-29T07:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.839279 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.839340 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.839358 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.839378 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.839392 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:00Z","lastTransitionTime":"2025-11-29T07:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.943227 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.943297 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.943317 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.943341 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:00 crc kubenswrapper[4795]: I1129 07:41:00.943359 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:00Z","lastTransitionTime":"2025-11-29T07:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.045767 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.045797 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.045807 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.045821 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.045830 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:01Z","lastTransitionTime":"2025-11-29T07:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.149399 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.149473 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.149490 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.149514 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.149528 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:01Z","lastTransitionTime":"2025-11-29T07:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.253063 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.253149 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.253165 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.253193 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.253212 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:01Z","lastTransitionTime":"2025-11-29T07:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.356179 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.356224 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.356235 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.356253 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.356266 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:01Z","lastTransitionTime":"2025-11-29T07:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.460023 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.460110 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.460135 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.460169 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.460197 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:01Z","lastTransitionTime":"2025-11-29T07:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.563411 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.563476 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.563489 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.563514 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.563532 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:01Z","lastTransitionTime":"2025-11-29T07:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.666663 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.666718 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.666729 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.666750 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.666765 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:01Z","lastTransitionTime":"2025-11-29T07:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.770073 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.770133 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.770145 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.770166 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.770178 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:01Z","lastTransitionTime":"2025-11-29T07:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.873680 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.873726 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.873738 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.873757 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.873770 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:01Z","lastTransitionTime":"2025-11-29T07:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.976435 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.976499 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.976512 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.976536 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:01 crc kubenswrapper[4795]: I1129 07:41:01.976554 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:01Z","lastTransitionTime":"2025-11-29T07:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.079198 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.079250 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.079261 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.079279 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.079291 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:02Z","lastTransitionTime":"2025-11-29T07:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.182823 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.182867 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.182880 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.182900 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.182912 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:02Z","lastTransitionTime":"2025-11-29T07:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.275275 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.275277 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.275321 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.275651 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:41:02 crc kubenswrapper[4795]: E1129 07:41:02.275898 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:41:02 crc kubenswrapper[4795]: E1129 07:41:02.276030 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:41:02 crc kubenswrapper[4795]: E1129 07:41:02.276402 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:41:02 crc kubenswrapper[4795]: E1129 07:41:02.276536 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.285425 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.285475 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.285488 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.285510 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.285525 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:02Z","lastTransitionTime":"2025-11-29T07:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.296444 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.389422 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.389550 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.389571 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.389637 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.389658 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:02Z","lastTransitionTime":"2025-11-29T07:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.492247 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.492304 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.492323 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.492349 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.492367 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:02Z","lastTransitionTime":"2025-11-29T07:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.596426 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.596476 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.596487 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.596504 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.596517 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:02Z","lastTransitionTime":"2025-11-29T07:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.699396 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.699450 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.699462 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.699481 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.699494 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:02Z","lastTransitionTime":"2025-11-29T07:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.802886 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.802953 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.802968 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.802987 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.803000 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:02Z","lastTransitionTime":"2025-11-29T07:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.905941 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.906030 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.906057 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.906091 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:02 crc kubenswrapper[4795]: I1129 07:41:02.906115 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:02Z","lastTransitionTime":"2025-11-29T07:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.008359 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.008407 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.008418 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.008434 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.008448 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:03Z","lastTransitionTime":"2025-11-29T07:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.110954 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.111009 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.111023 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.111043 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.111059 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:03Z","lastTransitionTime":"2025-11-29T07:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.213159 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.213200 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.213209 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.213224 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.213233 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:03Z","lastTransitionTime":"2025-11-29T07:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.276081 4795 scope.go:117] "RemoveContainer" containerID="265a8143fe9383f13d2e5d972c3fdd716990d82b30245eeb740f8a0fdb4f089c" Nov 29 07:41:03 crc kubenswrapper[4795]: E1129 07:41:03.276302 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-km2g9_openshift-ovn-kubernetes(3d3ff2b2-cbaa-4309-805a-2b044f867d3a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.315962 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.316016 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.316028 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.316047 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.316058 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:03Z","lastTransitionTime":"2025-11-29T07:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.418316 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.418352 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.418364 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.418378 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.418387 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:03Z","lastTransitionTime":"2025-11-29T07:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.520887 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.520941 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.520953 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.520976 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.520988 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:03Z","lastTransitionTime":"2025-11-29T07:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.624146 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.624228 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.624248 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.624276 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.624294 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:03Z","lastTransitionTime":"2025-11-29T07:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.726949 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.726979 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.726987 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.727000 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.727009 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:03Z","lastTransitionTime":"2025-11-29T07:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.829157 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.829217 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.829230 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.829246 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.829258 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:03Z","lastTransitionTime":"2025-11-29T07:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.931730 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.931791 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.931806 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.931829 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:03 crc kubenswrapper[4795]: I1129 07:41:03.931842 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:03Z","lastTransitionTime":"2025-11-29T07:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.035227 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.035290 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.035309 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.035333 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.035348 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:04Z","lastTransitionTime":"2025-11-29T07:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.139065 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.139122 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.139133 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.139157 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.139169 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:04Z","lastTransitionTime":"2025-11-29T07:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.242210 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.242644 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.242665 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.242691 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.242709 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:04Z","lastTransitionTime":"2025-11-29T07:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.275697 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.275697 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.275716 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.275879 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:41:04 crc kubenswrapper[4795]: E1129 07:41:04.276038 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:41:04 crc kubenswrapper[4795]: E1129 07:41:04.276324 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:41:04 crc kubenswrapper[4795]: E1129 07:41:04.276375 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:41:04 crc kubenswrapper[4795]: E1129 07:41:04.276447 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.338767 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=61.33875079 podStartE2EDuration="1m1.33875079s" podCreationTimestamp="2025-11-29 07:40:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:04.338458822 +0000 UTC m=+110.314034622" watchObservedRunningTime="2025-11-29 07:41:04.33875079 +0000 UTC m=+110.314326580" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.345565 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.345621 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.345634 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.345651 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.345662 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:04Z","lastTransitionTime":"2025-11-29T07:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.390628 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podStartSLOduration=85.390584738 podStartE2EDuration="1m25.390584738s" podCreationTimestamp="2025-11-29 07:39:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:04.390016793 +0000 UTC m=+110.365592593" watchObservedRunningTime="2025-11-29 07:41:04.390584738 +0000 UTC m=+110.366160538" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.453150 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.453446 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.453509 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.453583 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.453663 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:04Z","lastTransitionTime":"2025-11-29T07:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.463538 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=21.463503413 podStartE2EDuration="21.463503413s" podCreationTimestamp="2025-11-29 07:40:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:04.43817576 +0000 UTC m=+110.413751550" watchObservedRunningTime="2025-11-29 07:41:04.463503413 +0000 UTC m=+110.439079203" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.482454 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=2.482431641 podStartE2EDuration="2.482431641s" podCreationTimestamp="2025-11-29 07:41:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:04.464583813 +0000 UTC m=+110.440159603" watchObservedRunningTime="2025-11-29 07:41:04.482431641 +0000 UTC m=+110.458007431" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.495702 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=86.495672333 podStartE2EDuration="1m26.495672333s" podCreationTimestamp="2025-11-29 07:39:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:04.4853038 +0000 UTC m=+110.460879600" watchObservedRunningTime="2025-11-29 07:41:04.495672333 +0000 UTC m=+110.471248113" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.511489 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-hbg2m" podStartSLOduration=84.511468846 podStartE2EDuration="1m24.511468846s" 
podCreationTimestamp="2025-11-29 07:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:04.510699975 +0000 UTC m=+110.486275775" watchObservedRunningTime="2025-11-29 07:41:04.511468846 +0000 UTC m=+110.487044636" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.521569 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-vcd5b" podStartSLOduration=85.521542491 podStartE2EDuration="1m25.521542491s" podCreationTimestamp="2025-11-29 07:39:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:04.520288407 +0000 UTC m=+110.495864197" watchObservedRunningTime="2025-11-29 07:41:04.521542491 +0000 UTC m=+110.497118291" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.536752 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-27975" podStartSLOduration=84.536734407 podStartE2EDuration="1m24.536734407s" podCreationTimestamp="2025-11-29 07:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:04.536342376 +0000 UTC m=+110.511918166" watchObservedRunningTime="2025-11-29 07:41:04.536734407 +0000 UTC m=+110.512310197" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.546158 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-s9x86" podStartSLOduration=85.546140724 podStartE2EDuration="1m25.546140724s" podCreationTimestamp="2025-11-29 07:39:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:04.545405134 +0000 UTC m=+110.520980934" watchObservedRunningTime="2025-11-29 
07:41:04.546140724 +0000 UTC m=+110.521716534" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.555573 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.555642 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.555653 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.555669 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.555680 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:04Z","lastTransitionTime":"2025-11-29T07:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.556829 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qcmb5" podStartSLOduration=84.556810236 podStartE2EDuration="1m24.556810236s" podCreationTimestamp="2025-11-29 07:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:04.556697893 +0000 UTC m=+110.532273683" watchObservedRunningTime="2025-11-29 07:41:04.556810236 +0000 UTC m=+110.532386026" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.570832 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=81.570816399 podStartE2EDuration="1m21.570816399s" podCreationTimestamp="2025-11-29 07:39:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:04.570635174 +0000 UTC m=+110.546210974" watchObservedRunningTime="2025-11-29 07:41:04.570816399 +0000 UTC m=+110.546392189" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.657821 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.657864 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.657875 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.657892 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.657908 4795 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:04Z","lastTransitionTime":"2025-11-29T07:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.759808 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.759852 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.759861 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.759878 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.759887 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:04Z","lastTransitionTime":"2025-11-29T07:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.862181 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.862223 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.862232 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.862248 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.862261 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:04Z","lastTransitionTime":"2025-11-29T07:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.964432 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.964467 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.964476 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.964488 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:04 crc kubenswrapper[4795]: I1129 07:41:04.964497 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:04Z","lastTransitionTime":"2025-11-29T07:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.066580 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.066629 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.066638 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.066653 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.066662 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:05Z","lastTransitionTime":"2025-11-29T07:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.168417 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.168447 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.168456 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.168470 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.168494 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:05Z","lastTransitionTime":"2025-11-29T07:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.270352 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.270392 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.270403 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.270416 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.270428 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:05Z","lastTransitionTime":"2025-11-29T07:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.365202 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.365263 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.365278 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.365297 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.365311 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:05Z","lastTransitionTime":"2025-11-29T07:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.382830 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.382874 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.382888 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.382907 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.382921 4795 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:41:05Z","lastTransitionTime":"2025-11-29T07:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.410185 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-8sp2m"] Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.410692 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8sp2m" Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.413173 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.413302 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.413416 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.413896 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.533972 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c7bff8ad-7d8b-4241-b33d-1aa078f6bbae-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8sp2m\" (UID: \"c7bff8ad-7d8b-4241-b33d-1aa078f6bbae\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8sp2m" Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.534059 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7bff8ad-7d8b-4241-b33d-1aa078f6bbae-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8sp2m\" (UID: \"c7bff8ad-7d8b-4241-b33d-1aa078f6bbae\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8sp2m" Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.534101 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/c7bff8ad-7d8b-4241-b33d-1aa078f6bbae-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8sp2m\" (UID: \"c7bff8ad-7d8b-4241-b33d-1aa078f6bbae\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8sp2m" Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.534144 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c7bff8ad-7d8b-4241-b33d-1aa078f6bbae-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8sp2m\" (UID: \"c7bff8ad-7d8b-4241-b33d-1aa078f6bbae\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8sp2m" Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.534280 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7bff8ad-7d8b-4241-b33d-1aa078f6bbae-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8sp2m\" (UID: \"c7bff8ad-7d8b-4241-b33d-1aa078f6bbae\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8sp2m" Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.635834 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7bff8ad-7d8b-4241-b33d-1aa078f6bbae-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8sp2m\" (UID: \"c7bff8ad-7d8b-4241-b33d-1aa078f6bbae\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8sp2m" Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.635889 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c7bff8ad-7d8b-4241-b33d-1aa078f6bbae-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8sp2m\" (UID: \"c7bff8ad-7d8b-4241-b33d-1aa078f6bbae\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8sp2m" Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.635915 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7bff8ad-7d8b-4241-b33d-1aa078f6bbae-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8sp2m\" (UID: \"c7bff8ad-7d8b-4241-b33d-1aa078f6bbae\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8sp2m" Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.635944 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c7bff8ad-7d8b-4241-b33d-1aa078f6bbae-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8sp2m\" (UID: \"c7bff8ad-7d8b-4241-b33d-1aa078f6bbae\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8sp2m" Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.635982 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c7bff8ad-7d8b-4241-b33d-1aa078f6bbae-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8sp2m\" (UID: \"c7bff8ad-7d8b-4241-b33d-1aa078f6bbae\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8sp2m" Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.636074 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c7bff8ad-7d8b-4241-b33d-1aa078f6bbae-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8sp2m\" (UID: \"c7bff8ad-7d8b-4241-b33d-1aa078f6bbae\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8sp2m" Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.636065 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/c7bff8ad-7d8b-4241-b33d-1aa078f6bbae-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8sp2m\" (UID: \"c7bff8ad-7d8b-4241-b33d-1aa078f6bbae\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8sp2m" Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.636825 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c7bff8ad-7d8b-4241-b33d-1aa078f6bbae-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8sp2m\" (UID: \"c7bff8ad-7d8b-4241-b33d-1aa078f6bbae\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8sp2m" Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.646226 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7bff8ad-7d8b-4241-b33d-1aa078f6bbae-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8sp2m\" (UID: \"c7bff8ad-7d8b-4241-b33d-1aa078f6bbae\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8sp2m" Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.654335 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7bff8ad-7d8b-4241-b33d-1aa078f6bbae-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8sp2m\" (UID: \"c7bff8ad-7d8b-4241-b33d-1aa078f6bbae\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8sp2m" Nov 29 07:41:05 crc kubenswrapper[4795]: I1129 07:41:05.728763 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8sp2m" Nov 29 07:41:06 crc kubenswrapper[4795]: I1129 07:41:06.275406 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:41:06 crc kubenswrapper[4795]: I1129 07:41:06.275488 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:41:06 crc kubenswrapper[4795]: E1129 07:41:06.275542 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:41:06 crc kubenswrapper[4795]: I1129 07:41:06.275643 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:41:06 crc kubenswrapper[4795]: E1129 07:41:06.275852 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:41:06 crc kubenswrapper[4795]: I1129 07:41:06.275981 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:41:06 crc kubenswrapper[4795]: E1129 07:41:06.276137 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:41:06 crc kubenswrapper[4795]: E1129 07:41:06.276261 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:41:06 crc kubenswrapper[4795]: I1129 07:41:06.492693 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8sp2m" event={"ID":"c7bff8ad-7d8b-4241-b33d-1aa078f6bbae","Type":"ContainerStarted","Data":"d33269be8b7771e579327ab51b5721fb933a592fc1843161b7f8e619a3001122"} Nov 29 07:41:06 crc kubenswrapper[4795]: I1129 07:41:06.492732 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8sp2m" event={"ID":"c7bff8ad-7d8b-4241-b33d-1aa078f6bbae","Type":"ContainerStarted","Data":"2f098e60548a79b328676802750094b0cc8c5319f1af77cc2faab56694ecfa8f"} Nov 29 07:41:08 crc kubenswrapper[4795]: I1129 07:41:08.275344 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:41:08 crc kubenswrapper[4795]: I1129 07:41:08.275395 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:41:08 crc kubenswrapper[4795]: I1129 07:41:08.275458 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:41:08 crc kubenswrapper[4795]: I1129 07:41:08.275519 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:41:08 crc kubenswrapper[4795]: E1129 07:41:08.275532 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:41:08 crc kubenswrapper[4795]: E1129 07:41:08.275607 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:41:08 crc kubenswrapper[4795]: E1129 07:41:08.275682 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:41:08 crc kubenswrapper[4795]: E1129 07:41:08.275777 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:41:10 crc kubenswrapper[4795]: I1129 07:41:10.275126 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:41:10 crc kubenswrapper[4795]: I1129 07:41:10.275187 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:41:10 crc kubenswrapper[4795]: E1129 07:41:10.275285 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:41:10 crc kubenswrapper[4795]: I1129 07:41:10.275330 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:41:10 crc kubenswrapper[4795]: I1129 07:41:10.275404 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:41:10 crc kubenswrapper[4795]: E1129 07:41:10.275500 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:41:10 crc kubenswrapper[4795]: E1129 07:41:10.275665 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:41:10 crc kubenswrapper[4795]: E1129 07:41:10.275736 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:41:12 crc kubenswrapper[4795]: I1129 07:41:12.274883 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:41:12 crc kubenswrapper[4795]: I1129 07:41:12.274957 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:41:12 crc kubenswrapper[4795]: I1129 07:41:12.275000 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:41:12 crc kubenswrapper[4795]: I1129 07:41:12.275068 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:41:12 crc kubenswrapper[4795]: E1129 07:41:12.275082 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:41:12 crc kubenswrapper[4795]: E1129 07:41:12.275175 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:41:12 crc kubenswrapper[4795]: E1129 07:41:12.275195 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:41:12 crc kubenswrapper[4795]: E1129 07:41:12.275341 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:41:14 crc kubenswrapper[4795]: E1129 07:41:14.229737 4795 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 29 07:41:14 crc kubenswrapper[4795]: I1129 07:41:14.275189 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:41:14 crc kubenswrapper[4795]: I1129 07:41:14.275238 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:41:14 crc kubenswrapper[4795]: I1129 07:41:14.275307 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:41:14 crc kubenswrapper[4795]: E1129 07:41:14.277287 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:41:14 crc kubenswrapper[4795]: I1129 07:41:14.277320 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:41:14 crc kubenswrapper[4795]: E1129 07:41:14.277433 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:41:14 crc kubenswrapper[4795]: E1129 07:41:14.277613 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:41:14 crc kubenswrapper[4795]: E1129 07:41:14.277742 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:41:14 crc kubenswrapper[4795]: E1129 07:41:14.374823 4795 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 29 07:41:16 crc kubenswrapper[4795]: I1129 07:41:16.275406 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:41:16 crc kubenswrapper[4795]: I1129 07:41:16.275445 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:41:16 crc kubenswrapper[4795]: I1129 07:41:16.275545 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:41:16 crc kubenswrapper[4795]: I1129 07:41:16.275583 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:41:16 crc kubenswrapper[4795]: E1129 07:41:16.275623 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:41:16 crc kubenswrapper[4795]: E1129 07:41:16.275756 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:41:16 crc kubenswrapper[4795]: E1129 07:41:16.275841 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:41:16 crc kubenswrapper[4795]: E1129 07:41:16.275908 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:41:18 crc kubenswrapper[4795]: I1129 07:41:18.275140 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:41:18 crc kubenswrapper[4795]: I1129 07:41:18.275165 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:41:18 crc kubenswrapper[4795]: I1129 07:41:18.275165 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:41:18 crc kubenswrapper[4795]: I1129 07:41:18.275221 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:41:18 crc kubenswrapper[4795]: E1129 07:41:18.275303 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:41:18 crc kubenswrapper[4795]: E1129 07:41:18.275490 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:41:18 crc kubenswrapper[4795]: E1129 07:41:18.275742 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:41:18 crc kubenswrapper[4795]: E1129 07:41:18.275835 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:41:18 crc kubenswrapper[4795]: I1129 07:41:18.276357 4795 scope.go:117] "RemoveContainer" containerID="265a8143fe9383f13d2e5d972c3fdd716990d82b30245eeb740f8a0fdb4f089c" Nov 29 07:41:18 crc kubenswrapper[4795]: E1129 07:41:18.276486 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-km2g9_openshift-ovn-kubernetes(3d3ff2b2-cbaa-4309-805a-2b044f867d3a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" Nov 29 07:41:18 crc kubenswrapper[4795]: I1129 07:41:18.536976 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hbg2m_50b9c3ea-4ff5-434f-803c-2365a0938f9a/kube-multus/1.log" Nov 29 07:41:18 crc kubenswrapper[4795]: I1129 07:41:18.537352 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hbg2m_50b9c3ea-4ff5-434f-803c-2365a0938f9a/kube-multus/0.log" Nov 29 07:41:18 crc kubenswrapper[4795]: I1129 07:41:18.537393 4795 generic.go:334] "Generic (PLEG): container finished" podID="50b9c3ea-4ff5-434f-803c-2365a0938f9a" containerID="00a5c9d1a3e5371aa4899c05c4c4eecdbbbe3666777446a5fa82e13837a23050" exitCode=1 Nov 29 07:41:18 crc kubenswrapper[4795]: I1129 07:41:18.537424 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hbg2m" event={"ID":"50b9c3ea-4ff5-434f-803c-2365a0938f9a","Type":"ContainerDied","Data":"00a5c9d1a3e5371aa4899c05c4c4eecdbbbe3666777446a5fa82e13837a23050"} Nov 29 07:41:18 crc kubenswrapper[4795]: I1129 07:41:18.537459 4795 scope.go:117] "RemoveContainer" containerID="46105526d9b0e90d4e14597c7b9c505b01d89a55e947fb378e4587ec265cac89" Nov 29 07:41:18 crc kubenswrapper[4795]: I1129 07:41:18.537940 4795 scope.go:117] 
"RemoveContainer" containerID="00a5c9d1a3e5371aa4899c05c4c4eecdbbbe3666777446a5fa82e13837a23050" Nov 29 07:41:18 crc kubenswrapper[4795]: E1129 07:41:18.538131 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-hbg2m_openshift-multus(50b9c3ea-4ff5-434f-803c-2365a0938f9a)\"" pod="openshift-multus/multus-hbg2m" podUID="50b9c3ea-4ff5-434f-803c-2365a0938f9a" Nov 29 07:41:18 crc kubenswrapper[4795]: I1129 07:41:18.553442 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8sp2m" podStartSLOduration=99.55342391 podStartE2EDuration="1m39.55342391s" podCreationTimestamp="2025-11-29 07:39:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:06.50581626 +0000 UTC m=+112.481392060" watchObservedRunningTime="2025-11-29 07:41:18.55342391 +0000 UTC m=+124.528999700" Nov 29 07:41:19 crc kubenswrapper[4795]: E1129 07:41:19.376437 4795 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 29 07:41:19 crc kubenswrapper[4795]: I1129 07:41:19.543013 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hbg2m_50b9c3ea-4ff5-434f-803c-2365a0938f9a/kube-multus/1.log" Nov 29 07:41:20 crc kubenswrapper[4795]: I1129 07:41:20.275445 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:41:20 crc kubenswrapper[4795]: I1129 07:41:20.275501 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:41:20 crc kubenswrapper[4795]: I1129 07:41:20.275550 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:41:20 crc kubenswrapper[4795]: E1129 07:41:20.275628 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:41:20 crc kubenswrapper[4795]: I1129 07:41:20.275658 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:41:20 crc kubenswrapper[4795]: E1129 07:41:20.275718 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:41:20 crc kubenswrapper[4795]: E1129 07:41:20.275862 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:41:20 crc kubenswrapper[4795]: E1129 07:41:20.275965 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:41:22 crc kubenswrapper[4795]: I1129 07:41:22.275412 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:41:22 crc kubenswrapper[4795]: I1129 07:41:22.275559 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:41:22 crc kubenswrapper[4795]: E1129 07:41:22.275627 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:41:22 crc kubenswrapper[4795]: E1129 07:41:22.275866 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:41:22 crc kubenswrapper[4795]: I1129 07:41:22.276006 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:41:22 crc kubenswrapper[4795]: E1129 07:41:22.276262 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:41:22 crc kubenswrapper[4795]: I1129 07:41:22.276429 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:41:22 crc kubenswrapper[4795]: E1129 07:41:22.276657 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:41:24 crc kubenswrapper[4795]: I1129 07:41:24.274821 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:41:24 crc kubenswrapper[4795]: I1129 07:41:24.274924 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:41:24 crc kubenswrapper[4795]: I1129 07:41:24.275029 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:41:24 crc kubenswrapper[4795]: I1129 07:41:24.277777 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:41:24 crc kubenswrapper[4795]: E1129 07:41:24.277910 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:41:24 crc kubenswrapper[4795]: E1129 07:41:24.277892 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:41:24 crc kubenswrapper[4795]: E1129 07:41:24.277990 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:41:24 crc kubenswrapper[4795]: E1129 07:41:24.278157 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:41:24 crc kubenswrapper[4795]: E1129 07:41:24.377244 4795 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 29 07:41:26 crc kubenswrapper[4795]: I1129 07:41:26.274974 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:41:26 crc kubenswrapper[4795]: I1129 07:41:26.275036 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:41:26 crc kubenswrapper[4795]: I1129 07:41:26.275126 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:41:26 crc kubenswrapper[4795]: E1129 07:41:26.275204 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:41:26 crc kubenswrapper[4795]: I1129 07:41:26.275288 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:41:26 crc kubenswrapper[4795]: E1129 07:41:26.275467 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:41:26 crc kubenswrapper[4795]: E1129 07:41:26.275703 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:41:26 crc kubenswrapper[4795]: E1129 07:41:26.275792 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:41:28 crc kubenswrapper[4795]: I1129 07:41:28.274978 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:41:28 crc kubenswrapper[4795]: I1129 07:41:28.275056 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:41:28 crc kubenswrapper[4795]: I1129 07:41:28.275160 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:41:28 crc kubenswrapper[4795]: E1129 07:41:28.275154 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:41:28 crc kubenswrapper[4795]: I1129 07:41:28.275192 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:41:28 crc kubenswrapper[4795]: E1129 07:41:28.275270 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:41:28 crc kubenswrapper[4795]: E1129 07:41:28.275307 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:41:28 crc kubenswrapper[4795]: E1129 07:41:28.275428 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:41:29 crc kubenswrapper[4795]: I1129 07:41:29.276071 4795 scope.go:117] "RemoveContainer" containerID="265a8143fe9383f13d2e5d972c3fdd716990d82b30245eeb740f8a0fdb4f089c" Nov 29 07:41:29 crc kubenswrapper[4795]: E1129 07:41:29.378297 4795 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Nov 29 07:41:29 crc kubenswrapper[4795]: I1129 07:41:29.581935 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-km2g9_3d3ff2b2-cbaa-4309-805a-2b044f867d3a/ovnkube-controller/3.log" Nov 29 07:41:29 crc kubenswrapper[4795]: I1129 07:41:29.584456 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" event={"ID":"3d3ff2b2-cbaa-4309-805a-2b044f867d3a","Type":"ContainerStarted","Data":"c590287cbefc95d9033e3848de009232747de0853c936fe3696973b2f36cd332"} Nov 29 07:41:29 crc kubenswrapper[4795]: I1129 07:41:29.584786 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:41:29 crc kubenswrapper[4795]: I1129 07:41:29.615270 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" podStartSLOduration=109.615255043 podStartE2EDuration="1m49.615255043s" podCreationTimestamp="2025-11-29 07:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:29.61478246 +0000 UTC m=+135.590358250" watchObservedRunningTime="2025-11-29 07:41:29.615255043 +0000 UTC m=+135.590830833" Nov 29 07:41:30 crc kubenswrapper[4795]: I1129 07:41:30.135438 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bvmzq"] Nov 29 07:41:30 crc kubenswrapper[4795]: I1129 07:41:30.135545 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:41:30 crc kubenswrapper[4795]: E1129 07:41:30.135664 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:41:30 crc kubenswrapper[4795]: I1129 07:41:30.274810 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:41:30 crc kubenswrapper[4795]: E1129 07:41:30.274927 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:41:30 crc kubenswrapper[4795]: I1129 07:41:30.274810 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:41:30 crc kubenswrapper[4795]: E1129 07:41:30.274989 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:41:30 crc kubenswrapper[4795]: I1129 07:41:30.274810 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:41:30 crc kubenswrapper[4795]: E1129 07:41:30.275030 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:41:32 crc kubenswrapper[4795]: I1129 07:41:32.275665 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:41:32 crc kubenswrapper[4795]: I1129 07:41:32.275698 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:41:32 crc kubenswrapper[4795]: I1129 07:41:32.275759 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:41:32 crc kubenswrapper[4795]: I1129 07:41:32.275772 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:41:32 crc kubenswrapper[4795]: E1129 07:41:32.275795 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:41:32 crc kubenswrapper[4795]: E1129 07:41:32.275888 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:41:32 crc kubenswrapper[4795]: E1129 07:41:32.275943 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:41:32 crc kubenswrapper[4795]: I1129 07:41:32.276215 4795 scope.go:117] "RemoveContainer" containerID="00a5c9d1a3e5371aa4899c05c4c4eecdbbbe3666777446a5fa82e13837a23050" Nov 29 07:41:32 crc kubenswrapper[4795]: E1129 07:41:32.276258 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:41:33 crc kubenswrapper[4795]: I1129 07:41:33.599569 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hbg2m_50b9c3ea-4ff5-434f-803c-2365a0938f9a/kube-multus/1.log" Nov 29 07:41:33 crc kubenswrapper[4795]: I1129 07:41:33.599964 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hbg2m" event={"ID":"50b9c3ea-4ff5-434f-803c-2365a0938f9a","Type":"ContainerStarted","Data":"7149d6e745acac3b4c7e1d322eb2093244224711537fd12d0f91070dfdf31932"} Nov 29 07:41:34 crc kubenswrapper[4795]: I1129 07:41:34.274628 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:41:34 crc kubenswrapper[4795]: E1129 07:41:34.274744 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:41:34 crc kubenswrapper[4795]: I1129 07:41:34.274828 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:41:34 crc kubenswrapper[4795]: I1129 07:41:34.274828 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:41:34 crc kubenswrapper[4795]: I1129 07:41:34.274913 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:41:34 crc kubenswrapper[4795]: E1129 07:41:34.275895 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvmzq" podUID="9be66670-47c2-4d05-bf3d-59ae6f4ff53b" Nov 29 07:41:34 crc kubenswrapper[4795]: E1129 07:41:34.275968 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:41:34 crc kubenswrapper[4795]: E1129 07:41:34.276032 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.882207 4795 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.934609 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-lgd6b"] Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.935345 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-lgd6b" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.936316 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-sjwfz"] Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.936645 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-sjwfz" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.939417 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-xdslc"] Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.939829 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xdslc" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.941342 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6w84b"] Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.942069 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6w84b" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.943394 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfx9d"] Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.944276 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfx9d" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.947404 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-v4fx7"] Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.948053 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-v4fx7" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.948911 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.949062 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gxss8"] Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.949486 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-gxss8" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.950725 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qbnb4"] Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.951570 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qbnb4" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.961956 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.962501 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.962523 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.962748 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.962863 4795 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jttv5"] Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.962997 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.963233 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.963352 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.963436 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.963438 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jttv5" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.963522 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.963563 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.963691 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.967872 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.967900 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/84c50c19-d82f-444f-9558-6f9932e3ff86-client-ca\") pod \"route-controller-manager-6576b87f9c-kfx9d\" (UID: \"84c50c19-d82f-444f-9558-6f9932e3ff86\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfx9d" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.967938 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.967952 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/725af35a-cc1c-4178-ae7f-e909af583a5f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-v4fx7\" (UID: \"725af35a-cc1c-4178-ae7f-e909af583a5f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-v4fx7" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.967938 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.968007 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.968010 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b6996662-d230-4299-b913-b6bc38c50ef5-audit\") pod \"apiserver-76f77b778f-lgd6b\" (UID: \"b6996662-d230-4299-b913-b6bc38c50ef5\") " pod="openshift-apiserver/apiserver-76f77b778f-lgd6b" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.984718 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6996662-d230-4299-b913-b6bc38c50ef5-trusted-ca-bundle\") pod \"apiserver-76f77b778f-lgd6b\" 
(UID: \"b6996662-d230-4299-b913-b6bc38c50ef5\") " pod="openshift-apiserver/apiserver-76f77b778f-lgd6b" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.984783 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84c50c19-d82f-444f-9558-6f9932e3ff86-serving-cert\") pod \"route-controller-manager-6576b87f9c-kfx9d\" (UID: \"84c50c19-d82f-444f-9558-6f9932e3ff86\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfx9d" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.984829 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b6996662-d230-4299-b913-b6bc38c50ef5-encryption-config\") pod \"apiserver-76f77b778f-lgd6b\" (UID: \"b6996662-d230-4299-b913-b6bc38c50ef5\") " pod="openshift-apiserver/apiserver-76f77b778f-lgd6b" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.984866 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/099aeef3-9f86-47a1-bc18-3784b8e87bcd-serving-cert\") pod \"controller-manager-879f6c89f-sjwfz\" (UID: \"099aeef3-9f86-47a1-bc18-3784b8e87bcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sjwfz" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.984895 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd-encryption-config\") pod \"apiserver-7bbb656c7d-6w84b\" (UID: \"3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6w84b" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.984924 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/84c50c19-d82f-444f-9558-6f9932e3ff86-config\") pod \"route-controller-manager-6576b87f9c-kfx9d\" (UID: \"84c50c19-d82f-444f-9558-6f9932e3ff86\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfx9d" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.984954 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4b4r\" (UniqueName: \"kubernetes.io/projected/42062f93-4804-4818-95b5-2b6b3225c433-kube-api-access-f4b4r\") pod \"machine-approver-56656f9798-xdslc\" (UID: \"42062f93-4804-4818-95b5-2b6b3225c433\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xdslc" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.984982 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vqf7\" (UniqueName: \"kubernetes.io/projected/edf98afb-2fd4-45a3-afcf-2e7c6a50c5fc-kube-api-access-2vqf7\") pod \"openshift-apiserver-operator-796bbdcf4f-qbnb4\" (UID: \"edf98afb-2fd4-45a3-afcf-2e7c6a50c5fc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qbnb4" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.985013 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd-audit-policies\") pod \"apiserver-7bbb656c7d-6w84b\" (UID: \"3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6w84b" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.985042 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/42062f93-4804-4818-95b5-2b6b3225c433-machine-approver-tls\") pod \"machine-approver-56656f9798-xdslc\" (UID: 
\"42062f93-4804-4818-95b5-2b6b3225c433\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xdslc" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.985070 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0577cd40-0974-4c45-8be4-9458e712c6e5-serving-cert\") pod \"authentication-operator-69f744f599-gxss8\" (UID: \"0577cd40-0974-4c45-8be4-9458e712c6e5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxss8" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.985097 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6996662-d230-4299-b913-b6bc38c50ef5-config\") pod \"apiserver-76f77b778f-lgd6b\" (UID: \"b6996662-d230-4299-b913-b6bc38c50ef5\") " pod="openshift-apiserver/apiserver-76f77b778f-lgd6b" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.985126 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df5wx\" (UniqueName: \"kubernetes.io/projected/b6996662-d230-4299-b913-b6bc38c50ef5-kube-api-access-df5wx\") pod \"apiserver-76f77b778f-lgd6b\" (UID: \"b6996662-d230-4299-b913-b6bc38c50ef5\") " pod="openshift-apiserver/apiserver-76f77b778f-lgd6b" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.985181 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edf98afb-2fd4-45a3-afcf-2e7c6a50c5fc-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qbnb4\" (UID: \"edf98afb-2fd4-45a3-afcf-2e7c6a50c5fc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qbnb4" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.985211 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6w84b\" (UID: \"3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6w84b" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.985232 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd-audit-dir\") pod \"apiserver-7bbb656c7d-6w84b\" (UID: \"3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6w84b" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.985253 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42062f93-4804-4818-95b5-2b6b3225c433-config\") pod \"machine-approver-56656f9798-xdslc\" (UID: \"42062f93-4804-4818-95b5-2b6b3225c433\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xdslc" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.985280 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/099aeef3-9f86-47a1-bc18-3784b8e87bcd-config\") pod \"controller-manager-879f6c89f-sjwfz\" (UID: \"099aeef3-9f86-47a1-bc18-3784b8e87bcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sjwfz" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.985299 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/725af35a-cc1c-4178-ae7f-e909af583a5f-images\") pod \"machine-api-operator-5694c8668f-v4fx7\" (UID: \"725af35a-cc1c-4178-ae7f-e909af583a5f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-v4fx7" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 
07:41:35.985337 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/42062f93-4804-4818-95b5-2b6b3225c433-auth-proxy-config\") pod \"machine-approver-56656f9798-xdslc\" (UID: \"42062f93-4804-4818-95b5-2b6b3225c433\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xdslc" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.985513 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd-etcd-client\") pod \"apiserver-7bbb656c7d-6w84b\" (UID: \"3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6w84b" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.985705 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.985823 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.985842 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.985943 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.985958 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.986036 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.985824 4795 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.986324 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.986541 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.986543 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.987182 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.987250 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.987260 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.987419 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.987761 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.987773 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.987852 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 29 07:41:35 crc 
kubenswrapper[4795]: I1129 07:41:35.987916 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.987960 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.987984 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.988045 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.988073 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.988121 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Nov 29 07:41:35 crc kubenswrapper[4795]: I1129 07:41:35.988669 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.003824 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.004079 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.004160 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.004756 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.004851 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.004929 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.005649 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.005764 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.006082 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.006117 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.006218 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:35.989015 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/099aeef3-9f86-47a1-bc18-3784b8e87bcd-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-sjwfz\" (UID: \"099aeef3-9f86-47a1-bc18-3784b8e87bcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sjwfz"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.006358 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b6996662-d230-4299-b913-b6bc38c50ef5-etcd-serving-ca\") pod \"apiserver-76f77b778f-lgd6b\" (UID: \"b6996662-d230-4299-b913-b6bc38c50ef5\") " pod="openshift-apiserver/apiserver-76f77b778f-lgd6b"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.006384 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7v8x\" (UniqueName: \"kubernetes.io/projected/84c50c19-d82f-444f-9558-6f9932e3ff86-kube-api-access-f7v8x\") pod \"route-controller-manager-6576b87f9c-kfx9d\" (UID: \"84c50c19-d82f-444f-9558-6f9932e3ff86\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfx9d"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.006404 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0577cd40-0974-4c45-8be4-9458e712c6e5-config\") pod \"authentication-operator-69f744f599-gxss8\" (UID: \"0577cd40-0974-4c45-8be4-9458e712c6e5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxss8"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.006424 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b6996662-d230-4299-b913-b6bc38c50ef5-image-import-ca\") pod \"apiserver-76f77b778f-lgd6b\" (UID: \"b6996662-d230-4299-b913-b6bc38c50ef5\") " pod="openshift-apiserver/apiserver-76f77b778f-lgd6b"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.006438 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6996662-d230-4299-b913-b6bc38c50ef5-serving-cert\") pod \"apiserver-76f77b778f-lgd6b\" (UID: \"b6996662-d230-4299-b913-b6bc38c50ef5\") " pod="openshift-apiserver/apiserver-76f77b778f-lgd6b"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.006469 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztp6z\" (UniqueName: \"kubernetes.io/projected/0577cd40-0974-4c45-8be4-9458e712c6e5-kube-api-access-ztp6z\") pod \"authentication-operator-69f744f599-gxss8\" (UID: \"0577cd40-0974-4c45-8be4-9458e712c6e5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxss8"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.006490 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edf98afb-2fd4-45a3-afcf-2e7c6a50c5fc-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qbnb4\" (UID: \"edf98afb-2fd4-45a3-afcf-2e7c6a50c5fc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qbnb4"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.006510 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b6996662-d230-4299-b913-b6bc38c50ef5-node-pullsecrets\") pod \"apiserver-76f77b778f-lgd6b\" (UID: \"b6996662-d230-4299-b913-b6bc38c50ef5\") " pod="openshift-apiserver/apiserver-76f77b778f-lgd6b"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.006532 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/725af35a-cc1c-4178-ae7f-e909af583a5f-config\") pod \"machine-api-operator-5694c8668f-v4fx7\" (UID: \"725af35a-cc1c-4178-ae7f-e909af583a5f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-v4fx7"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.006552 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b6996662-d230-4299-b913-b6bc38c50ef5-audit-dir\") pod \"apiserver-76f77b778f-lgd6b\" (UID: \"b6996662-d230-4299-b913-b6bc38c50ef5\") " pod="openshift-apiserver/apiserver-76f77b778f-lgd6b"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.006575 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdslk\" (UniqueName: \"kubernetes.io/projected/3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd-kube-api-access-rdslk\") pod \"apiserver-7bbb656c7d-6w84b\" (UID: \"3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6w84b"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.006616 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9p6r\" (UniqueName: \"kubernetes.io/projected/725af35a-cc1c-4178-ae7f-e909af583a5f-kube-api-access-f9p6r\") pod \"machine-api-operator-5694c8668f-v4fx7\" (UID: \"725af35a-cc1c-4178-ae7f-e909af583a5f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-v4fx7"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.006637 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b6996662-d230-4299-b913-b6bc38c50ef5-etcd-client\") pod \"apiserver-76f77b778f-lgd6b\" (UID: \"b6996662-d230-4299-b913-b6bc38c50ef5\") " pod="openshift-apiserver/apiserver-76f77b778f-lgd6b"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.006657 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6w84b\" (UID: \"3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6w84b"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.006686 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdzp2\" (UniqueName: \"kubernetes.io/projected/099aeef3-9f86-47a1-bc18-3784b8e87bcd-kube-api-access-zdzp2\") pod \"controller-manager-879f6c89f-sjwfz\" (UID: \"099aeef3-9f86-47a1-bc18-3784b8e87bcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sjwfz"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.006718 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0577cd40-0974-4c45-8be4-9458e712c6e5-service-ca-bundle\") pod \"authentication-operator-69f744f599-gxss8\" (UID: \"0577cd40-0974-4c45-8be4-9458e712c6e5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxss8"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.006754 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-klnm7"]
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.006781 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd-serving-cert\") pod \"apiserver-7bbb656c7d-6w84b\" (UID: \"3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6w84b"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.006810 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0577cd40-0974-4c45-8be4-9458e712c6e5-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gxss8\" (UID: \"0577cd40-0974-4c45-8be4-9458e712c6e5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxss8"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.006839 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/099aeef3-9f86-47a1-bc18-3784b8e87bcd-client-ca\") pod \"controller-manager-879f6c89f-sjwfz\" (UID: \"099aeef3-9f86-47a1-bc18-3784b8e87bcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sjwfz"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.007266 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-klnm7"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.008179 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-w4g7w"]
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.008673 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-w4g7w"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.011094 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.011196 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.011248 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.011484 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.011581 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.011822 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.011944 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.012852 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qpsg9"]
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.013373 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.013565 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d2wzd"]
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.014149 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d2wzd"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.016043 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mz266"]
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.016561 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-wbcfc"]
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.016902 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mz266"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.017030 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-wbcfc"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.017131 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.017781 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lm9wt"]
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.018212 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.018691 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-sjwfz"]
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.018916 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.020200 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-lgd6b"]
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.021414 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4pdj"]
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.021834 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4pdj"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.028495 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qbnb4"]
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.029309 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nlnvc"]
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.029352 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.029830 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nlnvc"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.030936 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zfqtn"]
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.031178 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.031295 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zfqtn"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.031812 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.032638 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.032797 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.033823 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.034023 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.034487 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.034062 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.034109 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.041779 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.048882 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.050265 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.050357 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.050434 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.050506 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.050579 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.050780 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.050926 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.051006 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.052102 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.052289 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.061248 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.071985 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.072196 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.072307 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.072331 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.072360 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.072458 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.072495 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.072569 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.072608 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.072685 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.076240 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.076371 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.076435 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.076248 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.078061 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.079072 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.079198 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d2wzd"]
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.081309 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.083480 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.084815 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.087460 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.091280 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6w84b"]
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.091335 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qpsg9"]
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.091347 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfx9d"]
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.091375 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-mznng"]
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.091931 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2zjvc"]
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.092285 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-mznng"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.098927 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-lwxtb"]
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.099145 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2zjvc"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.099768 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-692g9"]
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.099771 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.099881 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-lwxtb"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.100197 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.100294 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.100318 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-ddtz8"]
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.100694 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-692g9"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.100714 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5g9td"]
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.100805 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-ddtz8"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.101726 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r76lc"]
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.101995 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5g9td"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.102299 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9bzls"]
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.102384 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r76lc"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.102832 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-l2jzc"]
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.103144 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9bzls"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.103368 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l2jzc"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.103566 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6x84f"]
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.104731 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6swbg"]
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.104923 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6x84f"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.105810 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-l2khx"]
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.106287 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-l2khx"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.108160 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tgqrw"]
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.109530 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tgqrw"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.109795 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztp6z\" (UniqueName: \"kubernetes.io/projected/0577cd40-0974-4c45-8be4-9458e712c6e5-kube-api-access-ztp6z\") pod \"authentication-operator-69f744f599-gxss8\" (UID: \"0577cd40-0974-4c45-8be4-9458e712c6e5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxss8"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.109859 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9b02347e-bd6a-4a77-97d4-d8276d1b6167-console-config\") pod \"console-f9d7485db-w4g7w\" (UID: \"9b02347e-bd6a-4a77-97d4-d8276d1b6167\") " pod="openshift-console/console-f9d7485db-w4g7w"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.109887 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ae857e30-60ec-4e9f-867b-da07f179df65-trusted-ca\") pod \"console-operator-58897d9998-wbcfc\" (UID: \"ae857e30-60ec-4e9f-867b-da07f179df65\") " pod="openshift-console-operator/console-operator-58897d9998-wbcfc"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.109917 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edf98afb-2fd4-45a3-afcf-2e7c6a50c5fc-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qbnb4\" (UID: \"edf98afb-2fd4-45a3-afcf-2e7c6a50c5fc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qbnb4"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.109936 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b6996662-d230-4299-b913-b6bc38c50ef5-node-pullsecrets\") pod \"apiserver-76f77b778f-lgd6b\" (UID: \"b6996662-d230-4299-b913-b6bc38c50ef5\") " pod="openshift-apiserver/apiserver-76f77b778f-lgd6b"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.109965 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/725af35a-cc1c-4178-ae7f-e909af583a5f-config\") pod \"machine-api-operator-5694c8668f-v4fx7\" (UID: \"725af35a-cc1c-4178-ae7f-e909af583a5f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-v4fx7"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.109980 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b6996662-d230-4299-b913-b6bc38c50ef5-audit-dir\") pod \"apiserver-76f77b778f-lgd6b\" (UID: \"b6996662-d230-4299-b913-b6bc38c50ef5\") " pod="openshift-apiserver/apiserver-76f77b778f-lgd6b"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.110000 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-886lh\" (UniqueName: \"kubernetes.io/projected/9b02347e-bd6a-4a77-97d4-d8276d1b6167-kube-api-access-886lh\") pod \"console-f9d7485db-w4g7w\" (UID: \"9b02347e-bd6a-4a77-97d4-d8276d1b6167\") " pod="openshift-console/console-f9d7485db-w4g7w"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.110019 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9p6r\" (UniqueName: \"kubernetes.io/projected/725af35a-cc1c-4178-ae7f-e909af583a5f-kube-api-access-f9p6r\") pod \"machine-api-operator-5694c8668f-v4fx7\" (UID: \"725af35a-cc1c-4178-ae7f-e909af583a5f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-v4fx7"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.110033 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b6996662-d230-4299-b913-b6bc38c50ef5-etcd-client\") pod \"apiserver-76f77b778f-lgd6b\" (UID: \"b6996662-d230-4299-b913-b6bc38c50ef5\") " pod="openshift-apiserver/apiserver-76f77b778f-lgd6b"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.110052 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdslk\" (UniqueName: \"kubernetes.io/projected/3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd-kube-api-access-rdslk\") pod \"apiserver-7bbb656c7d-6w84b\" (UID: \"3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6w84b"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.110069 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6w84b\" (UID: \"3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6w84b"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.110085 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b02347e-bd6a-4a77-97d4-d8276d1b6167-trusted-ca-bundle\") pod \"console-f9d7485db-w4g7w\" (UID: \"9b02347e-bd6a-4a77-97d4-d8276d1b6167\") " pod="openshift-console/console-f9d7485db-w4g7w"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.110104 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdzp2\" (UniqueName: \"kubernetes.io/projected/099aeef3-9f86-47a1-bc18-3784b8e87bcd-kube-api-access-zdzp2\") pod \"controller-manager-879f6c89f-sjwfz\" (UID: \"099aeef3-9f86-47a1-bc18-3784b8e87bcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sjwfz"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.110121 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0577cd40-0974-4c45-8be4-9458e712c6e5-service-ca-bundle\") pod \"authentication-operator-69f744f599-gxss8\" (UID: \"0577cd40-0974-4c45-8be4-9458e712c6e5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxss8"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.110142 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dkcr\" (UniqueName: \"kubernetes.io/projected/1cefe5b8-af1f-4518-a53d-6a0151af7517-kube-api-access-2dkcr\") pod \"openshift-controller-manager-operator-756b6f6bc6-q4pdj\" (UID: \"1cefe5b8-af1f-4518-a53d-6a0151af7517\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4pdj"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.110165 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd-serving-cert\") pod \"apiserver-7bbb656c7d-6w84b\" (UID: \"3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6w84b"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.110181 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0577cd40-0974-4c45-8be4-9458e712c6e5-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gxss8\" (UID: \"0577cd40-0974-4c45-8be4-9458e712c6e5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxss8"
Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.110221 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/099aeef3-9f86-47a1-bc18-3784b8e87bcd-client-ca\") pod 
\"controller-manager-879f6c89f-sjwfz\" (UID: \"099aeef3-9f86-47a1-bc18-3784b8e87bcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sjwfz" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.110238 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84c50c19-d82f-444f-9558-6f9932e3ff86-client-ca\") pod \"route-controller-manager-6576b87f9c-kfx9d\" (UID: \"84c50c19-d82f-444f-9558-6f9932e3ff86\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfx9d" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.110257 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/725af35a-cc1c-4178-ae7f-e909af583a5f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-v4fx7\" (UID: \"725af35a-cc1c-4178-ae7f-e909af583a5f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-v4fx7" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.110279 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/094b43dd-ae40-43b1-824b-a8dd47bc9693-trusted-ca\") pod \"ingress-operator-5b745b69d9-zfqtn\" (UID: \"094b43dd-ae40-43b1-824b-a8dd47bc9693\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zfqtn" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.110299 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b6996662-d230-4299-b913-b6bc38c50ef5-audit\") pod \"apiserver-76f77b778f-lgd6b\" (UID: \"b6996662-d230-4299-b913-b6bc38c50ef5\") " pod="openshift-apiserver/apiserver-76f77b778f-lgd6b" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.110335 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/b6996662-d230-4299-b913-b6bc38c50ef5-trusted-ca-bundle\") pod \"apiserver-76f77b778f-lgd6b\" (UID: \"b6996662-d230-4299-b913-b6bc38c50ef5\") " pod="openshift-apiserver/apiserver-76f77b778f-lgd6b" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.110354 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/768de086-2681-4cee-b71f-a732b317fc64-config\") pod \"kube-apiserver-operator-766d6c64bb-nlnvc\" (UID: \"768de086-2681-4cee-b71f-a732b317fc64\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nlnvc" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.110376 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae857e30-60ec-4e9f-867b-da07f179df65-config\") pod \"console-operator-58897d9998-wbcfc\" (UID: \"ae857e30-60ec-4e9f-867b-da07f179df65\") " pod="openshift-console-operator/console-operator-58897d9998-wbcfc" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.110394 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfn86\" (UniqueName: \"kubernetes.io/projected/f773d9ba-78f9-4b56-8e33-706ee34ad32a-kube-api-access-nfn86\") pod \"openshift-config-operator-7777fb866f-jttv5\" (UID: \"f773d9ba-78f9-4b56-8e33-706ee34ad32a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jttv5" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.110409 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae857e30-60ec-4e9f-867b-da07f179df65-serving-cert\") pod \"console-operator-58897d9998-wbcfc\" (UID: \"ae857e30-60ec-4e9f-867b-da07f179df65\") " pod="openshift-console-operator/console-operator-58897d9998-wbcfc" Nov 29 
07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.110430 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84c50c19-d82f-444f-9558-6f9932e3ff86-serving-cert\") pod \"route-controller-manager-6576b87f9c-kfx9d\" (UID: \"84c50c19-d82f-444f-9558-6f9932e3ff86\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfx9d" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.110446 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b6996662-d230-4299-b913-b6bc38c50ef5-encryption-config\") pod \"apiserver-76f77b778f-lgd6b\" (UID: \"b6996662-d230-4299-b913-b6bc38c50ef5\") " pod="openshift-apiserver/apiserver-76f77b778f-lgd6b" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.110466 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/099aeef3-9f86-47a1-bc18-3784b8e87bcd-serving-cert\") pod \"controller-manager-879f6c89f-sjwfz\" (UID: \"099aeef3-9f86-47a1-bc18-3784b8e87bcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sjwfz" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.110486 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd-encryption-config\") pod \"apiserver-7bbb656c7d-6w84b\" (UID: \"3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6w84b" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.110503 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84c50c19-d82f-444f-9558-6f9932e3ff86-config\") pod \"route-controller-manager-6576b87f9c-kfx9d\" (UID: \"84c50c19-d82f-444f-9558-6f9932e3ff86\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfx9d" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.110525 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f773d9ba-78f9-4b56-8e33-706ee34ad32a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jttv5\" (UID: \"f773d9ba-78f9-4b56-8e33-706ee34ad32a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jttv5" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.110542 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4b4r\" (UniqueName: \"kubernetes.io/projected/42062f93-4804-4818-95b5-2b6b3225c433-kube-api-access-f4b4r\") pod \"machine-approver-56656f9798-xdslc\" (UID: \"42062f93-4804-4818-95b5-2b6b3225c433\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xdslc" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.110560 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vqf7\" (UniqueName: \"kubernetes.io/projected/edf98afb-2fd4-45a3-afcf-2e7c6a50c5fc-kube-api-access-2vqf7\") pod \"openshift-apiserver-operator-796bbdcf4f-qbnb4\" (UID: \"edf98afb-2fd4-45a3-afcf-2e7c6a50c5fc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qbnb4" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.110576 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9b02347e-bd6a-4a77-97d4-d8276d1b6167-console-oauth-config\") pod \"console-f9d7485db-w4g7w\" (UID: \"9b02347e-bd6a-4a77-97d4-d8276d1b6167\") " pod="openshift-console/console-f9d7485db-w4g7w" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.110613 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/42062f93-4804-4818-95b5-2b6b3225c433-machine-approver-tls\") pod \"machine-approver-56656f9798-xdslc\" (UID: \"42062f93-4804-4818-95b5-2b6b3225c433\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xdslc" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.110630 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0577cd40-0974-4c45-8be4-9458e712c6e5-serving-cert\") pod \"authentication-operator-69f744f599-gxss8\" (UID: \"0577cd40-0974-4c45-8be4-9458e712c6e5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxss8" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.110656 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6996662-d230-4299-b913-b6bc38c50ef5-config\") pod \"apiserver-76f77b778f-lgd6b\" (UID: \"b6996662-d230-4299-b913-b6bc38c50ef5\") " pod="openshift-apiserver/apiserver-76f77b778f-lgd6b" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.110674 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd-audit-policies\") pod \"apiserver-7bbb656c7d-6w84b\" (UID: \"3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6w84b" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.110690 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df5wx\" (UniqueName: \"kubernetes.io/projected/b6996662-d230-4299-b913-b6bc38c50ef5-kube-api-access-df5wx\") pod \"apiserver-76f77b778f-lgd6b\" (UID: \"b6996662-d230-4299-b913-b6bc38c50ef5\") " pod="openshift-apiserver/apiserver-76f77b778f-lgd6b" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.110707 4795 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/094b43dd-ae40-43b1-824b-a8dd47bc9693-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zfqtn\" (UID: \"094b43dd-ae40-43b1-824b-a8dd47bc9693\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zfqtn" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.110730 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9b02347e-bd6a-4a77-97d4-d8276d1b6167-service-ca\") pod \"console-f9d7485db-w4g7w\" (UID: \"9b02347e-bd6a-4a77-97d4-d8276d1b6167\") " pod="openshift-console/console-f9d7485db-w4g7w" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.110802 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bvlq\" (UniqueName: \"kubernetes.io/projected/ae857e30-60ec-4e9f-867b-da07f179df65-kube-api-access-8bvlq\") pod \"console-operator-58897d9998-wbcfc\" (UID: \"ae857e30-60ec-4e9f-867b-da07f179df65\") " pod="openshift-console-operator/console-operator-58897d9998-wbcfc" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.111037 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b6996662-d230-4299-b913-b6bc38c50ef5-audit-dir\") pod \"apiserver-76f77b778f-lgd6b\" (UID: \"b6996662-d230-4299-b913-b6bc38c50ef5\") " pod="openshift-apiserver/apiserver-76f77b778f-lgd6b" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.111056 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b6996662-d230-4299-b913-b6bc38c50ef5-node-pullsecrets\") pod \"apiserver-76f77b778f-lgd6b\" (UID: \"b6996662-d230-4299-b913-b6bc38c50ef5\") " pod="openshift-apiserver/apiserver-76f77b778f-lgd6b" Nov 29 07:41:36 
crc kubenswrapper[4795]: I1129 07:41:36.111827 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/725af35a-cc1c-4178-ae7f-e909af583a5f-config\") pod \"machine-api-operator-5694c8668f-v4fx7\" (UID: \"725af35a-cc1c-4178-ae7f-e909af583a5f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-v4fx7" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.111921 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/099aeef3-9f86-47a1-bc18-3784b8e87bcd-client-ca\") pod \"controller-manager-879f6c89f-sjwfz\" (UID: \"099aeef3-9f86-47a1-bc18-3784b8e87bcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sjwfz" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.112041 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84c50c19-d82f-444f-9558-6f9932e3ff86-client-ca\") pod \"route-controller-manager-6576b87f9c-kfx9d\" (UID: \"84c50c19-d82f-444f-9558-6f9932e3ff86\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfx9d" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.110704 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qvn2d"] Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.112743 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0577cd40-0974-4c45-8be4-9458e712c6e5-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gxss8\" (UID: \"0577cd40-0974-4c45-8be4-9458e712c6e5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxss8" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.112339 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/0577cd40-0974-4c45-8be4-9458e712c6e5-service-ca-bundle\") pod \"authentication-operator-69f744f599-gxss8\" (UID: \"0577cd40-0974-4c45-8be4-9458e712c6e5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxss8" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.112878 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6w84b\" (UID: \"3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6w84b" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.112930 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cefe5b8-af1f-4518-a53d-6a0151af7517-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-q4pdj\" (UID: \"1cefe5b8-af1f-4518-a53d-6a0151af7517\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4pdj" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.112962 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edf98afb-2fd4-45a3-afcf-2e7c6a50c5fc-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qbnb4\" (UID: \"edf98afb-2fd4-45a3-afcf-2e7c6a50c5fc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qbnb4" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.112997 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6w84b\" (UID: \"3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6w84b" Nov 29 
07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.113014 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd-audit-dir\") pod \"apiserver-7bbb656c7d-6w84b\" (UID: \"3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6w84b" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.113031 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42062f93-4804-4818-95b5-2b6b3225c433-config\") pod \"machine-approver-56656f9798-xdslc\" (UID: \"42062f93-4804-4818-95b5-2b6b3225c433\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xdslc" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.113054 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d79589e5-c434-4157-8cfd-a51e92aa0c2f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-d2wzd\" (UID: \"d79589e5-c434-4157-8cfd-a51e92aa0c2f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d2wzd" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.113074 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh8p6\" (UniqueName: \"kubernetes.io/projected/892d0338-c59f-481e-8d70-3143d4954f38-kube-api-access-sh8p6\") pod \"downloads-7954f5f757-klnm7\" (UID: \"892d0338-c59f-481e-8d70-3143d4954f38\") " pod="openshift-console/downloads-7954f5f757-klnm7" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.113092 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f773d9ba-78f9-4b56-8e33-706ee34ad32a-serving-cert\") pod 
\"openshift-config-operator-7777fb866f-jttv5\" (UID: \"f773d9ba-78f9-4b56-8e33-706ee34ad32a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jttv5" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.113108 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9b02347e-bd6a-4a77-97d4-d8276d1b6167-oauth-serving-cert\") pod \"console-f9d7485db-w4g7w\" (UID: \"9b02347e-bd6a-4a77-97d4-d8276d1b6167\") " pod="openshift-console/console-f9d7485db-w4g7w" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.113160 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hx8c6"] Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.113539 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hx8c6" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.113779 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qvn2d" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.114836 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edf98afb-2fd4-45a3-afcf-2e7c6a50c5fc-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qbnb4\" (UID: \"edf98afb-2fd4-45a3-afcf-2e7c6a50c5fc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qbnb4" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.114951 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84c50c19-d82f-444f-9558-6f9932e3ff86-config\") pod \"route-controller-manager-6576b87f9c-kfx9d\" (UID: \"84c50c19-d82f-444f-9558-6f9932e3ff86\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfx9d" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.115071 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6w84b\" (UID: \"3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6w84b" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.115176 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd-audit-dir\") pod \"apiserver-7bbb656c7d-6w84b\" (UID: \"3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6w84b" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.115788 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6996662-d230-4299-b913-b6bc38c50ef5-trusted-ca-bundle\") pod 
\"apiserver-76f77b778f-lgd6b\" (UID: \"b6996662-d230-4299-b913-b6bc38c50ef5\") " pod="openshift-apiserver/apiserver-76f77b778f-lgd6b" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.116184 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0577cd40-0974-4c45-8be4-9458e712c6e5-serving-cert\") pod \"authentication-operator-69f744f599-gxss8\" (UID: \"0577cd40-0974-4c45-8be4-9458e712c6e5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxss8" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.116257 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9b02347e-bd6a-4a77-97d4-d8276d1b6167-console-serving-cert\") pod \"console-f9d7485db-w4g7w\" (UID: \"9b02347e-bd6a-4a77-97d4-d8276d1b6167\") " pod="openshift-console/console-f9d7485db-w4g7w" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.116716 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd-audit-policies\") pod \"apiserver-7bbb656c7d-6w84b\" (UID: \"3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6w84b" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.116825 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/099aeef3-9f86-47a1-bc18-3784b8e87bcd-config\") pod \"controller-manager-879f6c89f-sjwfz\" (UID: \"099aeef3-9f86-47a1-bc18-3784b8e87bcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sjwfz" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.116860 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/725af35a-cc1c-4178-ae7f-e909af583a5f-images\") pod \"machine-api-operator-5694c8668f-v4fx7\" (UID: \"725af35a-cc1c-4178-ae7f-e909af583a5f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-v4fx7" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.116943 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/768de086-2681-4cee-b71f-a732b317fc64-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-nlnvc\" (UID: \"768de086-2681-4cee-b71f-a732b317fc64\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nlnvc" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.117210 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b6996662-d230-4299-b913-b6bc38c50ef5-etcd-client\") pod \"apiserver-76f77b778f-lgd6b\" (UID: \"b6996662-d230-4299-b913-b6bc38c50ef5\") " pod="openshift-apiserver/apiserver-76f77b778f-lgd6b" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.117293 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6996662-d230-4299-b913-b6bc38c50ef5-config\") pod \"apiserver-76f77b778f-lgd6b\" (UID: \"b6996662-d230-4299-b913-b6bc38c50ef5\") " pod="openshift-apiserver/apiserver-76f77b778f-lgd6b" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.117467 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42062f93-4804-4818-95b5-2b6b3225c433-config\") pod \"machine-approver-56656f9798-xdslc\" (UID: \"42062f93-4804-4818-95b5-2b6b3225c433\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xdslc" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.117704 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/42062f93-4804-4818-95b5-2b6b3225c433-auth-proxy-config\") pod \"machine-approver-56656f9798-xdslc\" (UID: \"42062f93-4804-4818-95b5-2b6b3225c433\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xdslc" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.117705 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-6swbg" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.117785 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/725af35a-cc1c-4178-ae7f-e909af583a5f-images\") pod \"machine-api-operator-5694c8668f-v4fx7\" (UID: \"725af35a-cc1c-4178-ae7f-e909af583a5f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-v4fx7" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.117844 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lst97\" (UniqueName: \"kubernetes.io/projected/094b43dd-ae40-43b1-824b-a8dd47bc9693-kube-api-access-lst97\") pod \"ingress-operator-5b745b69d9-zfqtn\" (UID: \"094b43dd-ae40-43b1-824b-a8dd47bc9693\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zfqtn" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.124756 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b6996662-d230-4299-b913-b6bc38c50ef5-audit\") pod \"apiserver-76f77b778f-lgd6b\" (UID: \"b6996662-d230-4299-b913-b6bc38c50ef5\") " pod="openshift-apiserver/apiserver-76f77b778f-lgd6b" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.125404 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/42062f93-4804-4818-95b5-2b6b3225c433-auth-proxy-config\") pod \"machine-approver-56656f9798-xdslc\" (UID: 
\"42062f93-4804-4818-95b5-2b6b3225c433\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xdslc" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.126137 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84c50c19-d82f-444f-9558-6f9932e3ff86-serving-cert\") pod \"route-controller-manager-6576b87f9c-kfx9d\" (UID: \"84c50c19-d82f-444f-9558-6f9932e3ff86\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfx9d" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.126461 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd-serving-cert\") pod \"apiserver-7bbb656c7d-6w84b\" (UID: \"3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6w84b" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.126682 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b6996662-d230-4299-b913-b6bc38c50ef5-encryption-config\") pod \"apiserver-76f77b778f-lgd6b\" (UID: \"b6996662-d230-4299-b913-b6bc38c50ef5\") " pod="openshift-apiserver/apiserver-76f77b778f-lgd6b" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.127117 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/099aeef3-9f86-47a1-bc18-3784b8e87bcd-serving-cert\") pod \"controller-manager-879f6c89f-sjwfz\" (UID: \"099aeef3-9f86-47a1-bc18-3784b8e87bcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sjwfz" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.127137 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/768de086-2681-4cee-b71f-a732b317fc64-serving-cert\") pod 
\"kube-apiserver-operator-766d6c64bb-nlnvc\" (UID: \"768de086-2681-4cee-b71f-a732b317fc64\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nlnvc" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.127190 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd-etcd-client\") pod \"apiserver-7bbb656c7d-6w84b\" (UID: \"3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6w84b" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.127233 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czxcz\" (UniqueName: \"kubernetes.io/projected/d79589e5-c434-4157-8cfd-a51e92aa0c2f-kube-api-access-czxcz\") pod \"cluster-samples-operator-665b6dd947-d2wzd\" (UID: \"d79589e5-c434-4157-8cfd-a51e92aa0c2f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d2wzd" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.127266 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cefe5b8-af1f-4518-a53d-6a0151af7517-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-q4pdj\" (UID: \"1cefe5b8-af1f-4518-a53d-6a0151af7517\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4pdj" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.127302 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b6996662-d230-4299-b913-b6bc38c50ef5-etcd-serving-ca\") pod \"apiserver-76f77b778f-lgd6b\" (UID: \"b6996662-d230-4299-b913-b6bc38c50ef5\") " pod="openshift-apiserver/apiserver-76f77b778f-lgd6b" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.127357 4795 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/099aeef3-9f86-47a1-bc18-3784b8e87bcd-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-sjwfz\" (UID: \"099aeef3-9f86-47a1-bc18-3784b8e87bcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sjwfz" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.127448 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/094b43dd-ae40-43b1-824b-a8dd47bc9693-metrics-tls\") pod \"ingress-operator-5b745b69d9-zfqtn\" (UID: \"094b43dd-ae40-43b1-824b-a8dd47bc9693\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zfqtn" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.127699 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/42062f93-4804-4818-95b5-2b6b3225c433-machine-approver-tls\") pod \"machine-approver-56656f9798-xdslc\" (UID: \"42062f93-4804-4818-95b5-2b6b3225c433\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xdslc" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.127732 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7v8x\" (UniqueName: \"kubernetes.io/projected/84c50c19-d82f-444f-9558-6f9932e3ff86-kube-api-access-f7v8x\") pod \"route-controller-manager-6576b87f9c-kfx9d\" (UID: \"84c50c19-d82f-444f-9558-6f9932e3ff86\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfx9d" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.127773 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0577cd40-0974-4c45-8be4-9458e712c6e5-config\") pod \"authentication-operator-69f744f599-gxss8\" (UID: \"0577cd40-0974-4c45-8be4-9458e712c6e5\") 
" pod="openshift-authentication-operator/authentication-operator-69f744f599-gxss8" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.127806 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b6996662-d230-4299-b913-b6bc38c50ef5-image-import-ca\") pod \"apiserver-76f77b778f-lgd6b\" (UID: \"b6996662-d230-4299-b913-b6bc38c50ef5\") " pod="openshift-apiserver/apiserver-76f77b778f-lgd6b" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.127832 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6996662-d230-4299-b913-b6bc38c50ef5-serving-cert\") pod \"apiserver-76f77b778f-lgd6b\" (UID: \"b6996662-d230-4299-b913-b6bc38c50ef5\") " pod="openshift-apiserver/apiserver-76f77b778f-lgd6b" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.128832 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd-encryption-config\") pod \"apiserver-7bbb656c7d-6w84b\" (UID: \"3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6w84b" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.129360 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b6996662-d230-4299-b913-b6bc38c50ef5-etcd-serving-ca\") pod \"apiserver-76f77b778f-lgd6b\" (UID: \"b6996662-d230-4299-b913-b6bc38c50ef5\") " pod="openshift-apiserver/apiserver-76f77b778f-lgd6b" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.133508 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bf7sx"] Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.135183 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/0577cd40-0974-4c45-8be4-9458e712c6e5-config\") pod \"authentication-operator-69f744f599-gxss8\" (UID: \"0577cd40-0974-4c45-8be4-9458e712c6e5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxss8" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.135292 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-blw2q"] Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.135458 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6996662-d230-4299-b913-b6bc38c50ef5-serving-cert\") pod \"apiserver-76f77b778f-lgd6b\" (UID: \"b6996662-d230-4299-b913-b6bc38c50ef5\") " pod="openshift-apiserver/apiserver-76f77b778f-lgd6b" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.135618 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.135956 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/099aeef3-9f86-47a1-bc18-3784b8e87bcd-config\") pod \"controller-manager-879f6c89f-sjwfz\" (UID: \"099aeef3-9f86-47a1-bc18-3784b8e87bcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sjwfz" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.135978 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b6996662-d230-4299-b913-b6bc38c50ef5-image-import-ca\") pod \"apiserver-76f77b778f-lgd6b\" (UID: \"b6996662-d230-4299-b913-b6bc38c50ef5\") " pod="openshift-apiserver/apiserver-76f77b778f-lgd6b" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.136015 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/099aeef3-9f86-47a1-bc18-3784b8e87bcd-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-sjwfz\" (UID: \"099aeef3-9f86-47a1-bc18-3784b8e87bcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sjwfz" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.136281 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bf7sx" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.136747 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/725af35a-cc1c-4178-ae7f-e909af583a5f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-v4fx7\" (UID: \"725af35a-cc1c-4178-ae7f-e909af583a5f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-v4fx7" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.137400 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.137810 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t6scb"] Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.139055 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-blw2q" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.140273 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edf98afb-2fd4-45a3-afcf-2e7c6a50c5fc-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qbnb4\" (UID: \"edf98afb-2fd4-45a3-afcf-2e7c6a50c5fc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qbnb4" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.140753 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t6scb" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.141218 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-d7qg2"] Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.142487 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-d7qg2" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.145994 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406690-dtj9t"] Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.147187 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-wbcfc"] Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.147310 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gxss8"] Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.147499 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-dtj9t" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.158578 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4pdj"] Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.158664 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mz266"] Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.165312 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.165633 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd-etcd-client\") pod \"apiserver-7bbb656c7d-6w84b\" (UID: \"3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6w84b" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.165403 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nlnvc"] Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.167809 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-v4fx7"] Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.171854 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-lwxtb"] Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.178024 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.178812 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-console/console-f9d7485db-w4g7w"] Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.179814 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jttv5"] Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.181805 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9bzls"] Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.182967 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zfqtn"] Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.183486 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5g9td"] Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.184499 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r76lc"] Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.187039 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-mznng"] Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.187962 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-klnm7"] Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.189164 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6swbg"] Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.190223 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t6scb"] Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.191262 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-l2jzc"] Nov 29 07:41:36 
crc kubenswrapper[4795]: I1129 07:41:36.192235 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-blw2q"] Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.193344 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-2fgjg"] Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.194032 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2fgjg" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.194489 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-twmws"] Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.194945 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-twmws" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.195644 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-ddtz8"] Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.196638 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hx8c6"] Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.197765 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-d7qg2"] Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.197998 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.198816 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lm9wt"] Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.199817 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2zjvc"] Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.201359 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6x84f"] Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.202531 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-l2khx"] Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.203680 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qvn2d"] Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.204859 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tgqrw"] Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.206069 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bf7sx"] Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.207066 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406690-dtj9t"] Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.208277 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-twmws"] Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.209389 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4jk72"] Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.210279 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-4jk72" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.210570 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4jk72"] Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.218052 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.228723 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czxcz\" (UniqueName: \"kubernetes.io/projected/d79589e5-c434-4157-8cfd-a51e92aa0c2f-kube-api-access-czxcz\") pod \"cluster-samples-operator-665b6dd947-d2wzd\" (UID: \"d79589e5-c434-4157-8cfd-a51e92aa0c2f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d2wzd" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.228753 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cefe5b8-af1f-4518-a53d-6a0151af7517-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-q4pdj\" (UID: \"1cefe5b8-af1f-4518-a53d-6a0151af7517\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4pdj" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.228783 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/094b43dd-ae40-43b1-824b-a8dd47bc9693-metrics-tls\") pod \"ingress-operator-5b745b69d9-zfqtn\" (UID: \"094b43dd-ae40-43b1-824b-a8dd47bc9693\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zfqtn" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.228822 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/9b02347e-bd6a-4a77-97d4-d8276d1b6167-console-config\") pod \"console-f9d7485db-w4g7w\" (UID: \"9b02347e-bd6a-4a77-97d4-d8276d1b6167\") " pod="openshift-console/console-f9d7485db-w4g7w" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.228838 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ae857e30-60ec-4e9f-867b-da07f179df65-trusted-ca\") pod \"console-operator-58897d9998-wbcfc\" (UID: \"ae857e30-60ec-4e9f-867b-da07f179df65\") " pod="openshift-console-operator/console-operator-58897d9998-wbcfc" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.228854 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-886lh\" (UniqueName: \"kubernetes.io/projected/9b02347e-bd6a-4a77-97d4-d8276d1b6167-kube-api-access-886lh\") pod \"console-f9d7485db-w4g7w\" (UID: \"9b02347e-bd6a-4a77-97d4-d8276d1b6167\") " pod="openshift-console/console-f9d7485db-w4g7w" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.228881 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b02347e-bd6a-4a77-97d4-d8276d1b6167-trusted-ca-bundle\") pod \"console-f9d7485db-w4g7w\" (UID: \"9b02347e-bd6a-4a77-97d4-d8276d1b6167\") " pod="openshift-console/console-f9d7485db-w4g7w" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.228899 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dkcr\" (UniqueName: \"kubernetes.io/projected/1cefe5b8-af1f-4518-a53d-6a0151af7517-kube-api-access-2dkcr\") pod \"openshift-controller-manager-operator-756b6f6bc6-q4pdj\" (UID: \"1cefe5b8-af1f-4518-a53d-6a0151af7517\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4pdj" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.228926 4795 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/094b43dd-ae40-43b1-824b-a8dd47bc9693-trusted-ca\") pod \"ingress-operator-5b745b69d9-zfqtn\" (UID: \"094b43dd-ae40-43b1-824b-a8dd47bc9693\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zfqtn" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.228951 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/768de086-2681-4cee-b71f-a732b317fc64-config\") pod \"kube-apiserver-operator-766d6c64bb-nlnvc\" (UID: \"768de086-2681-4cee-b71f-a732b317fc64\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nlnvc" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.228970 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae857e30-60ec-4e9f-867b-da07f179df65-config\") pod \"console-operator-58897d9998-wbcfc\" (UID: \"ae857e30-60ec-4e9f-867b-da07f179df65\") " pod="openshift-console-operator/console-operator-58897d9998-wbcfc" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.228987 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfn86\" (UniqueName: \"kubernetes.io/projected/f773d9ba-78f9-4b56-8e33-706ee34ad32a-kube-api-access-nfn86\") pod \"openshift-config-operator-7777fb866f-jttv5\" (UID: \"f773d9ba-78f9-4b56-8e33-706ee34ad32a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jttv5" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.229005 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae857e30-60ec-4e9f-867b-da07f179df65-serving-cert\") pod \"console-operator-58897d9998-wbcfc\" (UID: \"ae857e30-60ec-4e9f-867b-da07f179df65\") " pod="openshift-console-operator/console-operator-58897d9998-wbcfc" Nov 29 
07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.229037 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9b02347e-bd6a-4a77-97d4-d8276d1b6167-console-oauth-config\") pod \"console-f9d7485db-w4g7w\" (UID: \"9b02347e-bd6a-4a77-97d4-d8276d1b6167\") " pod="openshift-console/console-f9d7485db-w4g7w" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.229055 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f773d9ba-78f9-4b56-8e33-706ee34ad32a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jttv5\" (UID: \"f773d9ba-78f9-4b56-8e33-706ee34ad32a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jttv5" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.229079 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/094b43dd-ae40-43b1-824b-a8dd47bc9693-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zfqtn\" (UID: \"094b43dd-ae40-43b1-824b-a8dd47bc9693\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zfqtn" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.229094 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cefe5b8-af1f-4518-a53d-6a0151af7517-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-q4pdj\" (UID: \"1cefe5b8-af1f-4518-a53d-6a0151af7517\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4pdj" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.229112 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9b02347e-bd6a-4a77-97d4-d8276d1b6167-service-ca\") pod \"console-f9d7485db-w4g7w\" 
(UID: \"9b02347e-bd6a-4a77-97d4-d8276d1b6167\") " pod="openshift-console/console-f9d7485db-w4g7w" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.229127 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bvlq\" (UniqueName: \"kubernetes.io/projected/ae857e30-60ec-4e9f-867b-da07f179df65-kube-api-access-8bvlq\") pod \"console-operator-58897d9998-wbcfc\" (UID: \"ae857e30-60ec-4e9f-867b-da07f179df65\") " pod="openshift-console-operator/console-operator-58897d9998-wbcfc" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.229151 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d79589e5-c434-4157-8cfd-a51e92aa0c2f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-d2wzd\" (UID: \"d79589e5-c434-4157-8cfd-a51e92aa0c2f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d2wzd" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.229169 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh8p6\" (UniqueName: \"kubernetes.io/projected/892d0338-c59f-481e-8d70-3143d4954f38-kube-api-access-sh8p6\") pod \"downloads-7954f5f757-klnm7\" (UID: \"892d0338-c59f-481e-8d70-3143d4954f38\") " pod="openshift-console/downloads-7954f5f757-klnm7" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.229185 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f773d9ba-78f9-4b56-8e33-706ee34ad32a-serving-cert\") pod \"openshift-config-operator-7777fb866f-jttv5\" (UID: \"f773d9ba-78f9-4b56-8e33-706ee34ad32a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jttv5" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.229202 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9b02347e-bd6a-4a77-97d4-d8276d1b6167-console-serving-cert\") pod \"console-f9d7485db-w4g7w\" (UID: \"9b02347e-bd6a-4a77-97d4-d8276d1b6167\") " pod="openshift-console/console-f9d7485db-w4g7w" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.229226 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9b02347e-bd6a-4a77-97d4-d8276d1b6167-oauth-serving-cert\") pod \"console-f9d7485db-w4g7w\" (UID: \"9b02347e-bd6a-4a77-97d4-d8276d1b6167\") " pod="openshift-console/console-f9d7485db-w4g7w" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.229241 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/768de086-2681-4cee-b71f-a732b317fc64-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-nlnvc\" (UID: \"768de086-2681-4cee-b71f-a732b317fc64\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nlnvc" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.229273 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lst97\" (UniqueName: \"kubernetes.io/projected/094b43dd-ae40-43b1-824b-a8dd47bc9693-kube-api-access-lst97\") pod \"ingress-operator-5b745b69d9-zfqtn\" (UID: \"094b43dd-ae40-43b1-824b-a8dd47bc9693\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zfqtn" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.229300 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/768de086-2681-4cee-b71f-a732b317fc64-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-nlnvc\" (UID: \"768de086-2681-4cee-b71f-a732b317fc64\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nlnvc" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.229950 4795 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9b02347e-bd6a-4a77-97d4-d8276d1b6167-console-config\") pod \"console-f9d7485db-w4g7w\" (UID: \"9b02347e-bd6a-4a77-97d4-d8276d1b6167\") " pod="openshift-console/console-f9d7485db-w4g7w" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.230150 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b02347e-bd6a-4a77-97d4-d8276d1b6167-trusted-ca-bundle\") pod \"console-f9d7485db-w4g7w\" (UID: \"9b02347e-bd6a-4a77-97d4-d8276d1b6167\") " pod="openshift-console/console-f9d7485db-w4g7w" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.230439 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/768de086-2681-4cee-b71f-a732b317fc64-config\") pod \"kube-apiserver-operator-766d6c64bb-nlnvc\" (UID: \"768de086-2681-4cee-b71f-a732b317fc64\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nlnvc" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.230630 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ae857e30-60ec-4e9f-867b-da07f179df65-trusted-ca\") pod \"console-operator-58897d9998-wbcfc\" (UID: \"ae857e30-60ec-4e9f-867b-da07f179df65\") " pod="openshift-console-operator/console-operator-58897d9998-wbcfc" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.230651 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae857e30-60ec-4e9f-867b-da07f179df65-config\") pod \"console-operator-58897d9998-wbcfc\" (UID: \"ae857e30-60ec-4e9f-867b-da07f179df65\") " pod="openshift-console-operator/console-operator-58897d9998-wbcfc" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.231011 4795 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cefe5b8-af1f-4518-a53d-6a0151af7517-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-q4pdj\" (UID: \"1cefe5b8-af1f-4518-a53d-6a0151af7517\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4pdj" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.231349 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9b02347e-bd6a-4a77-97d4-d8276d1b6167-service-ca\") pod \"console-f9d7485db-w4g7w\" (UID: \"9b02347e-bd6a-4a77-97d4-d8276d1b6167\") " pod="openshift-console/console-f9d7485db-w4g7w" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.231360 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f773d9ba-78f9-4b56-8e33-706ee34ad32a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jttv5\" (UID: \"f773d9ba-78f9-4b56-8e33-706ee34ad32a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jttv5" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.231510 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9b02347e-bd6a-4a77-97d4-d8276d1b6167-oauth-serving-cert\") pod \"console-f9d7485db-w4g7w\" (UID: \"9b02347e-bd6a-4a77-97d4-d8276d1b6167\") " pod="openshift-console/console-f9d7485db-w4g7w" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.231955 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9b02347e-bd6a-4a77-97d4-d8276d1b6167-console-oauth-config\") pod \"console-f9d7485db-w4g7w\" (UID: \"9b02347e-bd6a-4a77-97d4-d8276d1b6167\") " pod="openshift-console/console-f9d7485db-w4g7w" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.233305 
4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cefe5b8-af1f-4518-a53d-6a0151af7517-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-q4pdj\" (UID: \"1cefe5b8-af1f-4518-a53d-6a0151af7517\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4pdj" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.233526 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d79589e5-c434-4157-8cfd-a51e92aa0c2f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-d2wzd\" (UID: \"d79589e5-c434-4157-8cfd-a51e92aa0c2f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d2wzd" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.233692 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae857e30-60ec-4e9f-867b-da07f179df65-serving-cert\") pod \"console-operator-58897d9998-wbcfc\" (UID: \"ae857e30-60ec-4e9f-867b-da07f179df65\") " pod="openshift-console-operator/console-operator-58897d9998-wbcfc" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.233924 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9b02347e-bd6a-4a77-97d4-d8276d1b6167-console-serving-cert\") pod \"console-f9d7485db-w4g7w\" (UID: \"9b02347e-bd6a-4a77-97d4-d8276d1b6167\") " pod="openshift-console/console-f9d7485db-w4g7w" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.234121 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f773d9ba-78f9-4b56-8e33-706ee34ad32a-serving-cert\") pod \"openshift-config-operator-7777fb866f-jttv5\" (UID: \"f773d9ba-78f9-4b56-8e33-706ee34ad32a\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-jttv5" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.236579 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/768de086-2681-4cee-b71f-a732b317fc64-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-nlnvc\" (UID: \"768de086-2681-4cee-b71f-a732b317fc64\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nlnvc" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.237441 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.257446 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.262734 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/094b43dd-ae40-43b1-824b-a8dd47bc9693-metrics-tls\") pod \"ingress-operator-5b745b69d9-zfqtn\" (UID: \"094b43dd-ae40-43b1-824b-a8dd47bc9693\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zfqtn" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.276832 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.276853 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.276863 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.276840 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.283926 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.290364 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/094b43dd-ae40-43b1-824b-a8dd47bc9693-trusted-ca\") pod \"ingress-operator-5b745b69d9-zfqtn\" (UID: \"094b43dd-ae40-43b1-824b-a8dd47bc9693\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zfqtn" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.298194 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.359482 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.378363 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.398576 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.418316 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.438166 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.458631 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 
07:41:36.478342 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.498212 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.518271 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.537924 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.558001 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.577999 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.598320 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.618709 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.638488 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.658006 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.678106 4795 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.697401 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.717450 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.739151 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.758309 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.778176 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.798449 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.817325 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.837785 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.858271 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.878516 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.897860 4795 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.918260 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.938430 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.957965 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.978154 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 29 07:41:36 crc kubenswrapper[4795]: I1129 07:41:36.998349 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.017948 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.038749 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.058218 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.077943 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 29 07:41:37 crc 
kubenswrapper[4795]: I1129 07:41:37.097376 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.116348 4795 request.go:700] Waited for 1.012840537s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-controller-dockercfg-c2lfx&limit=500&resourceVersion=0 Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.117893 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.138152 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.157482 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.178716 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.198380 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.218282 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.237867 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.258011 4795 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.284795 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.297459 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.317379 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.338086 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.358136 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.377501 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.398081 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.432020 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdzp2\" (UniqueName: \"kubernetes.io/projected/099aeef3-9f86-47a1-bc18-3784b8e87bcd-kube-api-access-zdzp2\") pod \"controller-manager-879f6c89f-sjwfz\" (UID: \"099aeef3-9f86-47a1-bc18-3784b8e87bcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sjwfz" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.452826 4795 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-ztp6z\" (UniqueName: \"kubernetes.io/projected/0577cd40-0974-4c45-8be4-9458e712c6e5-kube-api-access-ztp6z\") pod \"authentication-operator-69f744f599-gxss8\" (UID: \"0577cd40-0974-4c45-8be4-9458e712c6e5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxss8" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.462925 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-sjwfz" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.473496 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9p6r\" (UniqueName: \"kubernetes.io/projected/725af35a-cc1c-4178-ae7f-e909af583a5f-kube-api-access-f9p6r\") pod \"machine-api-operator-5694c8668f-v4fx7\" (UID: \"725af35a-cc1c-4178-ae7f-e909af583a5f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-v4fx7" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.491771 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4b4r\" (UniqueName: \"kubernetes.io/projected/42062f93-4804-4818-95b5-2b6b3225c433-kube-api-access-f4b4r\") pod \"machine-approver-56656f9798-xdslc\" (UID: \"42062f93-4804-4818-95b5-2b6b3225c433\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xdslc" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.507121 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xdslc" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.513888 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vqf7\" (UniqueName: \"kubernetes.io/projected/edf98afb-2fd4-45a3-afcf-2e7c6a50c5fc-kube-api-access-2vqf7\") pod \"openshift-apiserver-operator-796bbdcf4f-qbnb4\" (UID: \"edf98afb-2fd4-45a3-afcf-2e7c6a50c5fc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qbnb4" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.517900 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 29 07:41:37 crc kubenswrapper[4795]: W1129 07:41:37.522364 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42062f93_4804_4818_95b5_2b6b3225c433.slice/crio-8ef8dad81c1a4edb5f2f41e6c20e119cb2483037ebfe58f5aab3d711a86a5d68 WatchSource:0}: Error finding container 8ef8dad81c1a4edb5f2f41e6c20e119cb2483037ebfe58f5aab3d711a86a5d68: Status 404 returned error can't find the container with id 8ef8dad81c1a4edb5f2f41e6c20e119cb2483037ebfe58f5aab3d711a86a5d68 Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.541493 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-v4fx7" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.556141 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdslk\" (UniqueName: \"kubernetes.io/projected/3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd-kube-api-access-rdslk\") pod \"apiserver-7bbb656c7d-6w84b\" (UID: \"3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6w84b" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.559112 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.580407 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-gxss8" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.596725 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df5wx\" (UniqueName: \"kubernetes.io/projected/b6996662-d230-4299-b913-b6bc38c50ef5-kube-api-access-df5wx\") pod \"apiserver-76f77b778f-lgd6b\" (UID: \"b6996662-d230-4299-b913-b6bc38c50ef5\") " pod="openshift-apiserver/apiserver-76f77b778f-lgd6b" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.597898 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.601494 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qbnb4" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.614501 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xdslc" event={"ID":"42062f93-4804-4818-95b5-2b6b3225c433","Type":"ContainerStarted","Data":"8ef8dad81c1a4edb5f2f41e6c20e119cb2483037ebfe58f5aab3d711a86a5d68"} Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.617274 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.640331 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.663243 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.664865 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-sjwfz"] Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.679120 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.719768 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.720291 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7v8x\" (UniqueName: \"kubernetes.io/projected/84c50c19-d82f-444f-9558-6f9932e3ff86-kube-api-access-f7v8x\") pod \"route-controller-manager-6576b87f9c-kfx9d\" (UID: \"84c50c19-d82f-444f-9558-6f9932e3ff86\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfx9d" Nov 29 
07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.737940 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.741156 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-v4fx7"] Nov 29 07:41:37 crc kubenswrapper[4795]: W1129 07:41:37.750996 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod725af35a_cc1c_4178_ae7f_e909af583a5f.slice/crio-a1a4f2e44ceb61ba99ddd12d95ba1f27f1bf3a2a9a3ddab23a97b0292627e09a WatchSource:0}: Error finding container a1a4f2e44ceb61ba99ddd12d95ba1f27f1bf3a2a9a3ddab23a97b0292627e09a: Status 404 returned error can't find the container with id a1a4f2e44ceb61ba99ddd12d95ba1f27f1bf3a2a9a3ddab23a97b0292627e09a Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.752461 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-lgd6b" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.758191 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.777188 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.799052 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.817724 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gxss8"] Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.818327 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.820446 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6w84b" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.832465 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfx9d" Nov 29 07:41:37 crc kubenswrapper[4795]: W1129 07:41:37.834346 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0577cd40_0974_4c45_8be4_9458e712c6e5.slice/crio-8c7e954598d0351ffa9746ba34dceab41dcd13b41178517f331ec1e6f962d4bf WatchSource:0}: Error finding container 8c7e954598d0351ffa9746ba34dceab41dcd13b41178517f331ec1e6f962d4bf: Status 404 returned error can't find the container with id 8c7e954598d0351ffa9746ba34dceab41dcd13b41178517f331ec1e6f962d4bf Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.840539 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.855533 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qbnb4"] Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.858162 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.879156 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 29 07:41:37 crc kubenswrapper[4795]: W1129 07:41:37.897656 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedf98afb_2fd4_45a3_afcf_2e7c6a50c5fc.slice/crio-de5544c48451b9109bf3c8780e6d9a131a9a0f7d24f3d75c09337de6202a8ef1 WatchSource:0}: Error finding container de5544c48451b9109bf3c8780e6d9a131a9a0f7d24f3d75c09337de6202a8ef1: Status 404 returned error can't find the container with id de5544c48451b9109bf3c8780e6d9a131a9a0f7d24f3d75c09337de6202a8ef1 Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 
07:41:37.898763 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.917577 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.926574 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-lgd6b"] Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.938411 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 29 07:41:37 crc kubenswrapper[4795]: W1129 07:41:37.941822 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6996662_d230_4299_b913_b6bc38c50ef5.slice/crio-9b00f5f095dc071ef81b1248f44a77de2f8c53f8e397a4c806d7c05d61ba32cf WatchSource:0}: Error finding container 9b00f5f095dc071ef81b1248f44a77de2f8c53f8e397a4c806d7c05d61ba32cf: Status 404 returned error can't find the container with id 9b00f5f095dc071ef81b1248f44a77de2f8c53f8e397a4c806d7c05d61ba32cf Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.963466 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.978913 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 29 07:41:37 crc kubenswrapper[4795]: I1129 07:41:37.998480 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.017612 4795 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.037534 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.056322 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfx9d"] Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.058168 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.078191 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.098057 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.110778 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6w84b"] Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.117952 4795 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 29 07:41:38 crc kubenswrapper[4795]: W1129 07:41:38.122260 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fa186eb_0f11_42e6_a1d4_2f42d1a7dadd.slice/crio-30c23d69edbaf3ae06e8cae68f00b75fd1a22aba452f73b4180abf6954efd3fd WatchSource:0}: Error finding container 30c23d69edbaf3ae06e8cae68f00b75fd1a22aba452f73b4180abf6954efd3fd: Status 404 returned error can't find the container with id 30c23d69edbaf3ae06e8cae68f00b75fd1a22aba452f73b4180abf6954efd3fd Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.135735 4795 request.go:700] Waited for 
1.925231415s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.143830 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.178219 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czxcz\" (UniqueName: \"kubernetes.io/projected/d79589e5-c434-4157-8cfd-a51e92aa0c2f-kube-api-access-czxcz\") pod \"cluster-samples-operator-665b6dd947-d2wzd\" (UID: \"d79589e5-c434-4157-8cfd-a51e92aa0c2f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d2wzd" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.191703 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-886lh\" (UniqueName: \"kubernetes.io/projected/9b02347e-bd6a-4a77-97d4-d8276d1b6167-kube-api-access-886lh\") pod \"console-f9d7485db-w4g7w\" (UID: \"9b02347e-bd6a-4a77-97d4-d8276d1b6167\") " pod="openshift-console/console-f9d7485db-w4g7w" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.215286 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dkcr\" (UniqueName: \"kubernetes.io/projected/1cefe5b8-af1f-4518-a53d-6a0151af7517-kube-api-access-2dkcr\") pod \"openshift-controller-manager-operator-756b6f6bc6-q4pdj\" (UID: \"1cefe5b8-af1f-4518-a53d-6a0151af7517\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4pdj" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.232126 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfn86\" (UniqueName: \"kubernetes.io/projected/f773d9ba-78f9-4b56-8e33-706ee34ad32a-kube-api-access-nfn86\") pod 
\"openshift-config-operator-7777fb866f-jttv5\" (UID: \"f773d9ba-78f9-4b56-8e33-706ee34ad32a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jttv5" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.254512 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh8p6\" (UniqueName: \"kubernetes.io/projected/892d0338-c59f-481e-8d70-3143d4954f38-kube-api-access-sh8p6\") pod \"downloads-7954f5f757-klnm7\" (UID: \"892d0338-c59f-481e-8d70-3143d4954f38\") " pod="openshift-console/downloads-7954f5f757-klnm7" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.268778 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-klnm7" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.280483 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-w4g7w" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.296121 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d2wzd" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.298975 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/768de086-2681-4cee-b71f-a732b317fc64-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-nlnvc\" (UID: \"768de086-2681-4cee-b71f-a732b317fc64\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nlnvc" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.312960 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lst97\" (UniqueName: \"kubernetes.io/projected/094b43dd-ae40-43b1-824b-a8dd47bc9693-kube-api-access-lst97\") pod \"ingress-operator-5b745b69d9-zfqtn\" (UID: \"094b43dd-ae40-43b1-824b-a8dd47bc9693\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zfqtn" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.325288 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4pdj" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.332493 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/094b43dd-ae40-43b1-824b-a8dd47bc9693-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zfqtn\" (UID: \"094b43dd-ae40-43b1-824b-a8dd47bc9693\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zfqtn" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.332718 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nlnvc" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.338035 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.359161 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.362984 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zfqtn" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.378302 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.398426 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.417747 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.438133 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.680271 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bvlq\" (UniqueName: \"kubernetes.io/projected/ae857e30-60ec-4e9f-867b-da07f179df65-kube-api-access-8bvlq\") pod \"console-operator-58897d9998-wbcfc\" (UID: \"ae857e30-60ec-4e9f-867b-da07f179df65\") " pod="openshift-console-operator/console-operator-58897d9998-wbcfc" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.685116 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jttv5" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.685177 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04596222-6779-478e-96cd-3aa99a923aa4-trusted-ca\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.685427 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/04596222-6779-478e-96cd-3aa99a923aa4-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.685488 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.685530 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/04596222-6779-478e-96cd-3aa99a923aa4-registry-tls\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.685560 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/04596222-6779-478e-96cd-3aa99a923aa4-ca-trust-extracted\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.686142 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/04596222-6779-478e-96cd-3aa99a923aa4-bound-sa-token\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.686173 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhph7\" (UniqueName: \"kubernetes.io/projected/04596222-6779-478e-96cd-3aa99a923aa4-kube-api-access-mhph7\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.686465 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/04596222-6779-478e-96cd-3aa99a923aa4-registry-certificates\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:38 crc kubenswrapper[4795]: E1129 07:41:38.686814 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:39.186578502 +0000 UTC m=+145.162154292 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.702441 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-lgd6b" event={"ID":"b6996662-d230-4299-b913-b6bc38c50ef5","Type":"ContainerStarted","Data":"9b00f5f095dc071ef81b1248f44a77de2f8c53f8e397a4c806d7c05d61ba32cf"} Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.702700 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-klnm7"] Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.721060 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xdslc" event={"ID":"42062f93-4804-4818-95b5-2b6b3225c433","Type":"ContainerStarted","Data":"0ef3d7e292474e242c003bdf543dfd1903bb7da7b0686b02f6a21ecf19fe9d26"} Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.743020 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-v4fx7" event={"ID":"725af35a-cc1c-4178-ae7f-e909af583a5f","Type":"ContainerStarted","Data":"4999b0b2049ff3f67c0e8952fba0d55bc07963b23ac37ec7d70b5e30e6546675"} Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.743097 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-v4fx7" event={"ID":"725af35a-cc1c-4178-ae7f-e909af583a5f","Type":"ContainerStarted","Data":"a1a4f2e44ceb61ba99ddd12d95ba1f27f1bf3a2a9a3ddab23a97b0292627e09a"} Nov 29 07:41:38 crc 
kubenswrapper[4795]: I1129 07:41:38.748454 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-gxss8" event={"ID":"0577cd40-0974-4c45-8be4-9458e712c6e5","Type":"ContainerStarted","Data":"5ad6ec6ea49f1d956ba7181bc0f850b49a2390ae9625d05ba8d95f9e80e3863e"} Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.748514 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-gxss8" event={"ID":"0577cd40-0974-4c45-8be4-9458e712c6e5","Type":"ContainerStarted","Data":"8c7e954598d0351ffa9746ba34dceab41dcd13b41178517f331ec1e6f962d4bf"} Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.790143 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:38 crc kubenswrapper[4795]: E1129 07:41:38.790227 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:39.290207931 +0000 UTC m=+145.265783721 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.791148 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.791192 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/26978e65-c4ba-410c-b61a-9b4157d71e78-etcd-service-ca\") pod \"etcd-operator-b45778765-mznng\" (UID: \"26978e65-c4ba-410c-b61a-9b4157d71e78\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mznng" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.791232 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.791257 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/24f58977-7eb4-4fb1-a812-8b656753537d-images\") pod \"machine-config-operator-74547568cd-6x84f\" (UID: \"24f58977-7eb4-4fb1-a812-8b656753537d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6x84f" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.791299 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04596222-6779-478e-96cd-3aa99a923aa4-trusted-ca\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.791321 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15ff94b5-aaa1-4673-8aab-04ed7f44ce88-serving-cert\") pod \"service-ca-operator-777779d784-d7qg2\" (UID: \"15ff94b5-aaa1-4673-8aab-04ed7f44ce88\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d7qg2" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.791365 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2fc75ee7-f4b7-40e9-9163-2b5dfac54d24-signing-key\") pod \"service-ca-9c57cc56f-6swbg\" (UID: \"2fc75ee7-f4b7-40e9-9163-2b5dfac54d24\") " pod="openshift-service-ca/service-ca-9c57cc56f-6swbg" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.791404 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26978e65-c4ba-410c-b61a-9b4157d71e78-serving-cert\") pod \"etcd-operator-b45778765-mznng\" (UID: \"26978e65-c4ba-410c-b61a-9b4157d71e78\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mznng" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.791467 4795 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/34ec03b0-6d17-4934-84fa-4323c7599fd0-audit-dir\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.791495 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/04596222-6779-478e-96cd-3aa99a923aa4-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.791544 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.791571 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv5wf\" (UniqueName: \"kubernetes.io/projected/15ff94b5-aaa1-4673-8aab-04ed7f44ce88-kube-api-access-cv5wf\") pod \"service-ca-operator-777779d784-d7qg2\" (UID: \"15ff94b5-aaa1-4673-8aab-04ed7f44ce88\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d7qg2" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.791680 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/04596222-6779-478e-96cd-3aa99a923aa4-registry-tls\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: 
\"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.791707 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/34ec03b0-6d17-4934-84fa-4323c7599fd0-audit-policies\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.791732 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/26978e65-c4ba-410c-b61a-9b4157d71e78-etcd-ca\") pod \"etcd-operator-b45778765-mznng\" (UID: \"26978e65-c4ba-410c-b61a-9b4157d71e78\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mznng" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.792059 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl6v7\" (UniqueName: \"kubernetes.io/projected/7a5cb1ec-767a-42f1-90c7-870910e5e5d9-kube-api-access-fl6v7\") pod \"cluster-image-registry-operator-dc59b4c8b-mz266\" (UID: \"7a5cb1ec-767a-42f1-90c7-870910e5e5d9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mz266" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.792385 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/24f58977-7eb4-4fb1-a812-8b656753537d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-6x84f\" (UID: \"24f58977-7eb4-4fb1-a812-8b656753537d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6x84f" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.796203 4795 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-controller-manager/controller-manager-879f6c89f-sjwfz" event={"ID":"099aeef3-9f86-47a1-bc18-3784b8e87bcd","Type":"ContainerStarted","Data":"ebf74afe6c83cd03b3a294268c80a8e5451ce88c9a81433e530df92fd69e6d41"} Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.796239 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-sjwfz" event={"ID":"099aeef3-9f86-47a1-bc18-3784b8e87bcd","Type":"ContainerStarted","Data":"eb68c6013ef112ccd9569693c5fe89a89810c26e6d5906acf1109a5c4261dd6d"} Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.797136 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04596222-6779-478e-96cd-3aa99a923aa4-trusted-ca\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:38 crc kubenswrapper[4795]: E1129 07:41:38.802448 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:39.302428682 +0000 UTC m=+145.278004552 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.804270 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/04596222-6779-478e-96cd-3aa99a923aa4-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.811631 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e2701908-11c2-44df-9318-4868e49d7ebc-certs\") pod \"machine-config-server-2fgjg\" (UID: \"e2701908-11c2-44df-9318-4868e49d7ebc\") " pod="openshift-machine-config-operator/machine-config-server-2fgjg" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.812294 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.812407 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/7a5cb1ec-767a-42f1-90c7-870910e5e5d9-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mz266\" (UID: \"7a5cb1ec-767a-42f1-90c7-870910e5e5d9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mz266" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.812760 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/04596222-6779-478e-96cd-3aa99a923aa4-registry-certificates\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.813316 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.813355 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7a5cb1ec-767a-42f1-90c7-870910e5e5d9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mz266\" (UID: \"7a5cb1ec-767a-42f1-90c7-870910e5e5d9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mz266" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.813398 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpnvv\" (UniqueName: \"kubernetes.io/projected/e2701908-11c2-44df-9318-4868e49d7ebc-kube-api-access-fpnvv\") pod \"machine-config-server-2fgjg\" (UID: \"e2701908-11c2-44df-9318-4868e49d7ebc\") " 
pod="openshift-machine-config-operator/machine-config-server-2fgjg" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.813699 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zk7r\" (UniqueName: \"kubernetes.io/projected/24f58977-7eb4-4fb1-a812-8b656753537d-kube-api-access-7zk7r\") pod \"machine-config-operator-74547568cd-6x84f\" (UID: \"24f58977-7eb4-4fb1-a812-8b656753537d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6x84f" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.813780 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh5xp\" (UniqueName: \"kubernetes.io/projected/26978e65-c4ba-410c-b61a-9b4157d71e78-kube-api-access-kh5xp\") pod \"etcd-operator-b45778765-mznng\" (UID: \"26978e65-c4ba-410c-b61a-9b4157d71e78\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mznng" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.813887 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-529k2\" (UniqueName: \"kubernetes.io/projected/3028c8af-2d6e-4674-9ff8-dff8f68c511c-kube-api-access-529k2\") pod \"ingress-canary-twmws\" (UID: \"3028c8af-2d6e-4674-9ff8-dff8f68c511c\") " pod="openshift-ingress-canary/ingress-canary-twmws" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.814026 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gnlv\" (UniqueName: \"kubernetes.io/projected/34ec03b0-6d17-4934-84fa-4323c7599fd0-kube-api-access-9gnlv\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.814229 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.814252 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.814269 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15ff94b5-aaa1-4673-8aab-04ed7f44ce88-config\") pod \"service-ca-operator-777779d784-d7qg2\" (UID: \"15ff94b5-aaa1-4673-8aab-04ed7f44ce88\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d7qg2" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.814317 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/04596222-6779-478e-96cd-3aa99a923aa4-registry-certificates\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.814348 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: 
\"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.815867 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7a5cb1ec-767a-42f1-90c7-870910e5e5d9-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mz266\" (UID: \"7a5cb1ec-767a-42f1-90c7-870910e5e5d9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mz266" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.815984 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/04596222-6779-478e-96cd-3aa99a923aa4-ca-trust-extracted\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.816743 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/04596222-6779-478e-96cd-3aa99a923aa4-ca-trust-extracted\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.816863 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e2701908-11c2-44df-9318-4868e49d7ebc-node-bootstrap-token\") pod \"machine-config-server-2fgjg\" (UID: \"e2701908-11c2-44df-9318-4868e49d7ebc\") " pod="openshift-machine-config-operator/machine-config-server-2fgjg" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.816899 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-tls\" (UniqueName: \"kubernetes.io/secret/24f58977-7eb4-4fb1-a812-8b656753537d-proxy-tls\") pod \"machine-config-operator-74547568cd-6x84f\" (UID: \"24f58977-7eb4-4fb1-a812-8b656753537d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6x84f" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.816922 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/04596222-6779-478e-96cd-3aa99a923aa4-bound-sa-token\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.816941 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhph7\" (UniqueName: \"kubernetes.io/projected/04596222-6779-478e-96cd-3aa99a923aa4-kube-api-access-mhph7\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.816972 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.816994 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3028c8af-2d6e-4674-9ff8-dff8f68c511c-cert\") pod \"ingress-canary-twmws\" (UID: \"3028c8af-2d6e-4674-9ff8-dff8f68c511c\") " pod="openshift-ingress-canary/ingress-canary-twmws" Nov 29 07:41:38 crc 
kubenswrapper[4795]: I1129 07:41:38.817027 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.817043 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.817057 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.817076 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26978e65-c4ba-410c-b61a-9b4157d71e78-config\") pod \"etcd-operator-b45778765-mznng\" (UID: \"26978e65-c4ba-410c-b61a-9b4157d71e78\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mznng" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.817099 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/26978e65-c4ba-410c-b61a-9b4157d71e78-etcd-client\") pod \"etcd-operator-b45778765-mznng\" (UID: \"26978e65-c4ba-410c-b61a-9b4157d71e78\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mznng" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.819355 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/04596222-6779-478e-96cd-3aa99a923aa4-registry-tls\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.843323 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/04596222-6779-478e-96cd-3aa99a923aa4-bound-sa-token\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.849425 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhph7\" (UniqueName: \"kubernetes.io/projected/04596222-6779-478e-96cd-3aa99a923aa4-kube-api-access-mhph7\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.849716 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfx9d" event={"ID":"84c50c19-d82f-444f-9558-6f9932e3ff86","Type":"ContainerStarted","Data":"353fc410fff6bbd6da1fafe3aa492b30c49b74953879186129640a95724c6a94"} Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.866058 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qbnb4" 
event={"ID":"edf98afb-2fd4-45a3-afcf-2e7c6a50c5fc","Type":"ContainerStarted","Data":"308070ce9c1d3b0b79dcc2e2f32a5f58d2a64c5f4ea61317c6fe207c396d2442"} Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.866102 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qbnb4" event={"ID":"edf98afb-2fd4-45a3-afcf-2e7c6a50c5fc","Type":"ContainerStarted","Data":"de5544c48451b9109bf3c8780e6d9a131a9a0f7d24f3d75c09337de6202a8ef1"} Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.880424 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6w84b" event={"ID":"3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd","Type":"ContainerStarted","Data":"30c23d69edbaf3ae06e8cae68f00b75fd1a22aba452f73b4180abf6954efd3fd"} Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.908475 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-wbcfc" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.918153 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.918686 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/acfaa12b-166f-4c03-a208-6ed705af199d-registration-dir\") pod \"csi-hostpathplugin-4jk72\" (UID: \"acfaa12b-166f-4c03-a208-6ed705af199d\") " pod="hostpath-provisioner/csi-hostpathplugin-4jk72" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.918709 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe1144c0-6804-4a78-bad8-3319ddb3c30c-service-ca-bundle\") pod \"router-default-5444994796-692g9\" (UID: \"fe1144c0-6804-4a78-bad8-3319ddb3c30c\") " pod="openshift-ingress/router-default-5444994796-692g9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.918729 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/be320286-1cef-4b6f-92a0-e9e66a34ad3e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qvn2d\" (UID: \"be320286-1cef-4b6f-92a0-e9e66a34ad3e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qvn2d" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.918753 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7a5cb1ec-767a-42f1-90c7-870910e5e5d9-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mz266\" (UID: \"7a5cb1ec-767a-42f1-90c7-870910e5e5d9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mz266" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.918771 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94527576-d5a0-4b6e-a3d2-3e1df7494d95-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r76lc\" (UID: \"94527576-d5a0-4b6e-a3d2-3e1df7494d95\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r76lc" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.918789 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2265\" (UniqueName: \"kubernetes.io/projected/3afcac64-b24d-4150-a997-123947d36f3a-kube-api-access-t2265\") pod 
\"migrator-59844c95c7-5g9td\" (UID: \"3afcac64-b24d-4150-a997-123947d36f3a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5g9td" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.918806 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5255a440-64ac-4f31-8a9e-4c3688ec128a-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-l2jzc\" (UID: \"5255a440-64ac-4f31-8a9e-4c3688ec128a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l2jzc" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.918822 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2aba5f59-2760-4b54-94a0-c051d081ce70-srv-cert\") pod \"catalog-operator-68c6474976-bf7sx\" (UID: \"2aba5f59-2760-4b54-94a0-c051d081ce70\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bf7sx" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.918837 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nht7v\" (UniqueName: \"kubernetes.io/projected/2aba5f59-2760-4b54-94a0-c051d081ce70-kube-api-access-nht7v\") pod \"catalog-operator-68c6474976-bf7sx\" (UID: \"2aba5f59-2760-4b54-94a0-c051d081ce70\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bf7sx" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.918859 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/acfaa12b-166f-4c03-a208-6ed705af199d-plugins-dir\") pod \"csi-hostpathplugin-4jk72\" (UID: \"acfaa12b-166f-4c03-a208-6ed705af199d\") " pod="hostpath-provisioner/csi-hostpathplugin-4jk72" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.918879 4795 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e2701908-11c2-44df-9318-4868e49d7ebc-node-bootstrap-token\") pod \"machine-config-server-2fgjg\" (UID: \"e2701908-11c2-44df-9318-4868e49d7ebc\") " pod="openshift-machine-config-operator/machine-config-server-2fgjg" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.918896 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ecf9ad47-4004-45d4-84e6-986e63258092-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-blw2q\" (UID: \"ecf9ad47-4004-45d4-84e6-986e63258092\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-blw2q" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.918915 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/24f58977-7eb4-4fb1-a812-8b656753537d-proxy-tls\") pod \"machine-config-operator-74547568cd-6x84f\" (UID: \"24f58977-7eb4-4fb1-a812-8b656753537d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6x84f" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.918931 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f54838e7-e080-4c36-9f22-37173dca9044-srv-cert\") pod \"olm-operator-6b444d44fb-tgqrw\" (UID: \"f54838e7-e080-4c36-9f22-37173dca9044\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tgqrw" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.918947 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: 
\"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.918963 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3028c8af-2d6e-4674-9ff8-dff8f68c511c-cert\") pod \"ingress-canary-twmws\" (UID: \"3028c8af-2d6e-4674-9ff8-dff8f68c511c\") " pod="openshift-ingress-canary/ingress-canary-twmws" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.918979 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2aba5f59-2760-4b54-94a0-c051d081ce70-profile-collector-cert\") pod \"catalog-operator-68c6474976-bf7sx\" (UID: \"2aba5f59-2760-4b54-94a0-c051d081ce70\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bf7sx" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.918995 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f25efaa-5792-4a51-ba83-e8733af29fdf-config-volume\") pod \"collect-profiles-29406690-dtj9t\" (UID: \"4f25efaa-5792-4a51-ba83-e8733af29fdf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-dtj9t" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.919010 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fe1144c0-6804-4a78-bad8-3319ddb3c30c-default-certificate\") pod \"router-default-5444994796-692g9\" (UID: \"fe1144c0-6804-4a78-bad8-3319ddb3c30c\") " pod="openshift-ingress/router-default-5444994796-692g9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.919036 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" 
(UniqueName: \"kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.919058 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.919084 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.919104 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26978e65-c4ba-410c-b61a-9b4157d71e78-config\") pod \"etcd-operator-b45778765-mznng\" (UID: \"26978e65-c4ba-410c-b61a-9b4157d71e78\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mznng" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.919123 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/26978e65-c4ba-410c-b61a-9b4157d71e78-etcd-client\") pod \"etcd-operator-b45778765-mznng\" (UID: \"26978e65-c4ba-410c-b61a-9b4157d71e78\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mznng" Nov 29 07:41:38 crc kubenswrapper[4795]: 
I1129 07:41:38.919171 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4336a365-68f2-4aa1-9818-efdd9ab0b9f8-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2zjvc\" (UID: \"4336a365-68f2-4aa1-9818-efdd9ab0b9f8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2zjvc" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.919194 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/26978e65-c4ba-410c-b61a-9b4157d71e78-etcd-service-ca\") pod \"etcd-operator-b45778765-mznng\" (UID: \"26978e65-c4ba-410c-b61a-9b4157d71e78\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mznng" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.919213 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.919234 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.919253 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/24f58977-7eb4-4fb1-a812-8b656753537d-images\") pod 
\"machine-config-operator-74547568cd-6x84f\" (UID: \"24f58977-7eb4-4fb1-a812-8b656753537d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6x84f" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.919292 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjhhf\" (UniqueName: \"kubernetes.io/projected/4f25efaa-5792-4a51-ba83-e8733af29fdf-kube-api-access-bjhhf\") pod \"collect-profiles-29406690-dtj9t\" (UID: \"4f25efaa-5792-4a51-ba83-e8733af29fdf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-dtj9t" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.919310 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz574\" (UniqueName: \"kubernetes.io/projected/f54838e7-e080-4c36-9f22-37173dca9044-kube-api-access-gz574\") pod \"olm-operator-6b444d44fb-tgqrw\" (UID: \"f54838e7-e080-4c36-9f22-37173dca9044\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tgqrw" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.919325 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7390eae4-5ab6-4606-90d5-e5468d745208-config-volume\") pod \"dns-default-ddtz8\" (UID: \"7390eae4-5ab6-4606-90d5-e5468d745208\") " pod="openshift-dns/dns-default-ddtz8" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.919345 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15ff94b5-aaa1-4673-8aab-04ed7f44ce88-serving-cert\") pod \"service-ca-operator-777779d784-d7qg2\" (UID: \"15ff94b5-aaa1-4673-8aab-04ed7f44ce88\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d7qg2" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.919366 4795 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fp5n\" (UniqueName: \"kubernetes.io/projected/2fc75ee7-f4b7-40e9-9163-2b5dfac54d24-kube-api-access-8fp5n\") pod \"service-ca-9c57cc56f-6swbg\" (UID: \"2fc75ee7-f4b7-40e9-9163-2b5dfac54d24\") " pod="openshift-service-ca/service-ca-9c57cc56f-6swbg" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.919388 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94527576-d5a0-4b6e-a3d2-3e1df7494d95-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r76lc\" (UID: \"94527576-d5a0-4b6e-a3d2-3e1df7494d95\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r76lc" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.919430 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct24d\" (UniqueName: \"kubernetes.io/projected/ecf9ad47-4004-45d4-84e6-986e63258092-kube-api-access-ct24d\") pod \"multus-admission-controller-857f4d67dd-blw2q\" (UID: \"ecf9ad47-4004-45d4-84e6-986e63258092\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-blw2q" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.919456 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2fc75ee7-f4b7-40e9-9163-2b5dfac54d24-signing-key\") pod \"service-ca-9c57cc56f-6swbg\" (UID: \"2fc75ee7-f4b7-40e9-9163-2b5dfac54d24\") " pod="openshift-service-ca/service-ca-9c57cc56f-6swbg" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.919471 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4336a365-68f2-4aa1-9818-efdd9ab0b9f8-config\") pod \"kube-controller-manager-operator-78b949d7b-2zjvc\" (UID: 
\"4336a365-68f2-4aa1-9818-efdd9ab0b9f8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2zjvc" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.919487 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7390eae4-5ab6-4606-90d5-e5468d745208-metrics-tls\") pod \"dns-default-ddtz8\" (UID: \"7390eae4-5ab6-4606-90d5-e5468d745208\") " pod="openshift-dns/dns-default-ddtz8" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.919515 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26978e65-c4ba-410c-b61a-9b4157d71e78-serving-cert\") pod \"etcd-operator-b45778765-mznng\" (UID: \"26978e65-c4ba-410c-b61a-9b4157d71e78\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mznng" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.919531 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3ca85e73-a2e9-416e-9542-5ea8f2f3aea5-metrics-tls\") pod \"dns-operator-744455d44c-lwxtb\" (UID: \"3ca85e73-a2e9-416e-9542-5ea8f2f3aea5\") " pod="openshift-dns-operator/dns-operator-744455d44c-lwxtb" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.919547 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz874\" (UniqueName: \"kubernetes.io/projected/3ca85e73-a2e9-416e-9542-5ea8f2f3aea5-kube-api-access-fz874\") pod \"dns-operator-744455d44c-lwxtb\" (UID: \"3ca85e73-a2e9-416e-9542-5ea8f2f3aea5\") " pod="openshift-dns-operator/dns-operator-744455d44c-lwxtb" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.919564 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/eed419fc-89c2-4aa1-ba4a-4caf47aa0181-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9bzls\" (UID: \"eed419fc-89c2-4aa1-ba4a-4caf47aa0181\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9bzls" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.920645 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/26978e65-c4ba-410c-b61a-9b4157d71e78-etcd-service-ca\") pod \"etcd-operator-b45778765-mznng\" (UID: \"26978e65-c4ba-410c-b61a-9b4157d71e78\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mznng" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.925102 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4336a365-68f2-4aa1-9818-efdd9ab0b9f8-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2zjvc\" (UID: \"4336a365-68f2-4aa1-9818-efdd9ab0b9f8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2zjvc" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.925146 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5550b70a-4101-4b56-8f88-c6339baaf188-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-l2khx\" (UID: \"5550b70a-4101-4b56-8f88-c6339baaf188\") " pod="openshift-marketplace/marketplace-operator-79b997595-l2khx" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.925176 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qx8f\" (UniqueName: \"kubernetes.io/projected/a9829eb2-ba44-41ca-a0f7-fd92d6114927-kube-api-access-9qx8f\") pod \"control-plane-machine-set-operator-78cbb6b69f-t6scb\" (UID: 
\"a9829eb2-ba44-41ca-a0f7-fd92d6114927\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t6scb" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.925193 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5kpq\" (UniqueName: \"kubernetes.io/projected/5550b70a-4101-4b56-8f88-c6339baaf188-kube-api-access-c5kpq\") pod \"marketplace-operator-79b997595-l2khx\" (UID: \"5550b70a-4101-4b56-8f88-c6339baaf188\") " pod="openshift-marketplace/marketplace-operator-79b997595-l2khx" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.925209 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f54838e7-e080-4c36-9f22-37173dca9044-profile-collector-cert\") pod \"olm-operator-6b444d44fb-tgqrw\" (UID: \"f54838e7-e080-4c36-9f22-37173dca9044\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tgqrw" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.925232 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/34ec03b0-6d17-4934-84fa-4323c7599fd0-audit-dir\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.925248 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/306cb351-6b97-406a-b3af-741e0d81e630-webhook-cert\") pod \"packageserver-d55dfcdfc-hx8c6\" (UID: \"306cb351-6b97-406a-b3af-741e0d81e630\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hx8c6" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.925290 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cv5wf\" (UniqueName: \"kubernetes.io/projected/15ff94b5-aaa1-4673-8aab-04ed7f44ce88-kube-api-access-cv5wf\") pod \"service-ca-operator-777779d784-d7qg2\" (UID: \"15ff94b5-aaa1-4673-8aab-04ed7f44ce88\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d7qg2" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.925308 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg859\" (UniqueName: \"kubernetes.io/projected/5255a440-64ac-4f31-8a9e-4c3688ec128a-kube-api-access-fg859\") pod \"machine-config-controller-84d6567774-l2jzc\" (UID: \"5255a440-64ac-4f31-8a9e-4c3688ec128a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l2jzc" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.925326 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/34ec03b0-6d17-4934-84fa-4323c7599fd0-audit-policies\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.925343 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/26978e65-c4ba-410c-b61a-9b4157d71e78-etcd-ca\") pod \"etcd-operator-b45778765-mznng\" (UID: \"26978e65-c4ba-410c-b61a-9b4157d71e78\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mznng" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.925362 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl6v7\" (UniqueName: \"kubernetes.io/projected/7a5cb1ec-767a-42f1-90c7-870910e5e5d9-kube-api-access-fl6v7\") pod \"cluster-image-registry-operator-dc59b4c8b-mz266\" (UID: \"7a5cb1ec-767a-42f1-90c7-870910e5e5d9\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mz266" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.925380 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/306cb351-6b97-406a-b3af-741e0d81e630-apiservice-cert\") pod \"packageserver-d55dfcdfc-hx8c6\" (UID: \"306cb351-6b97-406a-b3af-741e0d81e630\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hx8c6" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.925406 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eed419fc-89c2-4aa1-ba4a-4caf47aa0181-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9bzls\" (UID: \"eed419fc-89c2-4aa1-ba4a-4caf47aa0181\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9bzls" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.925423 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/acfaa12b-166f-4c03-a208-6ed705af199d-mountpoint-dir\") pod \"csi-hostpathplugin-4jk72\" (UID: \"acfaa12b-166f-4c03-a208-6ed705af199d\") " pod="hostpath-provisioner/csi-hostpathplugin-4jk72" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.925450 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/24f58977-7eb4-4fb1-a812-8b656753537d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-6x84f\" (UID: \"24f58977-7eb4-4fb1-a812-8b656753537d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6x84f" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.925469 4795 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4f25efaa-5792-4a51-ba83-e8733af29fdf-secret-volume\") pod \"collect-profiles-29406690-dtj9t\" (UID: \"4f25efaa-5792-4a51-ba83-e8733af29fdf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-dtj9t" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.925482 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/acfaa12b-166f-4c03-a208-6ed705af199d-socket-dir\") pod \"csi-hostpathplugin-4jk72\" (UID: \"acfaa12b-166f-4c03-a208-6ed705af199d\") " pod="hostpath-provisioner/csi-hostpathplugin-4jk72" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.925521 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e2701908-11c2-44df-9318-4868e49d7ebc-certs\") pod \"machine-config-server-2fgjg\" (UID: \"e2701908-11c2-44df-9318-4868e49d7ebc\") " pod="openshift-machine-config-operator/machine-config-server-2fgjg" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.925537 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr9g5\" (UniqueName: \"kubernetes.io/projected/be320286-1cef-4b6f-92a0-e9e66a34ad3e-kube-api-access-fr9g5\") pod \"package-server-manager-789f6589d5-qvn2d\" (UID: \"be320286-1cef-4b6f-92a0-e9e66a34ad3e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qvn2d" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.925669 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.925716 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/2fc75ee7-f4b7-40e9-9163-2b5dfac54d24-signing-cabundle\") pod \"service-ca-9c57cc56f-6swbg\" (UID: \"2fc75ee7-f4b7-40e9-9163-2b5dfac54d24\") " pod="openshift-service-ca/service-ca-9c57cc56f-6swbg" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.925741 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5255a440-64ac-4f31-8a9e-4c3688ec128a-proxy-tls\") pod \"machine-config-controller-84d6567774-l2jzc\" (UID: \"5255a440-64ac-4f31-8a9e-4c3688ec128a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l2jzc" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.925775 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7a5cb1ec-767a-42f1-90c7-870910e5e5d9-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mz266\" (UID: \"7a5cb1ec-767a-42f1-90c7-870910e5e5d9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mz266" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.925801 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/acfaa12b-166f-4c03-a208-6ed705af199d-csi-data-dir\") pod \"csi-hostpathplugin-4jk72\" (UID: \"acfaa12b-166f-4c03-a208-6ed705af199d\") " pod="hostpath-provisioner/csi-hostpathplugin-4jk72" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.925824 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.925844 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7a5cb1ec-767a-42f1-90c7-870910e5e5d9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mz266\" (UID: \"7a5cb1ec-767a-42f1-90c7-870910e5e5d9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mz266" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.925872 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpnvv\" (UniqueName: \"kubernetes.io/projected/e2701908-11c2-44df-9318-4868e49d7ebc-kube-api-access-fpnvv\") pod \"machine-config-server-2fgjg\" (UID: \"e2701908-11c2-44df-9318-4868e49d7ebc\") " pod="openshift-machine-config-operator/machine-config-server-2fgjg" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.925903 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2f54\" (UniqueName: \"kubernetes.io/projected/306cb351-6b97-406a-b3af-741e0d81e630-kube-api-access-z2f54\") pod \"packageserver-d55dfcdfc-hx8c6\" (UID: \"306cb351-6b97-406a-b3af-741e0d81e630\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hx8c6" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.925945 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe1144c0-6804-4a78-bad8-3319ddb3c30c-metrics-certs\") pod \"router-default-5444994796-692g9\" (UID: \"fe1144c0-6804-4a78-bad8-3319ddb3c30c\") " 
pod="openshift-ingress/router-default-5444994796-692g9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.926119 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e2701908-11c2-44df-9318-4868e49d7ebc-node-bootstrap-token\") pod \"machine-config-server-2fgjg\" (UID: \"e2701908-11c2-44df-9318-4868e49d7ebc\") " pod="openshift-machine-config-operator/machine-config-server-2fgjg" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.926261 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:38 crc kubenswrapper[4795]: E1129 07:41:38.926431 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:39.426406938 +0000 UTC m=+145.401982808 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.926696 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/24f58977-7eb4-4fb1-a812-8b656753537d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-6x84f\" (UID: \"24f58977-7eb4-4fb1-a812-8b656753537d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6x84f" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.926847 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/34ec03b0-6d17-4934-84fa-4323c7599fd0-audit-policies\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.927334 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/24f58977-7eb4-4fb1-a812-8b656753537d-images\") pod \"machine-config-operator-74547568cd-6x84f\" (UID: \"24f58977-7eb4-4fb1-a812-8b656753537d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6x84f" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.927527 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/34ec03b0-6d17-4934-84fa-4323c7599fd0-audit-dir\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: 
\"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.928339 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/26978e65-c4ba-410c-b61a-9b4157d71e78-etcd-ca\") pod \"etcd-operator-b45778765-mznng\" (UID: \"26978e65-c4ba-410c-b61a-9b4157d71e78\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mznng" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.933645 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/26978e65-c4ba-410c-b61a-9b4157d71e78-etcd-client\") pod \"etcd-operator-b45778765-mznng\" (UID: \"26978e65-c4ba-410c-b61a-9b4157d71e78\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mznng" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.933938 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26978e65-c4ba-410c-b61a-9b4157d71e78-config\") pod \"etcd-operator-b45778765-mznng\" (UID: \"26978e65-c4ba-410c-b61a-9b4157d71e78\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mznng" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.933991 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zk7r\" (UniqueName: \"kubernetes.io/projected/24f58977-7eb4-4fb1-a812-8b656753537d-kube-api-access-7zk7r\") pod \"machine-config-operator-74547568cd-6x84f\" (UID: \"24f58977-7eb4-4fb1-a812-8b656753537d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6x84f" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.934025 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/306cb351-6b97-406a-b3af-741e0d81e630-tmpfs\") pod 
\"packageserver-d55dfcdfc-hx8c6\" (UID: \"306cb351-6b97-406a-b3af-741e0d81e630\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hx8c6" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.934053 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh5xp\" (UniqueName: \"kubernetes.io/projected/26978e65-c4ba-410c-b61a-9b4157d71e78-kube-api-access-kh5xp\") pod \"etcd-operator-b45778765-mznng\" (UID: \"26978e65-c4ba-410c-b61a-9b4157d71e78\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mznng" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.934050 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7a5cb1ec-767a-42f1-90c7-870910e5e5d9-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mz266\" (UID: \"7a5cb1ec-767a-42f1-90c7-870910e5e5d9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mz266" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.934077 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-529k2\" (UniqueName: \"kubernetes.io/projected/3028c8af-2d6e-4674-9ff8-dff8f68c511c-kube-api-access-529k2\") pod \"ingress-canary-twmws\" (UID: \"3028c8af-2d6e-4674-9ff8-dff8f68c511c\") " pod="openshift-ingress-canary/ingress-canary-twmws" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.934121 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15ff94b5-aaa1-4673-8aab-04ed7f44ce88-serving-cert\") pod \"service-ca-operator-777779d784-d7qg2\" (UID: \"15ff94b5-aaa1-4673-8aab-04ed7f44ce88\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d7qg2" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.934142 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" 
(UniqueName: \"kubernetes.io/secret/fe1144c0-6804-4a78-bad8-3319ddb3c30c-stats-auth\") pod \"router-default-5444994796-692g9\" (UID: \"fe1144c0-6804-4a78-bad8-3319ddb3c30c\") " pod="openshift-ingress/router-default-5444994796-692g9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.934256 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gnlv\" (UniqueName: \"kubernetes.io/projected/34ec03b0-6d17-4934-84fa-4323c7599fd0-kube-api-access-9gnlv\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.934305 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdxz9\" (UniqueName: \"kubernetes.io/projected/eed419fc-89c2-4aa1-ba4a-4caf47aa0181-kube-api-access-hdxz9\") pod \"kube-storage-version-migrator-operator-b67b599dd-9bzls\" (UID: \"eed419fc-89c2-4aa1-ba4a-4caf47aa0181\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9bzls" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.934338 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.934349 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: 
\"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.936063 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5550b70a-4101-4b56-8f88-c6339baaf188-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-l2khx\" (UID: \"5550b70a-4101-4b56-8f88-c6339baaf188\") " pod="openshift-marketplace/marketplace-operator-79b997595-l2khx" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.936130 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lxf6\" (UniqueName: \"kubernetes.io/projected/acfaa12b-166f-4c03-a208-6ed705af199d-kube-api-access-7lxf6\") pod \"csi-hostpathplugin-4jk72\" (UID: \"acfaa12b-166f-4c03-a208-6ed705af199d\") " pod="hostpath-provisioner/csi-hostpathplugin-4jk72" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.936167 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.936202 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15ff94b5-aaa1-4673-8aab-04ed7f44ce88-config\") pod \"service-ca-operator-777779d784-d7qg2\" (UID: \"15ff94b5-aaa1-4673-8aab-04ed7f44ce88\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d7qg2" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.936279 4795 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a9829eb2-ba44-41ca-a0f7-fd92d6114927-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-t6scb\" (UID: \"a9829eb2-ba44-41ca-a0f7-fd92d6114927\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t6scb" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.936307 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/94527576-d5a0-4b6e-a3d2-3e1df7494d95-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r76lc\" (UID: \"94527576-d5a0-4b6e-a3d2-3e1df7494d95\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r76lc" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.936335 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8f7s\" (UniqueName: \"kubernetes.io/projected/fe1144c0-6804-4a78-bad8-3319ddb3c30c-kube-api-access-k8f7s\") pod \"router-default-5444994796-692g9\" (UID: \"fe1144c0-6804-4a78-bad8-3319ddb3c30c\") " pod="openshift-ingress/router-default-5444994796-692g9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.936397 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.936418 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2tcj\" (UniqueName: 
\"kubernetes.io/projected/7390eae4-5ab6-4606-90d5-e5468d745208-kube-api-access-s2tcj\") pod \"dns-default-ddtz8\" (UID: \"7390eae4-5ab6-4606-90d5-e5468d745208\") " pod="openshift-dns/dns-default-ddtz8" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.937218 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15ff94b5-aaa1-4673-8aab-04ed7f44ce88-config\") pod \"service-ca-operator-777779d784-d7qg2\" (UID: \"15ff94b5-aaa1-4673-8aab-04ed7f44ce88\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d7qg2" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.937873 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/24f58977-7eb4-4fb1-a812-8b656753537d-proxy-tls\") pod \"machine-config-operator-74547568cd-6x84f\" (UID: \"24f58977-7eb4-4fb1-a812-8b656753537d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6x84f" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.938746 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.944911 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.945949 4795 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.946184 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3028c8af-2d6e-4674-9ff8-dff8f68c511c-cert\") pod \"ingress-canary-twmws\" (UID: \"3028c8af-2d6e-4674-9ff8-dff8f68c511c\") " pod="openshift-ingress-canary/ingress-canary-twmws" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.946473 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26978e65-c4ba-410c-b61a-9b4157d71e78-serving-cert\") pod \"etcd-operator-b45778765-mznng\" (UID: \"26978e65-c4ba-410c-b61a-9b4157d71e78\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mznng" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.946676 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.946778 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e2701908-11c2-44df-9318-4868e49d7ebc-certs\") pod \"machine-config-server-2fgjg\" (UID: \"e2701908-11c2-44df-9318-4868e49d7ebc\") " pod="openshift-machine-config-operator/machine-config-server-2fgjg" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.947400 4795 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.949189 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2fc75ee7-f4b7-40e9-9163-2b5dfac54d24-signing-key\") pod \"service-ca-9c57cc56f-6swbg\" (UID: \"2fc75ee7-f4b7-40e9-9163-2b5dfac54d24\") " pod="openshift-service-ca/service-ca-9c57cc56f-6swbg" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.949871 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.950686 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7a5cb1ec-767a-42f1-90c7-870910e5e5d9-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mz266\" (UID: \"7a5cb1ec-767a-42f1-90c7-870910e5e5d9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mz266" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.952991 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpnvv\" (UniqueName: \"kubernetes.io/projected/e2701908-11c2-44df-9318-4868e49d7ebc-kube-api-access-fpnvv\") pod \"machine-config-server-2fgjg\" (UID: \"e2701908-11c2-44df-9318-4868e49d7ebc\") " pod="openshift-machine-config-operator/machine-config-server-2fgjg" Nov 29 
07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.953135 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl6v7\" (UniqueName: \"kubernetes.io/projected/7a5cb1ec-767a-42f1-90c7-870910e5e5d9-kube-api-access-fl6v7\") pod \"cluster-image-registry-operator-dc59b4c8b-mz266\" (UID: \"7a5cb1ec-767a-42f1-90c7-870910e5e5d9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mz266" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.953195 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.953915 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.954257 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.955069 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/7a5cb1ec-767a-42f1-90c7-870910e5e5d9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mz266\" (UID: \"7a5cb1ec-767a-42f1-90c7-870910e5e5d9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mz266" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.973257 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-529k2\" (UniqueName: \"kubernetes.io/projected/3028c8af-2d6e-4674-9ff8-dff8f68c511c-kube-api-access-529k2\") pod \"ingress-canary-twmws\" (UID: \"3028c8af-2d6e-4674-9ff8-dff8f68c511c\") " pod="openshift-ingress-canary/ingress-canary-twmws" Nov 29 07:41:38 crc kubenswrapper[4795]: I1129 07:41:38.993948 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-w4g7w"] Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.005917 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zk7r\" (UniqueName: \"kubernetes.io/projected/24f58977-7eb4-4fb1-a812-8b656753537d-kube-api-access-7zk7r\") pod \"machine-config-operator-74547568cd-6x84f\" (UID: \"24f58977-7eb4-4fb1-a812-8b656753537d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6x84f" Nov 29 07:41:39 crc kubenswrapper[4795]: W1129 07:41:39.006605 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b02347e_bd6a_4a77_97d4_d8276d1b6167.slice/crio-abc0c8c629c54a627a9a9a6cc91813a8848d0844d2ec5afbb925ebc66a9b34bb WatchSource:0}: Error finding container abc0c8c629c54a627a9a9a6cc91813a8848d0844d2ec5afbb925ebc66a9b34bb: Status 404 returned error can't find the container with id abc0c8c629c54a627a9a9a6cc91813a8848d0844d2ec5afbb925ebc66a9b34bb Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.016230 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh5xp\" (UniqueName: 
\"kubernetes.io/projected/26978e65-c4ba-410c-b61a-9b4157d71e78-kube-api-access-kh5xp\") pod \"etcd-operator-b45778765-mznng\" (UID: \"26978e65-c4ba-410c-b61a-9b4157d71e78\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mznng" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.035741 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gnlv\" (UniqueName: \"kubernetes.io/projected/34ec03b0-6d17-4934-84fa-4323c7599fd0-kube-api-access-9gnlv\") pod \"oauth-openshift-558db77b4-qpsg9\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.038483 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eed419fc-89c2-4aa1-ba4a-4caf47aa0181-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9bzls\" (UID: \"eed419fc-89c2-4aa1-ba4a-4caf47aa0181\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9bzls" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.038520 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/acfaa12b-166f-4c03-a208-6ed705af199d-mountpoint-dir\") pod \"csi-hostpathplugin-4jk72\" (UID: \"acfaa12b-166f-4c03-a208-6ed705af199d\") " pod="hostpath-provisioner/csi-hostpathplugin-4jk72" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.038548 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4f25efaa-5792-4a51-ba83-e8733af29fdf-secret-volume\") pod \"collect-profiles-29406690-dtj9t\" (UID: \"4f25efaa-5792-4a51-ba83-e8733af29fdf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-dtj9t" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.038568 
4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/acfaa12b-166f-4c03-a208-6ed705af199d-socket-dir\") pod \"csi-hostpathplugin-4jk72\" (UID: \"acfaa12b-166f-4c03-a208-6ed705af199d\") " pod="hostpath-provisioner/csi-hostpathplugin-4jk72" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.038573 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv5wf\" (UniqueName: \"kubernetes.io/projected/15ff94b5-aaa1-4673-8aab-04ed7f44ce88-kube-api-access-cv5wf\") pod \"service-ca-operator-777779d784-d7qg2\" (UID: \"15ff94b5-aaa1-4673-8aab-04ed7f44ce88\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d7qg2" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.038616 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr9g5\" (UniqueName: \"kubernetes.io/projected/be320286-1cef-4b6f-92a0-e9e66a34ad3e-kube-api-access-fr9g5\") pod \"package-server-manager-789f6589d5-qvn2d\" (UID: \"be320286-1cef-4b6f-92a0-e9e66a34ad3e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qvn2d" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.038646 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/2fc75ee7-f4b7-40e9-9163-2b5dfac54d24-signing-cabundle\") pod \"service-ca-9c57cc56f-6swbg\" (UID: \"2fc75ee7-f4b7-40e9-9163-2b5dfac54d24\") " pod="openshift-service-ca/service-ca-9c57cc56f-6swbg" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.038670 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5255a440-64ac-4f31-8a9e-4c3688ec128a-proxy-tls\") pod \"machine-config-controller-84d6567774-l2jzc\" (UID: \"5255a440-64ac-4f31-8a9e-4c3688ec128a\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l2jzc" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.038699 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/acfaa12b-166f-4c03-a208-6ed705af199d-csi-data-dir\") pod \"csi-hostpathplugin-4jk72\" (UID: \"acfaa12b-166f-4c03-a208-6ed705af199d\") " pod="hostpath-provisioner/csi-hostpathplugin-4jk72" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.038728 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2f54\" (UniqueName: \"kubernetes.io/projected/306cb351-6b97-406a-b3af-741e0d81e630-kube-api-access-z2f54\") pod \"packageserver-d55dfcdfc-hx8c6\" (UID: \"306cb351-6b97-406a-b3af-741e0d81e630\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hx8c6" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.038751 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe1144c0-6804-4a78-bad8-3319ddb3c30c-metrics-certs\") pod \"router-default-5444994796-692g9\" (UID: \"fe1144c0-6804-4a78-bad8-3319ddb3c30c\") " pod="openshift-ingress/router-default-5444994796-692g9" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.038787 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/306cb351-6b97-406a-b3af-741e0d81e630-tmpfs\") pod \"packageserver-d55dfcdfc-hx8c6\" (UID: \"306cb351-6b97-406a-b3af-741e0d81e630\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hx8c6" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.038811 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fe1144c0-6804-4a78-bad8-3319ddb3c30c-stats-auth\") pod \"router-default-5444994796-692g9\" (UID: 
\"fe1144c0-6804-4a78-bad8-3319ddb3c30c\") " pod="openshift-ingress/router-default-5444994796-692g9" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.038839 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5550b70a-4101-4b56-8f88-c6339baaf188-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-l2khx\" (UID: \"5550b70a-4101-4b56-8f88-c6339baaf188\") " pod="openshift-marketplace/marketplace-operator-79b997595-l2khx" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.038865 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lxf6\" (UniqueName: \"kubernetes.io/projected/acfaa12b-166f-4c03-a208-6ed705af199d-kube-api-access-7lxf6\") pod \"csi-hostpathplugin-4jk72\" (UID: \"acfaa12b-166f-4c03-a208-6ed705af199d\") " pod="hostpath-provisioner/csi-hostpathplugin-4jk72" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.038886 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdxz9\" (UniqueName: \"kubernetes.io/projected/eed419fc-89c2-4aa1-ba4a-4caf47aa0181-kube-api-access-hdxz9\") pod \"kube-storage-version-migrator-operator-b67b599dd-9bzls\" (UID: \"eed419fc-89c2-4aa1-ba4a-4caf47aa0181\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9bzls" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.038914 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a9829eb2-ba44-41ca-a0f7-fd92d6114927-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-t6scb\" (UID: \"a9829eb2-ba44-41ca-a0f7-fd92d6114927\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t6scb" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.038943 
4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/94527576-d5a0-4b6e-a3d2-3e1df7494d95-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r76lc\" (UID: \"94527576-d5a0-4b6e-a3d2-3e1df7494d95\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r76lc" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.038962 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/acfaa12b-166f-4c03-a208-6ed705af199d-socket-dir\") pod \"csi-hostpathplugin-4jk72\" (UID: \"acfaa12b-166f-4c03-a208-6ed705af199d\") " pod="hostpath-provisioner/csi-hostpathplugin-4jk72" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.038980 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8f7s\" (UniqueName: \"kubernetes.io/projected/fe1144c0-6804-4a78-bad8-3319ddb3c30c-kube-api-access-k8f7s\") pod \"router-default-5444994796-692g9\" (UID: \"fe1144c0-6804-4a78-bad8-3319ddb3c30c\") " pod="openshift-ingress/router-default-5444994796-692g9" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.039006 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2tcj\" (UniqueName: \"kubernetes.io/projected/7390eae4-5ab6-4606-90d5-e5468d745208-kube-api-access-s2tcj\") pod \"dns-default-ddtz8\" (UID: \"7390eae4-5ab6-4606-90d5-e5468d745208\") " pod="openshift-dns/dns-default-ddtz8" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.039031 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/acfaa12b-166f-4c03-a208-6ed705af199d-registration-dir\") pod \"csi-hostpathplugin-4jk72\" (UID: \"acfaa12b-166f-4c03-a208-6ed705af199d\") " pod="hostpath-provisioner/csi-hostpathplugin-4jk72" Nov 29 07:41:39 crc 
kubenswrapper[4795]: I1129 07:41:39.039054 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe1144c0-6804-4a78-bad8-3319ddb3c30c-service-ca-bundle\") pod \"router-default-5444994796-692g9\" (UID: \"fe1144c0-6804-4a78-bad8-3319ddb3c30c\") " pod="openshift-ingress/router-default-5444994796-692g9" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.039079 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/be320286-1cef-4b6f-92a0-e9e66a34ad3e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qvn2d\" (UID: \"be320286-1cef-4b6f-92a0-e9e66a34ad3e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qvn2d" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.039105 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94527576-d5a0-4b6e-a3d2-3e1df7494d95-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r76lc\" (UID: \"94527576-d5a0-4b6e-a3d2-3e1df7494d95\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r76lc" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.039132 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2265\" (UniqueName: \"kubernetes.io/projected/3afcac64-b24d-4150-a997-123947d36f3a-kube-api-access-t2265\") pod \"migrator-59844c95c7-5g9td\" (UID: \"3afcac64-b24d-4150-a997-123947d36f3a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5g9td" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.039158 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/5255a440-64ac-4f31-8a9e-4c3688ec128a-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-l2jzc\" (UID: \"5255a440-64ac-4f31-8a9e-4c3688ec128a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l2jzc" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.039183 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2aba5f59-2760-4b54-94a0-c051d081ce70-srv-cert\") pod \"catalog-operator-68c6474976-bf7sx\" (UID: \"2aba5f59-2760-4b54-94a0-c051d081ce70\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bf7sx" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.039246 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nht7v\" (UniqueName: \"kubernetes.io/projected/2aba5f59-2760-4b54-94a0-c051d081ce70-kube-api-access-nht7v\") pod \"catalog-operator-68c6474976-bf7sx\" (UID: \"2aba5f59-2760-4b54-94a0-c051d081ce70\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bf7sx" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.039282 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/acfaa12b-166f-4c03-a208-6ed705af199d-plugins-dir\") pod \"csi-hostpathplugin-4jk72\" (UID: \"acfaa12b-166f-4c03-a208-6ed705af199d\") " pod="hostpath-provisioner/csi-hostpathplugin-4jk72" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.039310 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ecf9ad47-4004-45d4-84e6-986e63258092-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-blw2q\" (UID: \"ecf9ad47-4004-45d4-84e6-986e63258092\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-blw2q" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.039337 4795 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f54838e7-e080-4c36-9f22-37173dca9044-srv-cert\") pod \"olm-operator-6b444d44fb-tgqrw\" (UID: \"f54838e7-e080-4c36-9f22-37173dca9044\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tgqrw" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.039363 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2aba5f59-2760-4b54-94a0-c051d081ce70-profile-collector-cert\") pod \"catalog-operator-68c6474976-bf7sx\" (UID: \"2aba5f59-2760-4b54-94a0-c051d081ce70\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bf7sx" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.039386 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f25efaa-5792-4a51-ba83-e8733af29fdf-config-volume\") pod \"collect-profiles-29406690-dtj9t\" (UID: \"4f25efaa-5792-4a51-ba83-e8733af29fdf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-dtj9t" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.039409 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fe1144c0-6804-4a78-bad8-3319ddb3c30c-default-certificate\") pod \"router-default-5444994796-692g9\" (UID: \"fe1144c0-6804-4a78-bad8-3319ddb3c30c\") " pod="openshift-ingress/router-default-5444994796-692g9" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.039448 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4336a365-68f2-4aa1-9818-efdd9ab0b9f8-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2zjvc\" (UID: \"4336a365-68f2-4aa1-9818-efdd9ab0b9f8\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2zjvc" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.039479 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjhhf\" (UniqueName: \"kubernetes.io/projected/4f25efaa-5792-4a51-ba83-e8733af29fdf-kube-api-access-bjhhf\") pod \"collect-profiles-29406690-dtj9t\" (UID: \"4f25efaa-5792-4a51-ba83-e8733af29fdf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-dtj9t" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.039502 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz574\" (UniqueName: \"kubernetes.io/projected/f54838e7-e080-4c36-9f22-37173dca9044-kube-api-access-gz574\") pod \"olm-operator-6b444d44fb-tgqrw\" (UID: \"f54838e7-e080-4c36-9f22-37173dca9044\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tgqrw" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.039526 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7390eae4-5ab6-4606-90d5-e5468d745208-config-volume\") pod \"dns-default-ddtz8\" (UID: \"7390eae4-5ab6-4606-90d5-e5468d745208\") " pod="openshift-dns/dns-default-ddtz8" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.039550 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fp5n\" (UniqueName: \"kubernetes.io/projected/2fc75ee7-f4b7-40e9-9163-2b5dfac54d24-kube-api-access-8fp5n\") pod \"service-ca-9c57cc56f-6swbg\" (UID: \"2fc75ee7-f4b7-40e9-9163-2b5dfac54d24\") " pod="openshift-service-ca/service-ca-9c57cc56f-6swbg" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.039574 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94527576-d5a0-4b6e-a3d2-3e1df7494d95-config\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-r76lc\" (UID: \"94527576-d5a0-4b6e-a3d2-3e1df7494d95\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r76lc" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.039633 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct24d\" (UniqueName: \"kubernetes.io/projected/ecf9ad47-4004-45d4-84e6-986e63258092-kube-api-access-ct24d\") pod \"multus-admission-controller-857f4d67dd-blw2q\" (UID: \"ecf9ad47-4004-45d4-84e6-986e63258092\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-blw2q" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.039659 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4336a365-68f2-4aa1-9818-efdd9ab0b9f8-config\") pod \"kube-controller-manager-operator-78b949d7b-2zjvc\" (UID: \"4336a365-68f2-4aa1-9818-efdd9ab0b9f8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2zjvc" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.039682 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7390eae4-5ab6-4606-90d5-e5468d745208-metrics-tls\") pod \"dns-default-ddtz8\" (UID: \"7390eae4-5ab6-4606-90d5-e5468d745208\") " pod="openshift-dns/dns-default-ddtz8" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.039689 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/2fc75ee7-f4b7-40e9-9163-2b5dfac54d24-signing-cabundle\") pod \"service-ca-9c57cc56f-6swbg\" (UID: \"2fc75ee7-f4b7-40e9-9163-2b5dfac54d24\") " pod="openshift-service-ca/service-ca-9c57cc56f-6swbg" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.039710 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/3ca85e73-a2e9-416e-9542-5ea8f2f3aea5-metrics-tls\") pod \"dns-operator-744455d44c-lwxtb\" (UID: \"3ca85e73-a2e9-416e-9542-5ea8f2f3aea5\") " pod="openshift-dns-operator/dns-operator-744455d44c-lwxtb" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.039735 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz874\" (UniqueName: \"kubernetes.io/projected/3ca85e73-a2e9-416e-9542-5ea8f2f3aea5-kube-api-access-fz874\") pod \"dns-operator-744455d44c-lwxtb\" (UID: \"3ca85e73-a2e9-416e-9542-5ea8f2f3aea5\") " pod="openshift-dns-operator/dns-operator-744455d44c-lwxtb" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.039761 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eed419fc-89c2-4aa1-ba4a-4caf47aa0181-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9bzls\" (UID: \"eed419fc-89c2-4aa1-ba4a-4caf47aa0181\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9bzls" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.039796 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4336a365-68f2-4aa1-9818-efdd9ab0b9f8-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2zjvc\" (UID: \"4336a365-68f2-4aa1-9818-efdd9ab0b9f8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2zjvc" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.039818 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/acfaa12b-166f-4c03-a208-6ed705af199d-csi-data-dir\") pod \"csi-hostpathplugin-4jk72\" (UID: \"acfaa12b-166f-4c03-a208-6ed705af199d\") " pod="hostpath-provisioner/csi-hostpathplugin-4jk72" Nov 29 07:41:39 crc 
kubenswrapper[4795]: I1129 07:41:39.039835 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5550b70a-4101-4b56-8f88-c6339baaf188-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-l2khx\" (UID: \"5550b70a-4101-4b56-8f88-c6339baaf188\") " pod="openshift-marketplace/marketplace-operator-79b997595-l2khx" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.039863 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qx8f\" (UniqueName: \"kubernetes.io/projected/a9829eb2-ba44-41ca-a0f7-fd92d6114927-kube-api-access-9qx8f\") pod \"control-plane-machine-set-operator-78cbb6b69f-t6scb\" (UID: \"a9829eb2-ba44-41ca-a0f7-fd92d6114927\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t6scb" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.039888 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5kpq\" (UniqueName: \"kubernetes.io/projected/5550b70a-4101-4b56-8f88-c6339baaf188-kube-api-access-c5kpq\") pod \"marketplace-operator-79b997595-l2khx\" (UID: \"5550b70a-4101-4b56-8f88-c6339baaf188\") " pod="openshift-marketplace/marketplace-operator-79b997595-l2khx" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.039911 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f54838e7-e080-4c36-9f22-37173dca9044-profile-collector-cert\") pod \"olm-operator-6b444d44fb-tgqrw\" (UID: \"f54838e7-e080-4c36-9f22-37173dca9044\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tgqrw" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.039936 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/306cb351-6b97-406a-b3af-741e0d81e630-webhook-cert\") pod 
\"packageserver-d55dfcdfc-hx8c6\" (UID: \"306cb351-6b97-406a-b3af-741e0d81e630\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hx8c6" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.039970 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.039997 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg859\" (UniqueName: \"kubernetes.io/projected/5255a440-64ac-4f31-8a9e-4c3688ec128a-kube-api-access-fg859\") pod \"machine-config-controller-84d6567774-l2jzc\" (UID: \"5255a440-64ac-4f31-8a9e-4c3688ec128a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l2jzc" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.040026 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/306cb351-6b97-406a-b3af-741e0d81e630-apiservice-cert\") pod \"packageserver-d55dfcdfc-hx8c6\" (UID: \"306cb351-6b97-406a-b3af-741e0d81e630\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hx8c6" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.042099 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6x84f" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.038647 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/acfaa12b-166f-4c03-a208-6ed705af199d-mountpoint-dir\") pod \"csi-hostpathplugin-4jk72\" (UID: \"acfaa12b-166f-4c03-a208-6ed705af199d\") " pod="hostpath-provisioner/csi-hostpathplugin-4jk72" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.043424 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5255a440-64ac-4f31-8a9e-4c3688ec128a-proxy-tls\") pod \"machine-config-controller-84d6567774-l2jzc\" (UID: \"5255a440-64ac-4f31-8a9e-4c3688ec128a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l2jzc" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.044435 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/306cb351-6b97-406a-b3af-741e0d81e630-apiservice-cert\") pod \"packageserver-d55dfcdfc-hx8c6\" (UID: \"306cb351-6b97-406a-b3af-741e0d81e630\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hx8c6" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.044792 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fe1144c0-6804-4a78-bad8-3319ddb3c30c-stats-auth\") pod \"router-default-5444994796-692g9\" (UID: \"fe1144c0-6804-4a78-bad8-3319ddb3c30c\") " pod="openshift-ingress/router-default-5444994796-692g9" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.045561 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/306cb351-6b97-406a-b3af-741e0d81e630-tmpfs\") pod \"packageserver-d55dfcdfc-hx8c6\" (UID: 
\"306cb351-6b97-406a-b3af-741e0d81e630\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hx8c6" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.046832 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5255a440-64ac-4f31-8a9e-4c3688ec128a-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-l2jzc\" (UID: \"5255a440-64ac-4f31-8a9e-4c3688ec128a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l2jzc" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.047509 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/be320286-1cef-4b6f-92a0-e9e66a34ad3e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qvn2d\" (UID: \"be320286-1cef-4b6f-92a0-e9e66a34ad3e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qvn2d" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.047583 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7390eae4-5ab6-4606-90d5-e5468d745208-config-volume\") pod \"dns-default-ddtz8\" (UID: \"7390eae4-5ab6-4606-90d5-e5468d745208\") " pod="openshift-dns/dns-default-ddtz8" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.047701 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eed419fc-89c2-4aa1-ba4a-4caf47aa0181-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9bzls\" (UID: \"eed419fc-89c2-4aa1-ba4a-4caf47aa0181\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9bzls" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.047720 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"default-certificate\" (UniqueName: \"kubernetes.io/secret/fe1144c0-6804-4a78-bad8-3319ddb3c30c-default-certificate\") pod \"router-default-5444994796-692g9\" (UID: \"fe1144c0-6804-4a78-bad8-3319ddb3c30c\") " pod="openshift-ingress/router-default-5444994796-692g9" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.047760 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/acfaa12b-166f-4c03-a208-6ed705af199d-registration-dir\") pod \"csi-hostpathplugin-4jk72\" (UID: \"acfaa12b-166f-4c03-a208-6ed705af199d\") " pod="hostpath-provisioner/csi-hostpathplugin-4jk72" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.048266 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4336a365-68f2-4aa1-9818-efdd9ab0b9f8-config\") pod \"kube-controller-manager-operator-78b949d7b-2zjvc\" (UID: \"4336a365-68f2-4aa1-9818-efdd9ab0b9f8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2zjvc" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.049673 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eed419fc-89c2-4aa1-ba4a-4caf47aa0181-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9bzls\" (UID: \"eed419fc-89c2-4aa1-ba4a-4caf47aa0181\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9bzls" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.050340 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/acfaa12b-166f-4c03-a208-6ed705af199d-plugins-dir\") pod \"csi-hostpathplugin-4jk72\" (UID: \"acfaa12b-166f-4c03-a208-6ed705af199d\") " pod="hostpath-provisioner/csi-hostpathplugin-4jk72" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.050432 
4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe1144c0-6804-4a78-bad8-3319ddb3c30c-metrics-certs\") pod \"router-default-5444994796-692g9\" (UID: \"fe1144c0-6804-4a78-bad8-3319ddb3c30c\") " pod="openshift-ingress/router-default-5444994796-692g9" Nov 29 07:41:39 crc kubenswrapper[4795]: E1129 07:41:39.050973 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:39.550957881 +0000 UTC m=+145.526533671 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.051561 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4336a365-68f2-4aa1-9818-efdd9ab0b9f8-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2zjvc\" (UID: \"4336a365-68f2-4aa1-9818-efdd9ab0b9f8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2zjvc" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.051690 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/306cb351-6b97-406a-b3af-741e0d81e630-webhook-cert\") pod \"packageserver-d55dfcdfc-hx8c6\" (UID: \"306cb351-6b97-406a-b3af-741e0d81e630\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hx8c6" Nov 
29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.052650 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94527576-d5a0-4b6e-a3d2-3e1df7494d95-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r76lc\" (UID: \"94527576-d5a0-4b6e-a3d2-3e1df7494d95\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r76lc" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.052767 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4f25efaa-5792-4a51-ba83-e8733af29fdf-secret-volume\") pod \"collect-profiles-29406690-dtj9t\" (UID: \"4f25efaa-5792-4a51-ba83-e8733af29fdf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-dtj9t" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.052783 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5550b70a-4101-4b56-8f88-c6339baaf188-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-l2khx\" (UID: \"5550b70a-4101-4b56-8f88-c6339baaf188\") " pod="openshift-marketplace/marketplace-operator-79b997595-l2khx" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.052906 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2aba5f59-2760-4b54-94a0-c051d081ce70-profile-collector-cert\") pod \"catalog-operator-68c6474976-bf7sx\" (UID: \"2aba5f59-2760-4b54-94a0-c051d081ce70\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bf7sx" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.053656 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe1144c0-6804-4a78-bad8-3319ddb3c30c-service-ca-bundle\") pod \"router-default-5444994796-692g9\" 
(UID: \"fe1144c0-6804-4a78-bad8-3319ddb3c30c\") " pod="openshift-ingress/router-default-5444994796-692g9" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.053851 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5550b70a-4101-4b56-8f88-c6339baaf188-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-l2khx\" (UID: \"5550b70a-4101-4b56-8f88-c6339baaf188\") " pod="openshift-marketplace/marketplace-operator-79b997595-l2khx" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.054408 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f54838e7-e080-4c36-9f22-37173dca9044-srv-cert\") pod \"olm-operator-6b444d44fb-tgqrw\" (UID: \"f54838e7-e080-4c36-9f22-37173dca9044\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tgqrw" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.055438 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a9829eb2-ba44-41ca-a0f7-fd92d6114927-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-t6scb\" (UID: \"a9829eb2-ba44-41ca-a0f7-fd92d6114927\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t6scb" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.057289 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2aba5f59-2760-4b54-94a0-c051d081ce70-srv-cert\") pod \"catalog-operator-68c6474976-bf7sx\" (UID: \"2aba5f59-2760-4b54-94a0-c051d081ce70\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bf7sx" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.057646 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/3ca85e73-a2e9-416e-9542-5ea8f2f3aea5-metrics-tls\") pod \"dns-operator-744455d44c-lwxtb\" (UID: \"3ca85e73-a2e9-416e-9542-5ea8f2f3aea5\") " pod="openshift-dns-operator/dns-operator-744455d44c-lwxtb" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.057847 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f25efaa-5792-4a51-ba83-e8733af29fdf-config-volume\") pod \"collect-profiles-29406690-dtj9t\" (UID: \"4f25efaa-5792-4a51-ba83-e8733af29fdf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-dtj9t" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.058011 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94527576-d5a0-4b6e-a3d2-3e1df7494d95-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r76lc\" (UID: \"94527576-d5a0-4b6e-a3d2-3e1df7494d95\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r76lc" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.059361 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7390eae4-5ab6-4606-90d5-e5468d745208-metrics-tls\") pod \"dns-default-ddtz8\" (UID: \"7390eae4-5ab6-4606-90d5-e5468d745208\") " pod="openshift-dns/dns-default-ddtz8" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.059698 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ecf9ad47-4004-45d4-84e6-986e63258092-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-blw2q\" (UID: \"ecf9ad47-4004-45d4-84e6-986e63258092\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-blw2q" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.059979 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f54838e7-e080-4c36-9f22-37173dca9044-profile-collector-cert\") pod \"olm-operator-6b444d44fb-tgqrw\" (UID: \"f54838e7-e080-4c36-9f22-37173dca9044\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tgqrw" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.076011 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr9g5\" (UniqueName: \"kubernetes.io/projected/be320286-1cef-4b6f-92a0-e9e66a34ad3e-kube-api-access-fr9g5\") pod \"package-server-manager-789f6589d5-qvn2d\" (UID: \"be320286-1cef-4b6f-92a0-e9e66a34ad3e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qvn2d" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.104133 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2f54\" (UniqueName: \"kubernetes.io/projected/306cb351-6b97-406a-b3af-741e0d81e630-kube-api-access-z2f54\") pod \"packageserver-d55dfcdfc-hx8c6\" (UID: \"306cb351-6b97-406a-b3af-741e0d81e630\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hx8c6" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.108852 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4pdj"] Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.112772 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-d7qg2" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.114480 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jttv5"] Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.130500 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2fgjg" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.137310 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nht7v\" (UniqueName: \"kubernetes.io/projected/2aba5f59-2760-4b54-94a0-c051d081ce70-kube-api-access-nht7v\") pod \"catalog-operator-68c6474976-bf7sx\" (UID: \"2aba5f59-2760-4b54-94a0-c051d081ce70\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bf7sx" Nov 29 07:41:39 crc kubenswrapper[4795]: W1129 07:41:39.138386 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf773d9ba_78f9_4b56_8e33_706ee34ad32a.slice/crio-caf8af00145a99d63fc312798093fc88d86c9fb61c9b83380c85a5fab9ac6c5a WatchSource:0}: Error finding container caf8af00145a99d63fc312798093fc88d86c9fb61c9b83380c85a5fab9ac6c5a: Status 404 returned error can't find the container with id caf8af00145a99d63fc312798093fc88d86c9fb61c9b83380c85a5fab9ac6c5a Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.139455 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-twmws" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.140628 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:39 crc kubenswrapper[4795]: E1129 07:41:39.141034 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-29 07:41:39.641012681 +0000 UTC m=+145.616588511 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.141250 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:39 crc kubenswrapper[4795]: E1129 07:41:39.160130 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:39.660106034 +0000 UTC m=+145.635681824 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.160359 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz874\" (UniqueName: \"kubernetes.io/projected/3ca85e73-a2e9-416e-9542-5ea8f2f3aea5-kube-api-access-fz874\") pod \"dns-operator-744455d44c-lwxtb\" (UID: \"3ca85e73-a2e9-416e-9542-5ea8f2f3aea5\") " pod="openshift-dns-operator/dns-operator-744455d44c-lwxtb" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.185192 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjhhf\" (UniqueName: \"kubernetes.io/projected/4f25efaa-5792-4a51-ba83-e8733af29fdf-kube-api-access-bjhhf\") pod \"collect-profiles-29406690-dtj9t\" (UID: \"4f25efaa-5792-4a51-ba83-e8733af29fdf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-dtj9t" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.186043 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.189618 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-wbcfc"] Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.190044 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz574\" (UniqueName: \"kubernetes.io/projected/f54838e7-e080-4c36-9f22-37173dca9044-kube-api-access-gz574\") pod \"olm-operator-6b444d44fb-tgqrw\" (UID: \"f54838e7-e080-4c36-9f22-37173dca9044\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tgqrw" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.201530 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mz266" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.203763 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nlnvc"] Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.205820 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d2wzd"] Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.228759 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5kpq\" (UniqueName: \"kubernetes.io/projected/5550b70a-4101-4b56-8f88-c6339baaf188-kube-api-access-c5kpq\") pod \"marketplace-operator-79b997595-l2khx\" (UID: \"5550b70a-4101-4b56-8f88-c6339baaf188\") " pod="openshift-marketplace/marketplace-operator-79b997595-l2khx" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.240273 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2265\" (UniqueName: 
\"kubernetes.io/projected/3afcac64-b24d-4150-a997-123947d36f3a-kube-api-access-t2265\") pod \"migrator-59844c95c7-5g9td\" (UID: \"3afcac64-b24d-4150-a997-123947d36f3a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5g9td" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.248780 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:39 crc kubenswrapper[4795]: E1129 07:41:39.249171 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:39.749153736 +0000 UTC m=+145.724729526 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.253883 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zfqtn"] Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.269062 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-mznng" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.287191 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-lwxtb" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.302836 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fp5n\" (UniqueName: \"kubernetes.io/projected/2fc75ee7-f4b7-40e9-9163-2b5dfac54d24-kube-api-access-8fp5n\") pod \"service-ca-9c57cc56f-6swbg\" (UID: \"2fc75ee7-f4b7-40e9-9163-2b5dfac54d24\") " pod="openshift-service-ca/service-ca-9c57cc56f-6swbg" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.304035 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6x84f"] Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.309930 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5g9td" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.322270 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8f7s\" (UniqueName: \"kubernetes.io/projected/fe1144c0-6804-4a78-bad8-3319ddb3c30c-kube-api-access-k8f7s\") pod \"router-default-5444994796-692g9\" (UID: \"fe1144c0-6804-4a78-bad8-3319ddb3c30c\") " pod="openshift-ingress/router-default-5444994796-692g9" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.331336 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2tcj\" (UniqueName: \"kubernetes.io/projected/7390eae4-5ab6-4606-90d5-e5468d745208-kube-api-access-s2tcj\") pod \"dns-default-ddtz8\" (UID: \"7390eae4-5ab6-4606-90d5-e5468d745208\") " pod="openshift-dns/dns-default-ddtz8" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.350750 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:39 crc kubenswrapper[4795]: E1129 07:41:39.351245 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:39.851227632 +0000 UTC m=+145.826803422 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.352574 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-d7qg2"] Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.354625 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-l2khx" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.356536 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4336a365-68f2-4aa1-9818-efdd9ab0b9f8-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2zjvc\" (UID: \"4336a365-68f2-4aa1-9818-efdd9ab0b9f8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2zjvc" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.359323 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tgqrw" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.367201 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hx8c6" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.370225 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdxz9\" (UniqueName: \"kubernetes.io/projected/eed419fc-89c2-4aa1-ba4a-4caf47aa0181-kube-api-access-hdxz9\") pod \"kube-storage-version-migrator-operator-b67b599dd-9bzls\" (UID: \"eed419fc-89c2-4aa1-ba4a-4caf47aa0181\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9bzls" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.374074 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qvn2d" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.380988 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-6swbg" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.388624 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bf7sx" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.389951 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lxf6\" (UniqueName: \"kubernetes.io/projected/acfaa12b-166f-4c03-a208-6ed705af199d-kube-api-access-7lxf6\") pod \"csi-hostpathplugin-4jk72\" (UID: \"acfaa12b-166f-4c03-a208-6ed705af199d\") " pod="hostpath-provisioner/csi-hostpathplugin-4jk72" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.393144 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-twmws"] Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.409916 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qx8f\" (UniqueName: \"kubernetes.io/projected/a9829eb2-ba44-41ca-a0f7-fd92d6114927-kube-api-access-9qx8f\") pod \"control-plane-machine-set-operator-78cbb6b69f-t6scb\" (UID: \"a9829eb2-ba44-41ca-a0f7-fd92d6114927\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t6scb" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.421790 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-dtj9t" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.430138 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg859\" (UniqueName: \"kubernetes.io/projected/5255a440-64ac-4f31-8a9e-4c3688ec128a-kube-api-access-fg859\") pod \"machine-config-controller-84d6567774-l2jzc\" (UID: \"5255a440-64ac-4f31-8a9e-4c3688ec128a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l2jzc" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.451654 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:39 crc kubenswrapper[4795]: E1129 07:41:39.451796 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:39.951773535 +0000 UTC m=+145.927349325 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.451949 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:39 crc kubenswrapper[4795]: E1129 07:41:39.452254 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:39.952247159 +0000 UTC m=+145.927822949 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.457886 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-4jk72" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.552737 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:39 crc kubenswrapper[4795]: E1129 07:41:39.552923 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:40.052896985 +0000 UTC m=+146.028472775 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.553059 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:39 crc kubenswrapper[4795]: E1129 07:41:39.553339 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2025-11-29 07:41:40.053328077 +0000 UTC m=+146.028903867 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.580949 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2zjvc" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.595216 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-692g9" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.602432 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-ddtz8" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.626329 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9bzls" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.632675 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l2jzc" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.654661 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:39 crc kubenswrapper[4795]: E1129 07:41:39.655056 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:40.154969291 +0000 UTC m=+146.130545121 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.655419 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:39 crc kubenswrapper[4795]: E1129 07:41:39.657939 4795 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:40.157917023 +0000 UTC m=+146.133492843 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.700688 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct24d\" (UniqueName: \"kubernetes.io/projected/ecf9ad47-4004-45d4-84e6-986e63258092-kube-api-access-ct24d\") pod \"multus-admission-controller-857f4d67dd-blw2q\" (UID: \"ecf9ad47-4004-45d4-84e6-986e63258092\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-blw2q" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.703003 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/94527576-d5a0-4b6e-a3d2-3e1df7494d95-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r76lc\" (UID: \"94527576-d5a0-4b6e-a3d2-3e1df7494d95\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r76lc" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.704052 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t6scb" Nov 29 07:41:39 crc kubenswrapper[4795]: W1129 07:41:39.734119 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod094b43dd_ae40_43b1_824b_a8dd47bc9693.slice/crio-1205f5bd1f9778af8c8c99718d792f1e27817e28df3ca87bfd5c15d77dc46ba5 WatchSource:0}: Error finding container 1205f5bd1f9778af8c8c99718d792f1e27817e28df3ca87bfd5c15d77dc46ba5: Status 404 returned error can't find the container with id 1205f5bd1f9778af8c8c99718d792f1e27817e28df3ca87bfd5c15d77dc46ba5 Nov 29 07:41:39 crc kubenswrapper[4795]: W1129 07:41:39.739509 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24f58977_7eb4_4fb1_a812_8b656753537d.slice/crio-ea46fbe578b7b48e690c9236bbe317ad8a1b797882df9856d3da770136dbe615 WatchSource:0}: Error finding container ea46fbe578b7b48e690c9236bbe317ad8a1b797882df9856d3da770136dbe615: Status 404 returned error can't find the container with id ea46fbe578b7b48e690c9236bbe317ad8a1b797882df9856d3da770136dbe615 Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.761527 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:39 crc kubenswrapper[4795]: E1129 07:41:39.762295 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:40.262275262 +0000 UTC m=+146.237851082 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.871990 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:39 crc kubenswrapper[4795]: E1129 07:41:39.872552 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:40.372536356 +0000 UTC m=+146.348112146 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.892468 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-w4g7w" event={"ID":"9b02347e-bd6a-4a77-97d4-d8276d1b6167","Type":"ContainerStarted","Data":"abc0c8c629c54a627a9a9a6cc91813a8848d0844d2ec5afbb925ebc66a9b34bb"} Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.893492 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-twmws" event={"ID":"3028c8af-2d6e-4674-9ff8-dff8f68c511c","Type":"ContainerStarted","Data":"fe5cd0085bea196387ae306346115a54b9ed32a6de72148956560048d16fe99e"} Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.894606 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4pdj" event={"ID":"1cefe5b8-af1f-4518-a53d-6a0151af7517","Type":"ContainerStarted","Data":"a6b2d88f06aa3d145ffd3176d445a49062300ceade6f37c678647e8bea10be87"} Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.898743 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2fgjg" event={"ID":"e2701908-11c2-44df-9318-4868e49d7ebc","Type":"ContainerStarted","Data":"835bb192c46e07f7d3f9ac67c54c0ebb2e2cb1698afbe848605cdeb4d13d669e"} Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.899757 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-d7qg2" 
event={"ID":"15ff94b5-aaa1-4673-8aab-04ed7f44ce88","Type":"ContainerStarted","Data":"e07782764438f399bb09506dee9943eb7ab70271bb3bb37671ecb1d1c59ab4fe"} Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.900498 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jttv5" event={"ID":"f773d9ba-78f9-4b56-8e33-706ee34ad32a","Type":"ContainerStarted","Data":"caf8af00145a99d63fc312798093fc88d86c9fb61c9b83380c85a5fab9ac6c5a"} Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.901312 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-wbcfc" event={"ID":"ae857e30-60ec-4e9f-867b-da07f179df65","Type":"ContainerStarted","Data":"a23c5b0355eeab545f93075c8eee8d83195d081201d85964c5ed34cafc8225e0"} Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.902088 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-klnm7" event={"ID":"892d0338-c59f-481e-8d70-3143d4954f38","Type":"ContainerStarted","Data":"8322c5794a73ebbb1f45ffaac5d4949d41062ad6e41769261b6bdcc9aa1b596c"} Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.903187 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6x84f" event={"ID":"24f58977-7eb4-4fb1-a812-8b656753537d","Type":"ContainerStarted","Data":"ea46fbe578b7b48e690c9236bbe317ad8a1b797882df9856d3da770136dbe615"} Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.904303 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zfqtn" event={"ID":"094b43dd-ae40-43b1-824b-a8dd47bc9693","Type":"ContainerStarted","Data":"1205f5bd1f9778af8c8c99718d792f1e27817e28df3ca87bfd5c15d77dc46ba5"} Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.905116 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nlnvc" event={"ID":"768de086-2681-4cee-b71f-a732b317fc64","Type":"ContainerStarted","Data":"f5e83ceef6187d6422bf5bdd3cb2af3d6316ed40ad0feb8e16ac47666d80a6c0"} Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.905355 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-sjwfz" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.906855 4795 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-sjwfz container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.906900 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-sjwfz" podUID="099aeef3-9f86-47a1-bc18-3784b8e87bcd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.918664 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r76lc" Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.943034 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qpsg9"] Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.973435 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:39 crc kubenswrapper[4795]: E1129 07:41:39.973619 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:40.473581253 +0000 UTC m=+146.449157043 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.973785 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:39 crc kubenswrapper[4795]: E1129 07:41:39.974844 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:40.474833498 +0000 UTC m=+146.450409288 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:39 crc kubenswrapper[4795]: I1129 07:41:39.996480 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-blw2q" Nov 29 07:41:40 crc kubenswrapper[4795]: I1129 07:41:40.076110 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:40 crc kubenswrapper[4795]: E1129 07:41:40.076578 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:40.576541143 +0000 UTC m=+146.552116973 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:40 crc kubenswrapper[4795]: I1129 07:41:40.076679 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:40 crc kubenswrapper[4795]: E1129 07:41:40.076970 4795 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:40.576960485 +0000 UTC m=+146.552536275 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:40 crc kubenswrapper[4795]: W1129 07:41:40.115230 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34ec03b0_6d17_4934_84fa_4323c7599fd0.slice/crio-8b4f04abdb491d97dfffa3429e02c871dd9c6f788dbc691b6ab0774379030ef1 WatchSource:0}: Error finding container 8b4f04abdb491d97dfffa3429e02c871dd9c6f788dbc691b6ab0774379030ef1: Status 404 returned error can't find the container with id 8b4f04abdb491d97dfffa3429e02c871dd9c6f788dbc691b6ab0774379030ef1 Nov 29 07:41:40 crc kubenswrapper[4795]: I1129 07:41:40.179660 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:40 crc kubenswrapper[4795]: E1129 07:41:40.179925 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-29 07:41:40.679895305 +0000 UTC m=+146.655471095 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:40 crc kubenswrapper[4795]: E1129 07:41:40.182178 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:40.682167838 +0000 UTC m=+146.657743618 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:40 crc kubenswrapper[4795]: I1129 07:41:40.202641 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:40 crc kubenswrapper[4795]: I1129 07:41:40.305888 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:40 crc kubenswrapper[4795]: E1129 07:41:40.306242 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:40.806212997 +0000 UTC m=+146.781788797 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:40 crc kubenswrapper[4795]: I1129 07:41:40.306397 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:40 crc kubenswrapper[4795]: E1129 07:41:40.306801 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:40.806786273 +0000 UTC m=+146.782362063 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:40 crc kubenswrapper[4795]: I1129 07:41:40.307334 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mz266"] Nov 29 07:41:40 crc kubenswrapper[4795]: I1129 07:41:40.409308 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:40 crc kubenswrapper[4795]: E1129 07:41:40.409752 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:40.909720952 +0000 UTC m=+146.885296742 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:40 crc kubenswrapper[4795]: I1129 07:41:40.463602 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t6scb"] Nov 29 07:41:40 crc kubenswrapper[4795]: I1129 07:41:40.511145 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:40 crc kubenswrapper[4795]: E1129 07:41:40.511726 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:41.011713596 +0000 UTC m=+146.987289386 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:40 crc kubenswrapper[4795]: I1129 07:41:40.595729 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-lwxtb"] Nov 29 07:41:40 crc kubenswrapper[4795]: I1129 07:41:40.612435 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bf7sx"] Nov 29 07:41:40 crc kubenswrapper[4795]: I1129 07:41:40.615008 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:40 crc kubenswrapper[4795]: E1129 07:41:40.615481 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:41.115462948 +0000 UTC m=+147.091038748 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:40 crc kubenswrapper[4795]: I1129 07:41:40.699964 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-sjwfz" podStartSLOduration=120.699925143 podStartE2EDuration="2m0.699925143s" podCreationTimestamp="2025-11-29 07:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:40.694840601 +0000 UTC m=+146.670416391" watchObservedRunningTime="2025-11-29 07:41:40.699925143 +0000 UTC m=+146.675500943" Nov 29 07:41:40 crc kubenswrapper[4795]: I1129 07:41:40.719239 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:40 crc kubenswrapper[4795]: E1129 07:41:40.719733 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:41.219718385 +0000 UTC m=+147.195294175 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:40 crc kubenswrapper[4795]: I1129 07:41:40.831309 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:40 crc kubenswrapper[4795]: E1129 07:41:40.831476 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:41.33145281 +0000 UTC m=+147.307028600 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:40 crc kubenswrapper[4795]: I1129 07:41:40.831602 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:40 crc kubenswrapper[4795]: E1129 07:41:40.831866 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:41.331857452 +0000 UTC m=+147.307433242 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:40 crc kubenswrapper[4795]: I1129 07:41:40.910127 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" event={"ID":"34ec03b0-6d17-4934-84fa-4323c7599fd0","Type":"ContainerStarted","Data":"8b4f04abdb491d97dfffa3429e02c871dd9c6f788dbc691b6ab0774379030ef1"} Nov 29 07:41:40 crc kubenswrapper[4795]: I1129 07:41:40.925085 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-w4g7w" event={"ID":"9b02347e-bd6a-4a77-97d4-d8276d1b6167","Type":"ContainerStarted","Data":"988ef1e2a4db717859d2cc1f5d2af7487b8891115d030bac5aa1a59b869dbf18"} Nov 29 07:41:40 crc kubenswrapper[4795]: I1129 07:41:40.939093 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:40 crc kubenswrapper[4795]: E1129 07:41:40.939176 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:41.439152653 +0000 UTC m=+147.414728443 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:40 crc kubenswrapper[4795]: I1129 07:41:40.939413 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:40 crc kubenswrapper[4795]: E1129 07:41:40.939780 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:41.43976592 +0000 UTC m=+147.415341710 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:40 crc kubenswrapper[4795]: I1129 07:41:40.945168 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mz266" event={"ID":"7a5cb1ec-767a-42f1-90c7-870910e5e5d9","Type":"ContainerStarted","Data":"415db437bc54f644961720922188ccab136ccde4e23b1d8906ca2a40902bfd1e"} Nov 29 07:41:40 crc kubenswrapper[4795]: I1129 07:41:40.949391 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-lwxtb" event={"ID":"3ca85e73-a2e9-416e-9542-5ea8f2f3aea5","Type":"ContainerStarted","Data":"ae3fba500a9816807de3c720cf8409be3cd0cce22ac3e1419bdac319d7c17892"} Nov 29 07:41:40 crc kubenswrapper[4795]: I1129 07:41:40.957220 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4pdj" event={"ID":"1cefe5b8-af1f-4518-a53d-6a0151af7517","Type":"ContainerStarted","Data":"ef0a7286bd93cf78ad082130ffd5787a488a6b32f5e45322174910d6cd7d5d61"} Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.032043 4795 generic.go:334] "Generic (PLEG): container finished" podID="b6996662-d230-4299-b913-b6bc38c50ef5" containerID="96af1aa788d79a887d5915722b4c0d8d66e97a7d2fa8401629690a1ccd2b4095" exitCode=0 Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.032763 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-lgd6b" 
event={"ID":"b6996662-d230-4299-b913-b6bc38c50ef5","Type":"ContainerDied","Data":"96af1aa788d79a887d5915722b4c0d8d66e97a7d2fa8401629690a1ccd2b4095"} Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.033862 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-w4g7w" podStartSLOduration=121.033834683 podStartE2EDuration="2m1.033834683s" podCreationTimestamp="2025-11-29 07:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:40.970565929 +0000 UTC m=+146.946141749" watchObservedRunningTime="2025-11-29 07:41:41.033834683 +0000 UTC m=+147.009410473" Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.041035 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:41 crc kubenswrapper[4795]: E1129 07:41:41.042227 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:41.542207906 +0000 UTC m=+147.517783706 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.059368 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t6scb" event={"ID":"a9829eb2-ba44-41ca-a0f7-fd92d6114927","Type":"ContainerStarted","Data":"83edf623b4c07e48bf3c7a4fa5a84bb40c80b8eab8bc77928e8a8b97abb256e3"} Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.070620 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4pdj" podStartSLOduration=121.070579577 podStartE2EDuration="2m1.070579577s" podCreationTimestamp="2025-11-29 07:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:41.03877544 +0000 UTC m=+147.014351230" watchObservedRunningTime="2025-11-29 07:41:41.070579577 +0000 UTC m=+147.046155367" Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.081461 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-692g9" event={"ID":"fe1144c0-6804-4a78-bad8-3319ddb3c30c","Type":"ContainerStarted","Data":"427adc73ac8373a9a2caacb56f8fdce72d18a81e9210181e74b3517b4ec59296"} Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.081837 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-mznng"] Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.094378 4795 
generic.go:334] "Generic (PLEG): container finished" podID="3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd" containerID="e5ccba7814490f48b29cd025ff9969e6d16bf0f6129d4a5faf859abd189760f6" exitCode=0 Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.094445 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6w84b" event={"ID":"3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd","Type":"ContainerDied","Data":"e5ccba7814490f48b29cd025ff9969e6d16bf0f6129d4a5faf859abd189760f6"} Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.095804 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d2wzd" event={"ID":"d79589e5-c434-4157-8cfd-a51e92aa0c2f","Type":"ContainerStarted","Data":"426024f57839ee335b596b770f32902a3eded9d2c25e2cf3414de1a97d6842aa"} Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.097947 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfx9d" event={"ID":"84c50c19-d82f-444f-9558-6f9932e3ff86","Type":"ContainerStarted","Data":"1514eb0604c20cb1a9a9f512ad022da757b52d39a5e26e669af5ba95c43181d3"} Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.100046 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfx9d" Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.101914 4795 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-kfx9d container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.101957 4795 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfx9d" podUID="84c50c19-d82f-444f-9558-6f9932e3ff86" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.119206 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-klnm7" event={"ID":"892d0338-c59f-481e-8d70-3143d4954f38","Type":"ContainerStarted","Data":"b2c3ba6a2614b539f9db2ae2ca476f9b8b3dc68fdb0d0a30db7f860bc768b389"} Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.120683 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-klnm7" Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.122848 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfx9d" podStartSLOduration=121.122831264 podStartE2EDuration="2m1.122831264s" podCreationTimestamp="2025-11-29 07:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:41.119022088 +0000 UTC m=+147.094597878" watchObservedRunningTime="2025-11-29 07:41:41.122831264 +0000 UTC m=+147.098407054" Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.123968 4795 patch_prober.go:28] interesting pod/downloads-7954f5f757-klnm7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.124014 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-klnm7" podUID="892d0338-c59f-481e-8d70-3143d4954f38" containerName="download-server" 
probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.143313 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:41 crc kubenswrapper[4795]: E1129 07:41:41.143634 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:41.643622574 +0000 UTC m=+147.619198364 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.153243 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-sjwfz" Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.181228 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-klnm7" podStartSLOduration=121.181211272 podStartE2EDuration="2m1.181211272s" podCreationTimestamp="2025-11-29 07:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-11-29 07:41:41.180113411 +0000 UTC m=+147.155689201" watchObservedRunningTime="2025-11-29 07:41:41.181211272 +0000 UTC m=+147.156787062" Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.182033 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-gxss8" podStartSLOduration=122.182027214 podStartE2EDuration="2m2.182027214s" podCreationTimestamp="2025-11-29 07:39:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:41.1438589 +0000 UTC m=+147.119434690" watchObservedRunningTime="2025-11-29 07:41:41.182027214 +0000 UTC m=+147.157603004" Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.244418 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:41 crc kubenswrapper[4795]: E1129 07:41:41.244724 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:41.744698742 +0000 UTC m=+147.720274532 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.245140 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:41 crc kubenswrapper[4795]: E1129 07:41:41.248716 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:41.748700233 +0000 UTC m=+147.724276023 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.253559 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qbnb4" podStartSLOduration=122.253540588 podStartE2EDuration="2m2.253540588s" podCreationTimestamp="2025-11-29 07:39:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:41.212085742 +0000 UTC m=+147.187661552" watchObservedRunningTime="2025-11-29 07:41:41.253540588 +0000 UTC m=+147.229116378" Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.346807 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:41 crc kubenswrapper[4795]: E1129 07:41:41.347114 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:41.847095497 +0000 UTC m=+147.822671287 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.447967 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:41 crc kubenswrapper[4795]: E1129 07:41:41.448299 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:41.948287198 +0000 UTC m=+147.923862988 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.549602 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:41 crc kubenswrapper[4795]: E1129 07:41:41.550527 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:42.050506138 +0000 UTC m=+148.026081928 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.564754 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-l2jzc"] Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.583408 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4jk72"] Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.651773 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:41 crc kubenswrapper[4795]: E1129 07:41:41.652144 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:42.152124521 +0000 UTC m=+148.127700311 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.663483 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2zjvc"] Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.666105 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tgqrw"] Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.760818 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:41 crc kubenswrapper[4795]: E1129 07:41:41.761036 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:42.260998816 +0000 UTC m=+148.236574606 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.761146 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:41 crc kubenswrapper[4795]: E1129 07:41:41.761646 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:42.261625944 +0000 UTC m=+148.237201734 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.801525 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9bzls"] Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.817889 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-l2khx"] Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.817946 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406690-dtj9t"] Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.843923 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-blw2q"] Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.856012 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qvn2d"] Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.856316 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hx8c6"] Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.861822 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:41 crc kubenswrapper[4795]: E1129 07:41:41.861931 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:42.36191212 +0000 UTC m=+148.337487910 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.862162 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:41 crc kubenswrapper[4795]: E1129 07:41:41.862454 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:42.362445355 +0000 UTC m=+148.338021145 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.919294 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6swbg"] Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.941477 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.941578 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:41:41 crc kubenswrapper[4795]: I1129 07:41:41.963713 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:41 crc kubenswrapper[4795]: E1129 07:41:41.964008 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:42.463993676 +0000 UTC m=+148.439569466 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:42 crc kubenswrapper[4795]: I1129 07:41:42.017092 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r76lc"] Nov 29 07:41:42 crc kubenswrapper[4795]: I1129 07:41:42.048545 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5g9td"] Nov 29 07:41:42 crc kubenswrapper[4795]: I1129 07:41:42.048746 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-ddtz8"] Nov 29 07:41:42 crc kubenswrapper[4795]: W1129 07:41:42.062512 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeed419fc_89c2_4aa1_ba4a_4caf47aa0181.slice/crio-a91c1603800d597ef6dcad11a107d96988610123e546d9f3836e8f01fee9adb1 WatchSource:0}: Error finding container a91c1603800d597ef6dcad11a107d96988610123e546d9f3836e8f01fee9adb1: Status 404 returned error can't find the container with id a91c1603800d597ef6dcad11a107d96988610123e546d9f3836e8f01fee9adb1 Nov 29 07:41:42 crc kubenswrapper[4795]: I1129 07:41:42.065250 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:42 crc kubenswrapper[4795]: E1129 07:41:42.065566 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:42.565556207 +0000 UTC m=+148.541131997 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:42 crc kubenswrapper[4795]: W1129 07:41:42.084984 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f25efaa_5792_4a51_ba83_e8733af29fdf.slice/crio-bd969ee68502067f3fe216cec41c3eb14e71300d3381eb3b3d8b3b1f3d9dfc09 WatchSource:0}: Error finding container bd969ee68502067f3fe216cec41c3eb14e71300d3381eb3b3d8b3b1f3d9dfc09: Status 404 returned error can't find the container with id bd969ee68502067f3fe216cec41c3eb14e71300d3381eb3b3d8b3b1f3d9dfc09 Nov 29 07:41:42 crc kubenswrapper[4795]: W1129 07:41:42.100556 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecf9ad47_4004_45d4_84e6_986e63258092.slice/crio-edd40166daa10da1b36c41b560275ac479c7eaeebb28e1c39e08e08c40241dd7 WatchSource:0}: Error finding container 
edd40166daa10da1b36c41b560275ac479c7eaeebb28e1c39e08e08c40241dd7: Status 404 returned error can't find the container with id edd40166daa10da1b36c41b560275ac479c7eaeebb28e1c39e08e08c40241dd7 Nov 29 07:41:42 crc kubenswrapper[4795]: W1129 07:41:42.143330 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe320286_1cef_4b6f_92a0_e9e66a34ad3e.slice/crio-556be7ad4dbbe7c9602f5ab84628deebcf15b43c812dae9afdada9d0b2958acd WatchSource:0}: Error finding container 556be7ad4dbbe7c9602f5ab84628deebcf15b43c812dae9afdada9d0b2958acd: Status 404 returned error can't find the container with id 556be7ad4dbbe7c9602f5ab84628deebcf15b43c812dae9afdada9d0b2958acd Nov 29 07:41:42 crc kubenswrapper[4795]: I1129 07:41:42.144634 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-dtj9t" event={"ID":"4f25efaa-5792-4a51-ba83-e8733af29fdf","Type":"ContainerStarted","Data":"bd969ee68502067f3fe216cec41c3eb14e71300d3381eb3b3d8b3b1f3d9dfc09"} Nov 29 07:41:42 crc kubenswrapper[4795]: I1129 07:41:42.150258 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9bzls" event={"ID":"eed419fc-89c2-4aa1-ba4a-4caf47aa0181","Type":"ContainerStarted","Data":"a91c1603800d597ef6dcad11a107d96988610123e546d9f3836e8f01fee9adb1"} Nov 29 07:41:42 crc kubenswrapper[4795]: I1129 07:41:42.153059 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2zjvc" event={"ID":"4336a365-68f2-4aa1-9818-efdd9ab0b9f8","Type":"ContainerStarted","Data":"f59e61508a3690b479040cb5e75f6a17316691cf79d2acb37ebe620008641149"} Nov 29 07:41:42 crc kubenswrapper[4795]: I1129 07:41:42.155231 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-l2khx" event={"ID":"5550b70a-4101-4b56-8f88-c6339baaf188","Type":"ContainerStarted","Data":"8b6dc9460cb05eccb1836001427b77918c1a8e4aaa671f120f1b02ce46a0f98c"} Nov 29 07:41:42 crc kubenswrapper[4795]: I1129 07:41:42.157811 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-wbcfc" event={"ID":"ae857e30-60ec-4e9f-867b-da07f179df65","Type":"ContainerStarted","Data":"163be3969dde7d4a1e030371e7da252b63d15ab500e5f09e5d78f8a3313b951c"} Nov 29 07:41:42 crc kubenswrapper[4795]: I1129 07:41:42.158751 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l2jzc" event={"ID":"5255a440-64ac-4f31-8a9e-4c3688ec128a","Type":"ContainerStarted","Data":"cbf4f4e28b3adef0cf17cd01349cb8619f02d5c632e3f05d2090de1cdd95b5ea"} Nov 29 07:41:42 crc kubenswrapper[4795]: I1129 07:41:42.159987 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-mznng" event={"ID":"26978e65-c4ba-410c-b61a-9b4157d71e78","Type":"ContainerStarted","Data":"0cae260bc70d10cb2c3f6db1b6a3efb793ebb17ea4352f7595bff3723ce98f7b"} Nov 29 07:41:42 crc kubenswrapper[4795]: I1129 07:41:42.166324 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:42 crc kubenswrapper[4795]: E1129 07:41:42.166731 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-29 07:41:42.666696417 +0000 UTC m=+148.642272227 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:42 crc kubenswrapper[4795]: I1129 07:41:42.173222 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bf7sx" event={"ID":"2aba5f59-2760-4b54-94a0-c051d081ce70","Type":"ContainerStarted","Data":"62d17833fa50ee3d65e377a91eb5407816f9131c9478fdd98dcf9ac9bb40947a"} Nov 29 07:41:42 crc kubenswrapper[4795]: W1129 07:41:42.173332 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fc75ee7_f4b7_40e9_9163_2b5dfac54d24.slice/crio-349ca7a1775b88ba3135495eb1d18e15d1017b7b597303df14056d783962ac82 WatchSource:0}: Error finding container 349ca7a1775b88ba3135495eb1d18e15d1017b7b597303df14056d783962ac82: Status 404 returned error can't find the container with id 349ca7a1775b88ba3135495eb1d18e15d1017b7b597303df14056d783962ac82 Nov 29 07:41:42 crc kubenswrapper[4795]: I1129 07:41:42.174318 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-blw2q" event={"ID":"ecf9ad47-4004-45d4-84e6-986e63258092","Type":"ContainerStarted","Data":"edd40166daa10da1b36c41b560275ac479c7eaeebb28e1c39e08e08c40241dd7"} Nov 29 07:41:42 crc kubenswrapper[4795]: I1129 07:41:42.175417 4795 generic.go:334] "Generic (PLEG): container finished" podID="f773d9ba-78f9-4b56-8e33-706ee34ad32a" 
containerID="4938bbe9dd4327bcc1534abd0728b3e1d3ce6c105e7d8199e6c4d36108bd8afb" exitCode=0 Nov 29 07:41:42 crc kubenswrapper[4795]: I1129 07:41:42.175479 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jttv5" event={"ID":"f773d9ba-78f9-4b56-8e33-706ee34ad32a","Type":"ContainerDied","Data":"4938bbe9dd4327bcc1534abd0728b3e1d3ce6c105e7d8199e6c4d36108bd8afb"} Nov 29 07:41:42 crc kubenswrapper[4795]: I1129 07:41:42.182669 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hx8c6" event={"ID":"306cb351-6b97-406a-b3af-741e0d81e630","Type":"ContainerStarted","Data":"c831cb9d3bb16078456b3b5f1293e772034c09dd34f31430f30183195e978a54"} Nov 29 07:41:42 crc kubenswrapper[4795]: I1129 07:41:42.202166 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tgqrw" event={"ID":"f54838e7-e080-4c36-9f22-37173dca9044","Type":"ContainerStarted","Data":"60d8367043c92c0b25a2d150fae5aeba62253c2b1dd65e3c860c146492f245d1"} Nov 29 07:41:42 crc kubenswrapper[4795]: I1129 07:41:42.269341 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:42 crc kubenswrapper[4795]: E1129 07:41:42.275164 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:42.775141681 +0000 UTC m=+148.750717471 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:42 crc kubenswrapper[4795]: I1129 07:41:42.305046 4795 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-kfx9d container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Nov 29 07:41:42 crc kubenswrapper[4795]: I1129 07:41:42.305114 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfx9d" podUID="84c50c19-d82f-444f-9558-6f9932e3ff86" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Nov 29 07:41:42 crc kubenswrapper[4795]: I1129 07:41:42.305448 4795 patch_prober.go:28] interesting pod/downloads-7954f5f757-klnm7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Nov 29 07:41:42 crc kubenswrapper[4795]: I1129 07:41:42.305607 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-klnm7" podUID="892d0338-c59f-481e-8d70-3143d4954f38" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Nov 29 07:41:42 crc kubenswrapper[4795]: I1129 
07:41:42.338524 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4jk72" event={"ID":"acfaa12b-166f-4c03-a208-6ed705af199d","Type":"ContainerStarted","Data":"17dc6b7d22946b994296480d8a70130f32627c1fa39d1dd18f79e5c05cf91345"} Nov 29 07:41:42 crc kubenswrapper[4795]: I1129 07:41:42.371698 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:42 crc kubenswrapper[4795]: E1129 07:41:42.372192 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:42.872172296 +0000 UTC m=+148.847748086 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:42 crc kubenswrapper[4795]: I1129 07:41:42.487289 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:42 crc kubenswrapper[4795]: E1129 07:41:42.494624 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:42.994608069 +0000 UTC m=+148.970183859 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:42 crc kubenswrapper[4795]: I1129 07:41:42.589410 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:42 crc kubenswrapper[4795]: E1129 07:41:42.589929 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:43.089909206 +0000 UTC m=+149.065484996 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:42 crc kubenswrapper[4795]: I1129 07:41:42.692178 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:42 crc kubenswrapper[4795]: E1129 07:41:42.692934 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:43.192921038 +0000 UTC m=+149.168496828 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:42 crc kubenswrapper[4795]: I1129 07:41:42.793790 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:42 crc kubenswrapper[4795]: E1129 07:41:42.794159 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:43.2941385 +0000 UTC m=+149.269714290 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:42 crc kubenswrapper[4795]: I1129 07:41:42.895438 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:42 crc kubenswrapper[4795]: E1129 07:41:42.896292 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:43.396280168 +0000 UTC m=+149.371855958 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.002412 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:43 crc kubenswrapper[4795]: E1129 07:41:43.002773 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:43.502755917 +0000 UTC m=+149.478331707 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.002836 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:43 crc kubenswrapper[4795]: E1129 07:41:43.003249 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:43.50324134 +0000 UTC m=+149.478817130 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.104670 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:43 crc kubenswrapper[4795]: E1129 07:41:43.105144 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:43.605121251 +0000 UTC m=+149.580697041 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.215889 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:43 crc kubenswrapper[4795]: E1129 07:41:43.216753 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:43.716730932 +0000 UTC m=+149.692306722 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.319181 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:43 crc kubenswrapper[4795]: E1129 07:41:43.319531 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:43.819515328 +0000 UTC m=+149.795091118 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.359872 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2zjvc" event={"ID":"4336a365-68f2-4aa1-9818-efdd9ab0b9f8","Type":"ContainerStarted","Data":"c5cbb9a24be51eeeeacff7b2bf7dce27f305f3e0a0bd79f27ea75e68fa64d71f"} Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.393988 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2zjvc" podStartSLOduration=123.393971574 podStartE2EDuration="2m3.393971574s" podCreationTimestamp="2025-11-29 07:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:43.393664405 +0000 UTC m=+149.369240195" watchObservedRunningTime="2025-11-29 07:41:43.393971574 +0000 UTC m=+149.369547364" Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.399988 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5g9td" event={"ID":"3afcac64-b24d-4150-a997-123947d36f3a","Type":"ContainerStarted","Data":"8a2ba0656957ddb6d73e582e8968725100d4309ee748e2480a127cfb2b71d8f6"} Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.420058 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:43 crc kubenswrapper[4795]: E1129 07:41:43.436785 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:43.936766077 +0000 UTC m=+149.912341867 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.446468 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mz266" event={"ID":"7a5cb1ec-767a-42f1-90c7-870910e5e5d9","Type":"ContainerStarted","Data":"8b5fffa806a5e1723687a19361d56330274c4a67f5f87e2ee4d6b561eb1b8e12"} Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.468330 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-mznng" event={"ID":"26978e65-c4ba-410c-b61a-9b4157d71e78","Type":"ContainerStarted","Data":"7eb09c92d850f2e0be06f6a4ab2d06345d63705e950101a4204118a5be937342"} Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.485836 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mz266" podStartSLOduration=123.485820854 
podStartE2EDuration="2m3.485820854s" podCreationTimestamp="2025-11-29 07:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:43.485093744 +0000 UTC m=+149.460669534" watchObservedRunningTime="2025-11-29 07:41:43.485820854 +0000 UTC m=+149.461396634" Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.520772 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-mznng" podStartSLOduration=123.520738777 podStartE2EDuration="2m3.520738777s" podCreationTimestamp="2025-11-29 07:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:43.52014423 +0000 UTC m=+149.495720020" watchObservedRunningTime="2025-11-29 07:41:43.520738777 +0000 UTC m=+149.496314567" Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.522193 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:43 crc kubenswrapper[4795]: E1129 07:41:43.523394 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:44.02336876 +0000 UTC m=+149.998944550 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.546064 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bf7sx" event={"ID":"2aba5f59-2760-4b54-94a0-c051d081ce70","Type":"ContainerStarted","Data":"2ade7918bc91051f0be08b365d8f330b137c9d38fccf818a68a8621859c87d6f"} Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.547512 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bf7sx" Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.580724 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-6swbg" event={"ID":"2fc75ee7-f4b7-40e9-9163-2b5dfac54d24","Type":"ContainerStarted","Data":"349ca7a1775b88ba3135495eb1d18e15d1017b7b597303df14056d783962ac82"} Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.599300 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xdslc" event={"ID":"42062f93-4804-4818-95b5-2b6b3225c433","Type":"ContainerStarted","Data":"f6e2b265455b284a0099fbc6adca40f39da942530ef225ec7dc23f1271125b3c"} Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.608418 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bf7sx" podStartSLOduration=123.608396741 podStartE2EDuration="2m3.608396741s" podCreationTimestamp="2025-11-29 07:39:40 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:43.595408769 +0000 UTC m=+149.570984559" watchObservedRunningTime="2025-11-29 07:41:43.608396741 +0000 UTC m=+149.583972521" Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.622603 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l2jzc" event={"ID":"5255a440-64ac-4f31-8a9e-4c3688ec128a","Type":"ContainerStarted","Data":"1203ad0403d709144a6092007cc9ddae13c8af5440927b55bc8a190d721de79f"} Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.623664 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:43 crc kubenswrapper[4795]: E1129 07:41:43.624842 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:44.124825929 +0000 UTC m=+150.100401719 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.641084 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-dtj9t" event={"ID":"4f25efaa-5792-4a51-ba83-e8733af29fdf","Type":"ContainerStarted","Data":"5e2c76259e18576139e5a2976c5246e5f5e801f18ac9837de36eef93473c7277"} Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.658037 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bf7sx" Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.665868 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tgqrw" event={"ID":"f54838e7-e080-4c36-9f22-37173dca9044","Type":"ContainerStarted","Data":"ec739eba832ede12db35f70a8843efed1cb82d593a95372a71df772fd567de48"} Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.666692 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tgqrw" Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.689687 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tgqrw" Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.694496 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-l2khx" 
event={"ID":"5550b70a-4101-4b56-8f88-c6339baaf188","Type":"ContainerStarted","Data":"779841ac56099d3f6afb28f8490df34f6b23d4f3552629dbb24b7e04a0767446"} Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.695554 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-l2khx" Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.707062 4795 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-l2khx container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.707117 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-l2khx" podUID="5550b70a-4101-4b56-8f88-c6339baaf188" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.723371 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xdslc" podStartSLOduration=124.723352546 podStartE2EDuration="2m4.723352546s" podCreationTimestamp="2025-11-29 07:39:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:43.631456834 +0000 UTC m=+149.607032624" watchObservedRunningTime="2025-11-29 07:41:43.723352546 +0000 UTC m=+149.698928356" Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.724328 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.724511 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-lgd6b" event={"ID":"b6996662-d230-4299-b913-b6bc38c50ef5","Type":"ContainerStarted","Data":"8ff144278dd0b4f7366ac3f17c842b7cc417550e345bcf691b86bdfdb8b1f1b1"} Nov 29 07:41:43 crc kubenswrapper[4795]: E1129 07:41:43.725411 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:44.225388773 +0000 UTC m=+150.200964583 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.755511 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-dtj9t" podStartSLOduration=124.755489172 podStartE2EDuration="2m4.755489172s" podCreationTimestamp="2025-11-29 07:39:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:43.722982556 +0000 UTC m=+149.698558346" watchObservedRunningTime="2025-11-29 07:41:43.755489172 +0000 UTC m=+149.731064962" Nov 29 07:41:43 crc 
kubenswrapper[4795]: I1129 07:41:43.757327 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tgqrw" podStartSLOduration=123.757314733 podStartE2EDuration="2m3.757314733s" podCreationTimestamp="2025-11-29 07:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:43.757165749 +0000 UTC m=+149.732741539" watchObservedRunningTime="2025-11-29 07:41:43.757314733 +0000 UTC m=+149.732890523" Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.781330 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2fgjg" event={"ID":"e2701908-11c2-44df-9318-4868e49d7ebc","Type":"ContainerStarted","Data":"d406715fb61462290a427923942601361c533d80b6d1e7d8f438c3b1211ff586"} Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.826844 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:43 crc kubenswrapper[4795]: E1129 07:41:43.827208 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:44.327193801 +0000 UTC m=+150.302769591 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.852746 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hx8c6" Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.871782 4795 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-hx8c6 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" start-of-body= Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.871854 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hx8c6" podUID="306cb351-6b97-406a-b3af-741e0d81e630" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.889227 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-l2khx" podStartSLOduration=123.88920584 podStartE2EDuration="2m3.88920584s" podCreationTimestamp="2025-11-29 07:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:43.824042753 +0000 UTC m=+149.799618543" watchObservedRunningTime="2025-11-29 07:41:43.88920584 +0000 UTC 
m=+149.864781630" Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.892383 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-twmws" event={"ID":"3028c8af-2d6e-4674-9ff8-dff8f68c511c","Type":"ContainerStarted","Data":"8242b70cd87ec35fbcd9b8b8bc5b87f7dd7812b8d231e36f6558b0ca5ae1250a"} Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.906959 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zfqtn" event={"ID":"094b43dd-ae40-43b1-824b-a8dd47bc9693","Type":"ContainerStarted","Data":"0769760b3dc7f20a3962026b5771eb5bfe71f1f9215d9a276baa2509b1a83302"} Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.928458 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:43 crc kubenswrapper[4795]: E1129 07:41:43.944801 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:44.444755959 +0000 UTC m=+150.420331749 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.962426 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t6scb" event={"ID":"a9829eb2-ba44-41ca-a0f7-fd92d6114927","Type":"ContainerStarted","Data":"b8de29d8ca01a473a32dae03bd387333f754959d31c92b9f969ef5e89e6111ab"} Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.993791 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-lwxtb" event={"ID":"3ca85e73-a2e9-416e-9542-5ea8f2f3aea5","Type":"ContainerStarted","Data":"4ade8e4330bb6d236a9009e1e06fe5c91d55ec2e46fc092be82e555a630560d7"} Nov 29 07:41:43 crc kubenswrapper[4795]: I1129 07:41:43.995825 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-2fgjg" podStartSLOduration=7.995806602 podStartE2EDuration="7.995806602s" podCreationTimestamp="2025-11-29 07:41:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:43.943628517 +0000 UTC m=+149.919204527" watchObservedRunningTime="2025-11-29 07:41:43.995806602 +0000 UTC m=+149.971382392" Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.031669 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:44 crc kubenswrapper[4795]: E1129 07:41:44.031942 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:44.531929839 +0000 UTC m=+150.507505629 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.063039 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hx8c6" podStartSLOduration=124.063020816 podStartE2EDuration="2m4.063020816s" podCreationTimestamp="2025-11-29 07:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:44.007543529 +0000 UTC m=+149.983119319" watchObservedRunningTime="2025-11-29 07:41:44.063020816 +0000 UTC m=+150.038596606" Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.064749 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-twmws" podStartSLOduration=9.064738854 podStartE2EDuration="9.064738854s" podCreationTimestamp="2025-11-29 07:41:35 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:44.060173077 +0000 UTC m=+150.035748867" watchObservedRunningTime="2025-11-29 07:41:44.064738854 +0000 UTC m=+150.040314644" Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.075443 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-d7qg2" event={"ID":"15ff94b5-aaa1-4673-8aab-04ed7f44ce88","Type":"ContainerStarted","Data":"92cd17e63532411c4e1fd230d1b7a86c5007a0182f30a6b93d72f841b7663120"} Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.088289 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nlnvc" event={"ID":"768de086-2681-4cee-b71f-a732b317fc64","Type":"ContainerStarted","Data":"276d280c6636236e7625702017742483a165eb9caa4e66a6005e7c48d6a2b761"} Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.112196 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-692g9" event={"ID":"fe1144c0-6804-4a78-bad8-3319ddb3c30c","Type":"ContainerStarted","Data":"2fc1b65f0ec8c48473fa8b99be1461b1126dbd1488c068e49f3d174376239d50"} Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.133473 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:44 crc kubenswrapper[4795]: E1129 07:41:44.134901 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-29 07:41:44.634877309 +0000 UTC m=+150.610453099 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.142878 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qvn2d" event={"ID":"be320286-1cef-4b6f-92a0-e9e66a34ad3e","Type":"ContainerStarted","Data":"7ca9c7f9d12c69242654c14bd48737b4c9bd3ea26aa88ff866699fdd16da889f"} Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.142924 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qvn2d" event={"ID":"be320286-1cef-4b6f-92a0-e9e66a34ad3e","Type":"ContainerStarted","Data":"556be7ad4dbbe7c9602f5ab84628deebcf15b43c812dae9afdada9d0b2958acd"} Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.147394 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t6scb" podStartSLOduration=124.147367888 podStartE2EDuration="2m4.147367888s" podCreationTimestamp="2025-11-29 07:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:44.14314801 +0000 UTC m=+150.118723800" watchObservedRunningTime="2025-11-29 07:41:44.147367888 +0000 UTC m=+150.122943668" Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.149310 4795 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zfqtn" podStartSLOduration=124.149302342 podStartE2EDuration="2m4.149302342s" podCreationTimestamp="2025-11-29 07:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:44.100782619 +0000 UTC m=+150.076358419" watchObservedRunningTime="2025-11-29 07:41:44.149302342 +0000 UTC m=+150.124878122" Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.178850 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6x84f" event={"ID":"24f58977-7eb4-4fb1-a812-8b656753537d","Type":"ContainerStarted","Data":"c2e1386333f6fefa50da6037bfa0cdaa1d7322c375d5414d950568b58d7e9557"} Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.185011 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nlnvc" podStartSLOduration=124.184995247 podStartE2EDuration="2m4.184995247s" podCreationTimestamp="2025-11-29 07:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:44.182847487 +0000 UTC m=+150.158423287" watchObservedRunningTime="2025-11-29 07:41:44.184995247 +0000 UTC m=+150.160571037" Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.196221 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" event={"ID":"34ec03b0-6d17-4934-84fa-4323c7599fd0","Type":"ContainerStarted","Data":"168f05a266d19b8a31f0e3df3d0757e2ae6323220b53f4762a6cfced03f1224e"} Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.197115 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:44 crc kubenswrapper[4795]: 
I1129 07:41:44.200998 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r76lc" event={"ID":"94527576-d5a0-4b6e-a3d2-3e1df7494d95","Type":"ContainerStarted","Data":"b662e3cacf94989a7d10e098cd06177b4bde64988fc0ce7682a5e35e75398afd"} Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.223881 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ddtz8" event={"ID":"7390eae4-5ab6-4606-90d5-e5468d745208","Type":"ContainerStarted","Data":"5c172b375031dba30490dc3fe5dd19ab00986734d8681d5f653bea535969edd7"} Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.235475 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:44 crc kubenswrapper[4795]: E1129 07:41:44.237244 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:44.737188582 +0000 UTC m=+150.712764372 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.241883 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d2wzd" event={"ID":"d79589e5-c434-4157-8cfd-a51e92aa0c2f","Type":"ContainerStarted","Data":"acb0e233533c7455eab5d0f96cc015113be9d8c1ece998c192bce3e0135af5f6"} Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.261294 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-692g9" podStartSLOduration=124.261275933 podStartE2EDuration="2m4.261275933s" podCreationTimestamp="2025-11-29 07:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:44.255245295 +0000 UTC m=+150.230821085" watchObservedRunningTime="2025-11-29 07:41:44.261275933 +0000 UTC m=+150.236851723" Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.267567 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-v4fx7" event={"ID":"725af35a-cc1c-4178-ae7f-e909af583a5f","Type":"ContainerStarted","Data":"51cc578af273d99101dd28dd3878ae62fcd99afeef0e2762c96bcfe5a6ce7668"} Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.301662 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9bzls" 
event={"ID":"eed419fc-89c2-4aa1-ba4a-4caf47aa0181","Type":"ContainerStarted","Data":"852bc50fa60bf829842ae2dad79f19fa95c310ec00b3c0abe7443b64ebc3c859"} Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.303300 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-wbcfc" Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.311493 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfx9d" Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.322332 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-d7qg2" podStartSLOduration=124.322315035 podStartE2EDuration="2m4.322315035s" podCreationTimestamp="2025-11-29 07:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:44.320613948 +0000 UTC m=+150.296189738" watchObservedRunningTime="2025-11-29 07:41:44.322315035 +0000 UTC m=+150.297890825" Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.336541 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:44 crc kubenswrapper[4795]: E1129 07:41:44.338142 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:44.838110025 +0000 UTC m=+150.813685815 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.392700 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6x84f" podStartSLOduration=124.392683757 podStartE2EDuration="2m4.392683757s" podCreationTimestamp="2025-11-29 07:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:44.391440732 +0000 UTC m=+150.367016522" watchObservedRunningTime="2025-11-29 07:41:44.392683757 +0000 UTC m=+150.368259547" Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.405370 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-wbcfc" Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.438558 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:44 crc kubenswrapper[4795]: E1129 07:41:44.439940 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-29 07:41:44.939924284 +0000 UTC m=+150.915500084 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.449150 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-v4fx7" podStartSLOduration=124.449133511 podStartE2EDuration="2m4.449133511s" podCreationTimestamp="2025-11-29 07:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:44.447745192 +0000 UTC m=+150.423320992" watchObservedRunningTime="2025-11-29 07:41:44.449133511 +0000 UTC m=+150.424709301" Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.541025 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:44 crc kubenswrapper[4795]: E1129 07:41:44.541335 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:45.041319711 +0000 UTC m=+151.016895501 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.556054 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" podStartSLOduration=125.556039492 podStartE2EDuration="2m5.556039492s" podCreationTimestamp="2025-11-29 07:39:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:44.531552369 +0000 UTC m=+150.507128159" watchObservedRunningTime="2025-11-29 07:41:44.556039492 +0000 UTC m=+150.531615282" Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.556241 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9bzls" podStartSLOduration=124.556237647 podStartE2EDuration="2m4.556237647s" podCreationTimestamp="2025-11-29 07:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:44.553946373 +0000 UTC m=+150.529522163" watchObservedRunningTime="2025-11-29 07:41:44.556237647 +0000 UTC m=+150.531813437" Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.585606 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r76lc" podStartSLOduration=124.585571785 podStartE2EDuration="2m4.585571785s" 
podCreationTimestamp="2025-11-29 07:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:44.585307287 +0000 UTC m=+150.560883077" watchObservedRunningTime="2025-11-29 07:41:44.585571785 +0000 UTC m=+150.561147575" Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.597067 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-692g9" Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.600854 4795 patch_prober.go:28] interesting pod/router-default-5444994796-692g9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:41:44 crc kubenswrapper[4795]: [-]has-synced failed: reason withheld Nov 29 07:41:44 crc kubenswrapper[4795]: [+]process-running ok Nov 29 07:41:44 crc kubenswrapper[4795]: healthz check failed Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.600897 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-692g9" podUID="fe1144c0-6804-4a78-bad8-3319ddb3c30c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.610970 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-wbcfc" podStartSLOduration=124.610954343 podStartE2EDuration="2m4.610954343s" podCreationTimestamp="2025-11-29 07:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:44.60834282 +0000 UTC m=+150.583918610" watchObservedRunningTime="2025-11-29 07:41:44.610954343 +0000 UTC m=+150.586530123" Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 
07:41:44.644463 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:44 crc kubenswrapper[4795]: E1129 07:41:44.644806 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:45.144792656 +0000 UTC m=+151.120368456 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.678750 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d2wzd" podStartSLOduration=125.678734252 podStartE2EDuration="2m5.678734252s" podCreationTimestamp="2025-11-29 07:39:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:44.629989803 +0000 UTC m=+150.605565593" watchObservedRunningTime="2025-11-29 07:41:44.678734252 +0000 UTC m=+150.654310042" Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.745854 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:44 crc kubenswrapper[4795]: E1129 07:41:44.746036 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:45.246014848 +0000 UTC m=+151.221590648 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.746117 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:44 crc kubenswrapper[4795]: E1129 07:41:44.746381 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:45.246373598 +0000 UTC m=+151.221949388 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.847574 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:44 crc kubenswrapper[4795]: E1129 07:41:44.847738 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:45.347709863 +0000 UTC m=+151.323285643 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.847891 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:44 crc kubenswrapper[4795]: E1129 07:41:44.848212 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:45.348204037 +0000 UTC m=+151.323779817 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.949318 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:44 crc kubenswrapper[4795]: E1129 07:41:44.949512 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:45.449480731 +0000 UTC m=+151.425056531 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.949636 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:44 crc kubenswrapper[4795]: E1129 07:41:44.949911 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:45.449899352 +0000 UTC m=+151.425475142 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.962991 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vx8d5"] Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.964209 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vx8d5" Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.967608 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 29 07:41:44 crc kubenswrapper[4795]: I1129 07:41:44.991113 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vx8d5"] Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.051184 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:45 crc kubenswrapper[4795]: E1129 07:41:45.051353 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:45.55132547 +0000 UTC m=+151.526901260 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.051765 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0deb15dc-57ff-4c83-8e81-ea7ebfda038d-utilities\") pod \"community-operators-vx8d5\" (UID: \"0deb15dc-57ff-4c83-8e81-ea7ebfda038d\") " pod="openshift-marketplace/community-operators-vx8d5" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.051804 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7kf7\" (UniqueName: \"kubernetes.io/projected/0deb15dc-57ff-4c83-8e81-ea7ebfda038d-kube-api-access-k7kf7\") pod \"community-operators-vx8d5\" (UID: \"0deb15dc-57ff-4c83-8e81-ea7ebfda038d\") " pod="openshift-marketplace/community-operators-vx8d5" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.051889 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.051906 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0deb15dc-57ff-4c83-8e81-ea7ebfda038d-catalog-content\") pod \"community-operators-vx8d5\" (UID: \"0deb15dc-57ff-4c83-8e81-ea7ebfda038d\") " pod="openshift-marketplace/community-operators-vx8d5" Nov 29 07:41:45 crc kubenswrapper[4795]: E1129 07:41:45.052175 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:45.552163594 +0000 UTC m=+151.527739384 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.098458 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.153321 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.153621 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0deb15dc-57ff-4c83-8e81-ea7ebfda038d-catalog-content\") pod \"community-operators-vx8d5\" (UID: \"0deb15dc-57ff-4c83-8e81-ea7ebfda038d\") " 
pod="openshift-marketplace/community-operators-vx8d5" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.153665 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0deb15dc-57ff-4c83-8e81-ea7ebfda038d-utilities\") pod \"community-operators-vx8d5\" (UID: \"0deb15dc-57ff-4c83-8e81-ea7ebfda038d\") " pod="openshift-marketplace/community-operators-vx8d5" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.153695 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7kf7\" (UniqueName: \"kubernetes.io/projected/0deb15dc-57ff-4c83-8e81-ea7ebfda038d-kube-api-access-k7kf7\") pod \"community-operators-vx8d5\" (UID: \"0deb15dc-57ff-4c83-8e81-ea7ebfda038d\") " pod="openshift-marketplace/community-operators-vx8d5" Nov 29 07:41:45 crc kubenswrapper[4795]: E1129 07:41:45.154010 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:45.653996143 +0000 UTC m=+151.629571933 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.154384 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0deb15dc-57ff-4c83-8e81-ea7ebfda038d-catalog-content\") pod \"community-operators-vx8d5\" (UID: \"0deb15dc-57ff-4c83-8e81-ea7ebfda038d\") " pod="openshift-marketplace/community-operators-vx8d5" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.154549 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0deb15dc-57ff-4c83-8e81-ea7ebfda038d-utilities\") pod \"community-operators-vx8d5\" (UID: \"0deb15dc-57ff-4c83-8e81-ea7ebfda038d\") " pod="openshift-marketplace/community-operators-vx8d5" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.171068 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h8pp6"] Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.171998 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h8pp6" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.175221 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.193482 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h8pp6"] Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.194577 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7kf7\" (UniqueName: \"kubernetes.io/projected/0deb15dc-57ff-4c83-8e81-ea7ebfda038d-kube-api-access-k7kf7\") pod \"community-operators-vx8d5\" (UID: \"0deb15dc-57ff-4c83-8e81-ea7ebfda038d\") " pod="openshift-marketplace/community-operators-vx8d5" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.255130 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:45 crc kubenswrapper[4795]: E1129 07:41:45.255643 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:45.755631176 +0000 UTC m=+151.731206966 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.289731 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vx8d5" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.358929 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.359245 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a04ecf63-b125-4a0e-9869-403c9cca5648-utilities\") pod \"certified-operators-h8pp6\" (UID: \"a04ecf63-b125-4a0e-9869-403c9cca5648\") " pod="openshift-marketplace/certified-operators-h8pp6" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.359301 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a04ecf63-b125-4a0e-9869-403c9cca5648-catalog-content\") pod \"certified-operators-h8pp6\" (UID: \"a04ecf63-b125-4a0e-9869-403c9cca5648\") " pod="openshift-marketplace/certified-operators-h8pp6" Nov 29 07:41:45 crc kubenswrapper[4795]: E1129 07:41:45.359470 4795 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:45.85944278 +0000 UTC m=+151.835018570 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.359529 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kmmg\" (UniqueName: \"kubernetes.io/projected/a04ecf63-b125-4a0e-9869-403c9cca5648-kube-api-access-4kmmg\") pod \"certified-operators-h8pp6\" (UID: \"a04ecf63-b125-4a0e-9869-403c9cca5648\") " pod="openshift-marketplace/certified-operators-h8pp6" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.367252 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l2jzc" event={"ID":"5255a440-64ac-4f31-8a9e-4c3688ec128a","Type":"ContainerStarted","Data":"ca6c01e599b8eab04c88e3aa55071a32644f735ce4876f7b4d77c5ce3ca0f675"} Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.376158 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cgrzq"] Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.384769 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cgrzq" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.397317 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zfqtn" event={"ID":"094b43dd-ae40-43b1-824b-a8dd47bc9693","Type":"ContainerStarted","Data":"4be90c946aa95151316080aa1f045a691d91a83d0f707357a7ef72e91b62fb1f"} Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.428949 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6x84f" event={"ID":"24f58977-7eb4-4fb1-a812-8b656753537d","Type":"ContainerStarted","Data":"090b2f9133e5a07c743e570b0f21827e7b532c67a5903a9150c2dda958a36f9a"} Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.435730 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l2jzc" podStartSLOduration=125.435707707 podStartE2EDuration="2m5.435707707s" podCreationTimestamp="2025-11-29 07:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:45.427783046 +0000 UTC m=+151.403358836" watchObservedRunningTime="2025-11-29 07:41:45.435707707 +0000 UTC m=+151.411283497" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.452687 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cgrzq"] Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.461344 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a04ecf63-b125-4a0e-9869-403c9cca5648-utilities\") pod \"certified-operators-h8pp6\" (UID: \"a04ecf63-b125-4a0e-9869-403c9cca5648\") " pod="openshift-marketplace/certified-operators-h8pp6" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.461436 
4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a04ecf63-b125-4a0e-9869-403c9cca5648-catalog-content\") pod \"certified-operators-h8pp6\" (UID: \"a04ecf63-b125-4a0e-9869-403c9cca5648\") " pod="openshift-marketplace/certified-operators-h8pp6" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.461500 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.461663 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kmmg\" (UniqueName: \"kubernetes.io/projected/a04ecf63-b125-4a0e-9869-403c9cca5648-kube-api-access-4kmmg\") pod \"certified-operators-h8pp6\" (UID: \"a04ecf63-b125-4a0e-9869-403c9cca5648\") " pod="openshift-marketplace/certified-operators-h8pp6" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.465388 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a04ecf63-b125-4a0e-9869-403c9cca5648-catalog-content\") pod \"certified-operators-h8pp6\" (UID: \"a04ecf63-b125-4a0e-9869-403c9cca5648\") " pod="openshift-marketplace/certified-operators-h8pp6" Nov 29 07:41:45 crc kubenswrapper[4795]: E1129 07:41:45.465659 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:45.965646541 +0000 UTC m=+151.941222331 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.474143 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a04ecf63-b125-4a0e-9869-403c9cca5648-utilities\") pod \"certified-operators-h8pp6\" (UID: \"a04ecf63-b125-4a0e-9869-403c9cca5648\") " pod="openshift-marketplace/certified-operators-h8pp6" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.499435 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kmmg\" (UniqueName: \"kubernetes.io/projected/a04ecf63-b125-4a0e-9869-403c9cca5648-kube-api-access-4kmmg\") pod \"certified-operators-h8pp6\" (UID: \"a04ecf63-b125-4a0e-9869-403c9cca5648\") " pod="openshift-marketplace/certified-operators-h8pp6" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.506962 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ddtz8" event={"ID":"7390eae4-5ab6-4606-90d5-e5468d745208","Type":"ContainerStarted","Data":"c57e7d8098ac7c5b615f872aa5eb0c5e9b6a21d08300e6ee2b8170998bce45d9"} Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.532301 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h8pp6" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.539051 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-6swbg" event={"ID":"2fc75ee7-f4b7-40e9-9163-2b5dfac54d24","Type":"ContainerStarted","Data":"7fc068f01039bed972d95535da1f539bfed342de6f005e485a30a5ab8b69fa4e"} Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.569246 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-6swbg" podStartSLOduration=125.569228009 podStartE2EDuration="2m5.569228009s" podCreationTimestamp="2025-11-29 07:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:45.566483633 +0000 UTC m=+151.542059423" watchObservedRunningTime="2025-11-29 07:41:45.569228009 +0000 UTC m=+151.544803819" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.571725 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:45 crc kubenswrapper[4795]: E1129 07:41:45.571816 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:46.071799691 +0000 UTC m=+152.047375481 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.577721 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfndp\" (UniqueName: \"kubernetes.io/projected/b36dad37-4975-4010-b93e-0ea5c932caab-kube-api-access-kfndp\") pod \"community-operators-cgrzq\" (UID: \"b36dad37-4975-4010-b93e-0ea5c932caab\") " pod="openshift-marketplace/community-operators-cgrzq" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.578005 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qvn2d" event={"ID":"be320286-1cef-4b6f-92a0-e9e66a34ad3e","Type":"ContainerStarted","Data":"0e4d9ce9196d166334eff839f93f7284e8d6e1ac149fef8368f329b11998e450"} Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.578632 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qvn2d" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.582652 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b36dad37-4975-4010-b93e-0ea5c932caab-catalog-content\") pod \"community-operators-cgrzq\" (UID: \"b36dad37-4975-4010-b93e-0ea5c932caab\") " pod="openshift-marketplace/community-operators-cgrzq" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.583136 4795 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b36dad37-4975-4010-b93e-0ea5c932caab-utilities\") pod \"community-operators-cgrzq\" (UID: \"b36dad37-4975-4010-b93e-0ea5c932caab\") " pod="openshift-marketplace/community-operators-cgrzq" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.583307 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:45 crc kubenswrapper[4795]: E1129 07:41:45.584028 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:46.084007001 +0000 UTC m=+152.059582851 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.602663 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t8rg9"] Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.603808 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t8rg9" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.606390 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t8rg9"] Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.611154 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hx8c6" event={"ID":"306cb351-6b97-406a-b3af-741e0d81e630","Type":"ContainerStarted","Data":"af931bbb7c2a04ed80d0c8732cbf75c788fd2ca4b7eff6700c770b6ec17cefc3"} Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.623502 4795 patch_prober.go:28] interesting pod/router-default-5444994796-692g9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:41:45 crc kubenswrapper[4795]: [-]has-synced failed: reason withheld Nov 29 07:41:45 crc kubenswrapper[4795]: [+]process-running ok Nov 29 07:41:45 crc kubenswrapper[4795]: healthz check failed Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.623560 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-692g9" podUID="fe1144c0-6804-4a78-bad8-3319ddb3c30c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.624442 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qvn2d" podStartSLOduration=125.624426758 podStartE2EDuration="2m5.624426758s" podCreationTimestamp="2025-11-29 07:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:45.623498522 +0000 UTC m=+151.599074312" watchObservedRunningTime="2025-11-29 
07:41:45.624426758 +0000 UTC m=+151.600002548" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.653918 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r76lc" event={"ID":"94527576-d5a0-4b6e-a3d2-3e1df7494d95","Type":"ContainerStarted","Data":"4a52080427bd3fa8f8aea06106c3b784815697cde8932afc441b3d0c7edc8453"} Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.684795 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.685064 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1461195-363d-49f5-b5c2-61b99f16ece2-utilities\") pod \"certified-operators-t8rg9\" (UID: \"e1461195-363d-49f5-b5c2-61b99f16ece2\") " pod="openshift-marketplace/certified-operators-t8rg9" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.685113 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b36dad37-4975-4010-b93e-0ea5c932caab-catalog-content\") pod \"community-operators-cgrzq\" (UID: \"b36dad37-4975-4010-b93e-0ea5c932caab\") " pod="openshift-marketplace/community-operators-cgrzq" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.685143 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b36dad37-4975-4010-b93e-0ea5c932caab-utilities\") pod \"community-operators-cgrzq\" (UID: \"b36dad37-4975-4010-b93e-0ea5c932caab\") " pod="openshift-marketplace/community-operators-cgrzq" Nov 29 
07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.685207 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfndp\" (UniqueName: \"kubernetes.io/projected/b36dad37-4975-4010-b93e-0ea5c932caab-kube-api-access-kfndp\") pod \"community-operators-cgrzq\" (UID: \"b36dad37-4975-4010-b93e-0ea5c932caab\") " pod="openshift-marketplace/community-operators-cgrzq" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.685234 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7swxq\" (UniqueName: \"kubernetes.io/projected/e1461195-363d-49f5-b5c2-61b99f16ece2-kube-api-access-7swxq\") pod \"certified-operators-t8rg9\" (UID: \"e1461195-363d-49f5-b5c2-61b99f16ece2\") " pod="openshift-marketplace/certified-operators-t8rg9" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.685259 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1461195-363d-49f5-b5c2-61b99f16ece2-catalog-content\") pod \"certified-operators-t8rg9\" (UID: \"e1461195-363d-49f5-b5c2-61b99f16ece2\") " pod="openshift-marketplace/certified-operators-t8rg9" Nov 29 07:41:45 crc kubenswrapper[4795]: E1129 07:41:45.685941 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:46.185926413 +0000 UTC m=+152.161502203 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.686389 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b36dad37-4975-4010-b93e-0ea5c932caab-catalog-content\") pod \"community-operators-cgrzq\" (UID: \"b36dad37-4975-4010-b93e-0ea5c932caab\") " pod="openshift-marketplace/community-operators-cgrzq" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.686634 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b36dad37-4975-4010-b93e-0ea5c932caab-utilities\") pod \"community-operators-cgrzq\" (UID: \"b36dad37-4975-4010-b93e-0ea5c932caab\") " pod="openshift-marketplace/community-operators-cgrzq" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.718701 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-lwxtb" event={"ID":"3ca85e73-a2e9-416e-9542-5ea8f2f3aea5","Type":"ContainerStarted","Data":"7fc5c0bbcec40a450c2509daa0d768028a1c737776e44445be6c32a0d746b7c7"} Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.720958 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfndp\" (UniqueName: \"kubernetes.io/projected/b36dad37-4975-4010-b93e-0ea5c932caab-kube-api-access-kfndp\") pod \"community-operators-cgrzq\" (UID: \"b36dad37-4975-4010-b93e-0ea5c932caab\") " pod="openshift-marketplace/community-operators-cgrzq" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 
07:41:45.734359 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d2wzd" event={"ID":"d79589e5-c434-4157-8cfd-a51e92aa0c2f","Type":"ContainerStarted","Data":"461a7dfe81ee6f5924d663b473f5350a8c545eb0b69dea88c32d76db80bc608a"} Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.746420 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-lwxtb" podStartSLOduration=125.746407469 podStartE2EDuration="2m5.746407469s" podCreationTimestamp="2025-11-29 07:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:45.744900817 +0000 UTC m=+151.720476607" watchObservedRunningTime="2025-11-29 07:41:45.746407469 +0000 UTC m=+151.721983259" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.759926 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-blw2q" event={"ID":"ecf9ad47-4004-45d4-84e6-986e63258092","Type":"ContainerStarted","Data":"6a6a6a6229daa1d6fa885a3b7863ce10691dc65686d66226d7757217b1add527"} Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.789223 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7swxq\" (UniqueName: \"kubernetes.io/projected/e1461195-363d-49f5-b5c2-61b99f16ece2-kube-api-access-7swxq\") pod \"certified-operators-t8rg9\" (UID: \"e1461195-363d-49f5-b5c2-61b99f16ece2\") " pod="openshift-marketplace/certified-operators-t8rg9" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.789270 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1461195-363d-49f5-b5c2-61b99f16ece2-catalog-content\") pod \"certified-operators-t8rg9\" (UID: \"e1461195-363d-49f5-b5c2-61b99f16ece2\") " 
pod="openshift-marketplace/certified-operators-t8rg9" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.789315 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1461195-363d-49f5-b5c2-61b99f16ece2-utilities\") pod \"certified-operators-t8rg9\" (UID: \"e1461195-363d-49f5-b5c2-61b99f16ece2\") " pod="openshift-marketplace/certified-operators-t8rg9" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.789415 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:45 crc kubenswrapper[4795]: E1129 07:41:45.789684 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:46.289673405 +0000 UTC m=+152.265249185 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.791196 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1461195-363d-49f5-b5c2-61b99f16ece2-catalog-content\") pod \"certified-operators-t8rg9\" (UID: \"e1461195-363d-49f5-b5c2-61b99f16ece2\") " pod="openshift-marketplace/certified-operators-t8rg9" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.801174 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5g9td" event={"ID":"3afcac64-b24d-4150-a997-123947d36f3a","Type":"ContainerStarted","Data":"0b853ab9384de8201318d9b1fa11f4a156827472f27271d98bedb0286245491d"} Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.813794 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7swxq\" (UniqueName: \"kubernetes.io/projected/e1461195-363d-49f5-b5c2-61b99f16ece2-kube-api-access-7swxq\") pod \"certified-operators-t8rg9\" (UID: \"e1461195-363d-49f5-b5c2-61b99f16ece2\") " pod="openshift-marketplace/certified-operators-t8rg9" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.821115 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1461195-363d-49f5-b5c2-61b99f16ece2-utilities\") pod \"certified-operators-t8rg9\" (UID: \"e1461195-363d-49f5-b5c2-61b99f16ece2\") " pod="openshift-marketplace/certified-operators-t8rg9" Nov 29 07:41:45 crc 
kubenswrapper[4795]: I1129 07:41:45.822292 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hx8c6" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.893091 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:45 crc kubenswrapper[4795]: E1129 07:41:45.893439 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:46.393424088 +0000 UTC m=+152.368999878 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.922126 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-lgd6b" event={"ID":"b6996662-d230-4299-b913-b6bc38c50ef5","Type":"ContainerStarted","Data":"7b528445fa36ddbba6829116b4b5da5a0fa2f4f934c890c871a5b9d09954718f"} Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.937713 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4jk72" 
event={"ID":"acfaa12b-166f-4c03-a208-6ed705af199d","Type":"ContainerStarted","Data":"15f046a9900553098c8901bda6bb0ffb2cf3299fb948e7de2c7cd62b7e2098a4"} Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.964898 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vx8d5"] Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.966227 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-lgd6b" podStartSLOduration=126.966209577 podStartE2EDuration="2m6.966209577s" podCreationTimestamp="2025-11-29 07:39:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:45.961968119 +0000 UTC m=+151.937543919" watchObservedRunningTime="2025-11-29 07:41:45.966209577 +0000 UTC m=+151.941785367" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.985896 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8rg9" Nov 29 07:41:45 crc kubenswrapper[4795]: I1129 07:41:45.995764 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:45 crc kubenswrapper[4795]: E1129 07:41:45.996221 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:46.496205373 +0000 UTC m=+152.471781163 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:46 crc kubenswrapper[4795]: I1129 07:41:46.014909 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cgrzq" Nov 29 07:41:46 crc kubenswrapper[4795]: I1129 07:41:46.043045 4795 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-l2khx container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Nov 29 07:41:46 crc kubenswrapper[4795]: I1129 07:41:46.043090 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-l2khx" podUID="5550b70a-4101-4b56-8f88-c6339baaf188" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Nov 29 07:41:46 crc kubenswrapper[4795]: I1129 07:41:46.096426 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:46 crc kubenswrapper[4795]: E1129 07:41:46.096971 4795 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:46.596948552 +0000 UTC m=+152.572524352 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:46 crc kubenswrapper[4795]: I1129 07:41:46.097292 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:46 crc kubenswrapper[4795]: E1129 07:41:46.101239 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:46.601218731 +0000 UTC m=+152.576794551 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:46 crc kubenswrapper[4795]: I1129 07:41:46.199406 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:46 crc kubenswrapper[4795]: E1129 07:41:46.205422 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:46.705387895 +0000 UTC m=+152.680963695 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:46 crc kubenswrapper[4795]: I1129 07:41:46.244957 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h8pp6"] Nov 29 07:41:46 crc kubenswrapper[4795]: I1129 07:41:46.321275 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:46 crc kubenswrapper[4795]: E1129 07:41:46.321644 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:46.821630266 +0000 UTC m=+152.797206056 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:46 crc kubenswrapper[4795]: I1129 07:41:46.429611 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:46 crc kubenswrapper[4795]: E1129 07:41:46.430481 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:46.93046079 +0000 UTC m=+152.906036580 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:46 crc kubenswrapper[4795]: I1129 07:41:46.532312 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:46 crc kubenswrapper[4795]: E1129 07:41:46.532693 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:47.03268267 +0000 UTC m=+153.008258460 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:46 crc kubenswrapper[4795]: I1129 07:41:46.580034 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t8rg9"] Nov 29 07:41:46 crc kubenswrapper[4795]: I1129 07:41:46.611889 4795 patch_prober.go:28] interesting pod/router-default-5444994796-692g9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:41:46 crc kubenswrapper[4795]: [-]has-synced failed: reason withheld Nov 29 07:41:46 crc kubenswrapper[4795]: [+]process-running ok Nov 29 07:41:46 crc kubenswrapper[4795]: healthz check failed Nov 29 07:41:46 crc kubenswrapper[4795]: I1129 07:41:46.611938 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-692g9" podUID="fe1144c0-6804-4a78-bad8-3319ddb3c30c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:41:46 crc kubenswrapper[4795]: I1129 07:41:46.636116 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:46 crc kubenswrapper[4795]: I1129 07:41:46.636336 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:41:46 crc kubenswrapper[4795]: I1129 07:41:46.636370 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:41:46 crc kubenswrapper[4795]: I1129 07:41:46.636419 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:41:46 crc kubenswrapper[4795]: I1129 07:41:46.636467 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:41:46 crc kubenswrapper[4795]: E1129 07:41:46.636611 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:47.136571757 +0000 UTC m=+153.112147547 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:46 crc kubenswrapper[4795]: I1129 07:41:46.638542 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:41:46 crc kubenswrapper[4795]: I1129 07:41:46.665621 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:41:46 crc kubenswrapper[4795]: I1129 07:41:46.665714 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:41:46 crc kubenswrapper[4795]: I1129 07:41:46.669397 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:41:46 crc kubenswrapper[4795]: I1129 07:41:46.678242 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:41:46 crc kubenswrapper[4795]: I1129 07:41:46.689024 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:41:46 crc kubenswrapper[4795]: I1129 07:41:46.693844 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:41:46 crc kubenswrapper[4795]: I1129 07:41:46.739271 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:46 crc kubenswrapper[4795]: E1129 07:41:46.739536 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:47.239524257 +0000 UTC m=+153.215100047 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:46 crc kubenswrapper[4795]: I1129 07:41:46.760370 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cgrzq"] Nov 29 07:41:46 crc kubenswrapper[4795]: I1129 07:41:46.839905 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:46 crc kubenswrapper[4795]: E1129 07:41:46.840127 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:47.340104071 +0000 UTC m=+153.315679871 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:46 crc kubenswrapper[4795]: I1129 07:41:46.840440 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:46 crc kubenswrapper[4795]: E1129 07:41:46.840851 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:47.340841752 +0000 UTC m=+153.316417542 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:46 crc kubenswrapper[4795]: I1129 07:41:46.848832 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:41:46 crc kubenswrapper[4795]: I1129 07:41:46.941685 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:46 crc kubenswrapper[4795]: E1129 07:41:46.942374 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:47.442346272 +0000 UTC m=+153.417922062 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.043029 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:47 crc kubenswrapper[4795]: E1129 07:41:47.044611 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:47.54450028 +0000 UTC m=+153.520076120 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.065934 4795 generic.go:334] "Generic (PLEG): container finished" podID="4f25efaa-5792-4a51-ba83-e8733af29fdf" containerID="5e2c76259e18576139e5a2976c5246e5f5e801f18ac9837de36eef93473c7277" exitCode=0 Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.066030 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-dtj9t" event={"ID":"4f25efaa-5792-4a51-ba83-e8733af29fdf","Type":"ContainerDied","Data":"5e2c76259e18576139e5a2976c5246e5f5e801f18ac9837de36eef93473c7277"} Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.082773 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-blw2q" event={"ID":"ecf9ad47-4004-45d4-84e6-986e63258092","Type":"ContainerStarted","Data":"cd8984ad76d2ebb0b5c1b455219f6dfe32cf4af4db5149562340a746e56bc372"} Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.094421 4795 generic.go:334] "Generic (PLEG): container finished" podID="a04ecf63-b125-4a0e-9869-403c9cca5648" containerID="639d98257cba6e6ec2309b46dd4dcb61798babcd021c0856eb4ff5e297830a3c" exitCode=0 Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.094482 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h8pp6" event={"ID":"a04ecf63-b125-4a0e-9869-403c9cca5648","Type":"ContainerDied","Data":"639d98257cba6e6ec2309b46dd4dcb61798babcd021c0856eb4ff5e297830a3c"} Nov 29 07:41:47 crc 
kubenswrapper[4795]: I1129 07:41:47.094503 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h8pp6" event={"ID":"a04ecf63-b125-4a0e-9869-403c9cca5648","Type":"ContainerStarted","Data":"2060d5fa0c4006d668aec5b7d7ca234de16309d36552db5f2db0feb93fc0991b"} Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.097029 4795 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.107051 4795 generic.go:334] "Generic (PLEG): container finished" podID="0deb15dc-57ff-4c83-8e81-ea7ebfda038d" containerID="26d7c6cf7ad98ba57174b6e4dfe85d7c7581dc10c57022b2683c79bc887e772c" exitCode=0 Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.107117 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vx8d5" event={"ID":"0deb15dc-57ff-4c83-8e81-ea7ebfda038d","Type":"ContainerDied","Data":"26d7c6cf7ad98ba57174b6e4dfe85d7c7581dc10c57022b2683c79bc887e772c"} Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.107142 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vx8d5" event={"ID":"0deb15dc-57ff-4c83-8e81-ea7ebfda038d","Type":"ContainerStarted","Data":"1bf46239af51a19271ecea1d13ef3235d767e6b4225e906ac6e7e8dd561ce1f9"} Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.140340 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-blw2q" podStartSLOduration=127.14030902 podStartE2EDuration="2m7.14030902s" podCreationTimestamp="2025-11-29 07:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:47.129652573 +0000 UTC m=+153.105228353" watchObservedRunningTime="2025-11-29 07:41:47.14030902 +0000 UTC m=+153.115884810" Nov 29 07:41:47 crc 
kubenswrapper[4795]: I1129 07:41:47.146690 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:47 crc kubenswrapper[4795]: E1129 07:41:47.147126 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:47.64710854 +0000 UTC m=+153.622684330 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.164794 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ddtz8" event={"ID":"7390eae4-5ab6-4606-90d5-e5468d745208","Type":"ContainerStarted","Data":"2838b5223ebf542e80948af6795d1ceac058a226332b47bb0ed8e4fb9d577659"} Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.165527 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-ddtz8" Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.179050 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8bzbm"] Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.203938 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8bzbm" Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.206086 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5g9td" event={"ID":"3afcac64-b24d-4150-a997-123947d36f3a","Type":"ContainerStarted","Data":"afa1776518cfbcc67865f3ec314e9ba186b52a711dbbf30b982022c0128d3140"} Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.206124 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgrzq" event={"ID":"b36dad37-4975-4010-b93e-0ea5c932caab","Type":"ContainerStarted","Data":"faa52db34fcdb4ff3f31af383fbd8a283796d8be470e6c709c9530470f010c7a"} Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.206137 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8bzbm"] Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.213380 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.245975 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6w84b" event={"ID":"3fa186eb-0f11-42e6-a1d4-2f42d1a7dadd","Type":"ContainerStarted","Data":"b203dd87b32e03aa5aec075d2678feae0e5013d8d4b928b7de6f43621922fcc7"} Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.248150 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:47 crc kubenswrapper[4795]: E1129 07:41:47.251094 4795 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:47.751078168 +0000 UTC m=+153.726653958 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.252092 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jttv5" event={"ID":"f773d9ba-78f9-4b56-8e33-706ee34ad32a","Type":"ContainerStarted","Data":"a50ce5b0759ef63eb45a67c247031dcda8fc2e25d73a01a7875584a9382b817b"} Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.252381 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jttv5" Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.270501 4795 generic.go:334] "Generic (PLEG): container finished" podID="e1461195-363d-49f5-b5c2-61b99f16ece2" containerID="cde60f25cbe20ca4ba16e97201b8a5cdf54f5fe2bf7fbdf55941c2092f209b9c" exitCode=0 Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.271747 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8rg9" event={"ID":"e1461195-363d-49f5-b5c2-61b99f16ece2","Type":"ContainerDied","Data":"cde60f25cbe20ca4ba16e97201b8a5cdf54f5fe2bf7fbdf55941c2092f209b9c"} Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.271782 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-t8rg9" event={"ID":"e1461195-363d-49f5-b5c2-61b99f16ece2","Type":"ContainerStarted","Data":"19f03f7b2ef52a2d34ff0260d110528204c398ab5599d211c542a6ca9ca9145c"} Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.290521 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-ddtz8" podStartSLOduration=12.290499837 podStartE2EDuration="12.290499837s" podCreationTimestamp="2025-11-29 07:41:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:47.243221529 +0000 UTC m=+153.218797319" watchObservedRunningTime="2025-11-29 07:41:47.290499837 +0000 UTC m=+153.266075637" Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.300105 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-l2khx" Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.319499 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5g9td" podStartSLOduration=127.319484146 podStartE2EDuration="2m7.319484146s" podCreationTimestamp="2025-11-29 07:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:47.318351024 +0000 UTC m=+153.293926814" watchObservedRunningTime="2025-11-29 07:41:47.319484146 +0000 UTC m=+153.295059936" Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.374084 4795 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.374487 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.374917 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96ff138b-b30f-4a36-9c9c-76cf2c9e8e89-catalog-content\") pod \"redhat-marketplace-8bzbm\" (UID: \"96ff138b-b30f-4a36-9c9c-76cf2c9e8e89\") " pod="openshift-marketplace/redhat-marketplace-8bzbm" Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.375083 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96ff138b-b30f-4a36-9c9c-76cf2c9e8e89-utilities\") pod \"redhat-marketplace-8bzbm\" (UID: \"96ff138b-b30f-4a36-9c9c-76cf2c9e8e89\") " pod="openshift-marketplace/redhat-marketplace-8bzbm" Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.375116 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z98xv\" (UniqueName: \"kubernetes.io/projected/96ff138b-b30f-4a36-9c9c-76cf2c9e8e89-kube-api-access-z98xv\") pod \"redhat-marketplace-8bzbm\" (UID: \"96ff138b-b30f-4a36-9c9c-76cf2c9e8e89\") " pod="openshift-marketplace/redhat-marketplace-8bzbm" Nov 29 07:41:47 crc kubenswrapper[4795]: E1129 07:41:47.375976 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:41:47.87595713 +0000 UTC m=+153.851532930 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.397886 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6w84b" podStartSLOduration=127.397862391 podStartE2EDuration="2m7.397862391s" podCreationTimestamp="2025-11-29 07:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:47.394300301 +0000 UTC m=+153.369876091" watchObservedRunningTime="2025-11-29 07:41:47.397862391 +0000 UTC m=+153.373438201" Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.426048 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jttv5" podStartSLOduration=128.426032476 podStartE2EDuration="2m8.426032476s" podCreationTimestamp="2025-11-29 07:39:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:47.422874558 +0000 UTC m=+153.398450348" watchObservedRunningTime="2025-11-29 07:41:47.426032476 +0000 UTC m=+153.401608266" Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.476798 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: 
\"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.477241 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96ff138b-b30f-4a36-9c9c-76cf2c9e8e89-utilities\") pod \"redhat-marketplace-8bzbm\" (UID: \"96ff138b-b30f-4a36-9c9c-76cf2c9e8e89\") " pod="openshift-marketplace/redhat-marketplace-8bzbm" Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.477298 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z98xv\" (UniqueName: \"kubernetes.io/projected/96ff138b-b30f-4a36-9c9c-76cf2c9e8e89-kube-api-access-z98xv\") pod \"redhat-marketplace-8bzbm\" (UID: \"96ff138b-b30f-4a36-9c9c-76cf2c9e8e89\") " pod="openshift-marketplace/redhat-marketplace-8bzbm" Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.477697 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96ff138b-b30f-4a36-9c9c-76cf2c9e8e89-catalog-content\") pod \"redhat-marketplace-8bzbm\" (UID: \"96ff138b-b30f-4a36-9c9c-76cf2c9e8e89\") " pod="openshift-marketplace/redhat-marketplace-8bzbm" Nov 29 07:41:47 crc kubenswrapper[4795]: E1129 07:41:47.483001 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:41:47.982983844 +0000 UTC m=+153.958559634 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lm9wt" (UID: "04596222-6779-478e-96cd-3aa99a923aa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.497113 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96ff138b-b30f-4a36-9c9c-76cf2c9e8e89-catalog-content\") pod \"redhat-marketplace-8bzbm\" (UID: \"96ff138b-b30f-4a36-9c9c-76cf2c9e8e89\") " pod="openshift-marketplace/redhat-marketplace-8bzbm" Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.497747 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96ff138b-b30f-4a36-9c9c-76cf2c9e8e89-utilities\") pod \"redhat-marketplace-8bzbm\" (UID: \"96ff138b-b30f-4a36-9c9c-76cf2c9e8e89\") " pod="openshift-marketplace/redhat-marketplace-8bzbm" Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.509137 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z98xv\" (UniqueName: \"kubernetes.io/projected/96ff138b-b30f-4a36-9c9c-76cf2c9e8e89-kube-api-access-z98xv\") pod \"redhat-marketplace-8bzbm\" (UID: \"96ff138b-b30f-4a36-9c9c-76cf2c9e8e89\") " pod="openshift-marketplace/redhat-marketplace-8bzbm" Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.517174 4795 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-29T07:41:47.374121329Z","Handler":null,"Name":""} Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.534164 4795 
csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.534402 4795 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.551041 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8bzbm" Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.558350 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tl6cb"] Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.559390 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tl6cb" Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.578765 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.579188 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tl6cb"] Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.621487 4795 patch_prober.go:28] interesting pod/router-default-5444994796-692g9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:41:47 crc kubenswrapper[4795]: [-]has-synced failed: reason withheld Nov 29 07:41:47 crc kubenswrapper[4795]: [+]process-running ok 
Nov 29 07:41:47 crc kubenswrapper[4795]: healthz check failed Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.621806 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-692g9" podUID="fe1144c0-6804-4a78-bad8-3319ddb3c30c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.640822 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.679687 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc49b5c4-ec36-4051-b975-25f1e872a6f7-utilities\") pod \"redhat-marketplace-tl6cb\" (UID: \"cc49b5c4-ec36-4051-b975-25f1e872a6f7\") " pod="openshift-marketplace/redhat-marketplace-tl6cb" Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.681005 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc49b5c4-ec36-4051-b975-25f1e872a6f7-catalog-content\") pod \"redhat-marketplace-tl6cb\" (UID: \"cc49b5c4-ec36-4051-b975-25f1e872a6f7\") " pod="openshift-marketplace/redhat-marketplace-tl6cb" Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.681164 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.681263 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp6pp\" (UniqueName: \"kubernetes.io/projected/cc49b5c4-ec36-4051-b975-25f1e872a6f7-kube-api-access-jp6pp\") pod \"redhat-marketplace-tl6cb\" (UID: \"cc49b5c4-ec36-4051-b975-25f1e872a6f7\") " pod="openshift-marketplace/redhat-marketplace-tl6cb" Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.690329 4795 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.690549 4795 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.752601 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-lgd6b" Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.752650 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-lgd6b" Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.771243 4795 patch_prober.go:28] interesting pod/apiserver-76f77b778f-lgd6b container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 29 07:41:47 crc kubenswrapper[4795]: [+]log ok Nov 29 07:41:47 crc kubenswrapper[4795]: [+]etcd ok Nov 29 07:41:47 crc kubenswrapper[4795]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 29 07:41:47 crc kubenswrapper[4795]: [+]poststarthook/generic-apiserver-start-informers ok Nov 29 07:41:47 crc kubenswrapper[4795]: [+]poststarthook/max-in-flight-filter ok Nov 29 07:41:47 crc kubenswrapper[4795]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 29 07:41:47 crc kubenswrapper[4795]: [+]poststarthook/image.openshift.io-apiserver-caches ok Nov 29 07:41:47 crc kubenswrapper[4795]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Nov 29 07:41:47 crc kubenswrapper[4795]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Nov 29 07:41:47 crc kubenswrapper[4795]: [+]poststarthook/project.openshift.io-projectcache ok Nov 29 07:41:47 crc kubenswrapper[4795]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Nov 29 07:41:47 crc kubenswrapper[4795]: [-]poststarthook/openshift.io-startinformers failed: reason withheld Nov 29 07:41:47 crc kubenswrapper[4795]: [+]poststarthook/openshift.io-restmapperupdater ok Nov 29 07:41:47 crc kubenswrapper[4795]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 29 07:41:47 crc kubenswrapper[4795]: livez check failed Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.771307 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-lgd6b" podUID="b6996662-d230-4299-b913-b6bc38c50ef5" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.781909 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/cc49b5c4-ec36-4051-b975-25f1e872a6f7-utilities\") pod \"redhat-marketplace-tl6cb\" (UID: \"cc49b5c4-ec36-4051-b975-25f1e872a6f7\") " pod="openshift-marketplace/redhat-marketplace-tl6cb" Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.782099 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc49b5c4-ec36-4051-b975-25f1e872a6f7-catalog-content\") pod \"redhat-marketplace-tl6cb\" (UID: \"cc49b5c4-ec36-4051-b975-25f1e872a6f7\") " pod="openshift-marketplace/redhat-marketplace-tl6cb" Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.782233 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp6pp\" (UniqueName: \"kubernetes.io/projected/cc49b5c4-ec36-4051-b975-25f1e872a6f7-kube-api-access-jp6pp\") pod \"redhat-marketplace-tl6cb\" (UID: \"cc49b5c4-ec36-4051-b975-25f1e872a6f7\") " pod="openshift-marketplace/redhat-marketplace-tl6cb" Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.782445 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc49b5c4-ec36-4051-b975-25f1e872a6f7-utilities\") pod \"redhat-marketplace-tl6cb\" (UID: \"cc49b5c4-ec36-4051-b975-25f1e872a6f7\") " pod="openshift-marketplace/redhat-marketplace-tl6cb" Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.782843 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc49b5c4-ec36-4051-b975-25f1e872a6f7-catalog-content\") pod \"redhat-marketplace-tl6cb\" (UID: \"cc49b5c4-ec36-4051-b975-25f1e872a6f7\") " pod="openshift-marketplace/redhat-marketplace-tl6cb" Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.804720 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lm9wt\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.828840 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp6pp\" (UniqueName: \"kubernetes.io/projected/cc49b5c4-ec36-4051-b975-25f1e872a6f7-kube-api-access-jp6pp\") pod \"redhat-marketplace-tl6cb\" (UID: \"cc49b5c4-ec36-4051-b975-25f1e872a6f7\") " pod="openshift-marketplace/redhat-marketplace-tl6cb" Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.831732 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6w84b" Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.832441 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6w84b" Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.849297 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6w84b" Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.920855 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.957606 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8bzbm"] Nov 29 07:41:47 crc kubenswrapper[4795]: I1129 07:41:47.964837 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tl6cb" Nov 29 07:41:47 crc kubenswrapper[4795]: W1129 07:41:47.972081 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96ff138b_b30f_4a36_9c9c_76cf2c9e8e89.slice/crio-ca97bbc1c53a21185c86c199b52df2571c303d0468f4ec18e347028bc614c41f WatchSource:0}: Error finding container ca97bbc1c53a21185c86c199b52df2571c303d0468f4ec18e347028bc614c41f: Status 404 returned error can't find the container with id ca97bbc1c53a21185c86c199b52df2571c303d0468f4ec18e347028bc614c41f Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.166750 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d4htg"] Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.167754 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d4htg" Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.170189 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.203367 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d4htg"] Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.210465 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lm9wt"] Nov 29 07:41:48 crc kubenswrapper[4795]: W1129 07:41:48.214784 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04596222_6779_478e_96cd_3aa99a923aa4.slice/crio-6a6c60f71e6c71d575c2c0201236875b99a1dcdcdecd104c4af5914393d78bb3 WatchSource:0}: Error finding container 6a6c60f71e6c71d575c2c0201236875b99a1dcdcdecd104c4af5914393d78bb3: Status 404 returned error can't find the container with id 
6a6c60f71e6c71d575c2c0201236875b99a1dcdcdecd104c4af5914393d78bb3 Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.245452 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tl6cb"] Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.270011 4795 patch_prober.go:28] interesting pod/downloads-7954f5f757-klnm7 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.270063 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-klnm7" podUID="892d0338-c59f-481e-8d70-3143d4954f38" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.270074 4795 patch_prober.go:28] interesting pod/downloads-7954f5f757-klnm7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.270134 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-klnm7" podUID="892d0338-c59f-481e-8d70-3143d4954f38" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.281915 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.282400 4795 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" event={"ID":"04596222-6779-478e-96cd-3aa99a923aa4","Type":"ContainerStarted","Data":"6a6c60f71e6c71d575c2c0201236875b99a1dcdcdecd104c4af5914393d78bb3"} Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.282830 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-w4g7w" Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.282996 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-w4g7w" Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.285109 4795 patch_prober.go:28] interesting pod/console-f9d7485db-w4g7w container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.19:8443/health\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.285173 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-w4g7w" podUID="9b02347e-bd6a-4a77-97d4-d8276d1b6167" containerName="console" probeResult="failure" output="Get \"https://10.217.0.19:8443/health\": dial tcp 10.217.0.19:8443: connect: connection refused" Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.285861 4795 generic.go:334] "Generic (PLEG): container finished" podID="b36dad37-4975-4010-b93e-0ea5c932caab" containerID="d8cdd411472f4ac1940b6d995f3b01d1491999de590ab94dcb85889f1b58149f" exitCode=0 Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.285887 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgrzq" event={"ID":"b36dad37-4975-4010-b93e-0ea5c932caab","Type":"ContainerDied","Data":"d8cdd411472f4ac1940b6d995f3b01d1491999de590ab94dcb85889f1b58149f"} Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.288390 4795 generic.go:334] "Generic (PLEG): container finished" 
podID="96ff138b-b30f-4a36-9c9c-76cf2c9e8e89" containerID="60c2de715ea55e2c78e75a0bd87f028677e22ba9e78e12e450593cdfbb817d03" exitCode=0 Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.288443 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8bzbm" event={"ID":"96ff138b-b30f-4a36-9c9c-76cf2c9e8e89","Type":"ContainerDied","Data":"60c2de715ea55e2c78e75a0bd87f028677e22ba9e78e12e450593cdfbb817d03"} Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.288464 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8bzbm" event={"ID":"96ff138b-b30f-4a36-9c9c-76cf2c9e8e89","Type":"ContainerStarted","Data":"ca97bbc1c53a21185c86c199b52df2571c303d0468f4ec18e347028bc614c41f"} Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.288569 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/097562dd-99cb-4451-ac96-c1cdfd8cc4f4-catalog-content\") pod \"redhat-operators-d4htg\" (UID: \"097562dd-99cb-4451-ac96-c1cdfd8cc4f4\") " pod="openshift-marketplace/redhat-operators-d4htg" Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.288622 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/097562dd-99cb-4451-ac96-c1cdfd8cc4f4-utilities\") pod \"redhat-operators-d4htg\" (UID: \"097562dd-99cb-4451-ac96-c1cdfd8cc4f4\") " pod="openshift-marketplace/redhat-operators-d4htg" Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.288672 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp5vh\" (UniqueName: \"kubernetes.io/projected/097562dd-99cb-4451-ac96-c1cdfd8cc4f4-kube-api-access-lp5vh\") pod \"redhat-operators-d4htg\" (UID: \"097562dd-99cb-4451-ac96-c1cdfd8cc4f4\") " 
pod="openshift-marketplace/redhat-operators-d4htg" Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.300347 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4jk72" event={"ID":"acfaa12b-166f-4c03-a208-6ed705af199d","Type":"ContainerStarted","Data":"ca2ff79e36e956df793bf2f8d37731ca825c4591e5df07b56a2e8a474d597d22"} Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.313285 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4jk72" event={"ID":"acfaa12b-166f-4c03-a208-6ed705af199d","Type":"ContainerStarted","Data":"2860fa5fd21f990e642cf5e5fec5db072ffbf71ceec7f2d3b3b7682f5eb6884c"} Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.313388 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4jk72" event={"ID":"acfaa12b-166f-4c03-a208-6ed705af199d","Type":"ContainerStarted","Data":"6e18245f8418c9edd4693a17d66bebbd767d8874b7e6526a4f6b6c875c90c3f0"} Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.313438 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"28ef597a6b7ae420731d53c73f5383f11e2ee67aedefeaf80f89e62bcc7aff27"} Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.313453 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"badbc72f8fe47bf11986c03afdfccbd0b92422364137450981e78c484c607df7"} Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.313465 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"3c7fd4e50c0b063fea8aef472b58f0065cf1a89d1a3e15667821cc058e1cd083"} Nov 29 07:41:48 crc kubenswrapper[4795]: W1129 07:41:48.313462 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc49b5c4_ec36_4051_b975_25f1e872a6f7.slice/crio-e5c5fbdb8ca78693e89201429e91538827eee78cc5974836f43bddbfbec30a1c WatchSource:0}: Error finding container e5c5fbdb8ca78693e89201429e91538827eee78cc5974836f43bddbfbec30a1c: Status 404 returned error can't find the container with id e5c5fbdb8ca78693e89201429e91538827eee78cc5974836f43bddbfbec30a1c Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.313477 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"4d843960a7e7783d49392b0095c700596505f753bdca0781cf0020d650798fb1"} Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.313540 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"c47c05f2ddb474111a76872aa16c13c6a5df2e9c6f9d9df69cae05b40eb9123a"} Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.313616 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"a74f8ae629d4b85d940602aaadd5617e8821abc182fe4bf6f7f8deba9b9c5c8a"} Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.335545 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jttv5" Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.336139 4795 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6w84b" Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.349246 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-4jk72" podStartSLOduration=12.349224825 podStartE2EDuration="12.349224825s" podCreationTimestamp="2025-11-29 07:41:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:48.345841641 +0000 UTC m=+154.321417451" watchObservedRunningTime="2025-11-29 07:41:48.349224825 +0000 UTC m=+154.324800625" Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.392428 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/097562dd-99cb-4451-ac96-c1cdfd8cc4f4-catalog-content\") pod \"redhat-operators-d4htg\" (UID: \"097562dd-99cb-4451-ac96-c1cdfd8cc4f4\") " pod="openshift-marketplace/redhat-operators-d4htg" Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.392477 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/097562dd-99cb-4451-ac96-c1cdfd8cc4f4-utilities\") pod \"redhat-operators-d4htg\" (UID: \"097562dd-99cb-4451-ac96-c1cdfd8cc4f4\") " pod="openshift-marketplace/redhat-operators-d4htg" Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.392647 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp5vh\" (UniqueName: \"kubernetes.io/projected/097562dd-99cb-4451-ac96-c1cdfd8cc4f4-kube-api-access-lp5vh\") pod \"redhat-operators-d4htg\" (UID: \"097562dd-99cb-4451-ac96-c1cdfd8cc4f4\") " pod="openshift-marketplace/redhat-operators-d4htg" Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.399294 4795 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/097562dd-99cb-4451-ac96-c1cdfd8cc4f4-catalog-content\") pod \"redhat-operators-d4htg\" (UID: \"097562dd-99cb-4451-ac96-c1cdfd8cc4f4\") " pod="openshift-marketplace/redhat-operators-d4htg" Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.400134 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/097562dd-99cb-4451-ac96-c1cdfd8cc4f4-utilities\") pod \"redhat-operators-d4htg\" (UID: \"097562dd-99cb-4451-ac96-c1cdfd8cc4f4\") " pod="openshift-marketplace/redhat-operators-d4htg" Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.444179 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp5vh\" (UniqueName: \"kubernetes.io/projected/097562dd-99cb-4451-ac96-c1cdfd8cc4f4-kube-api-access-lp5vh\") pod \"redhat-operators-d4htg\" (UID: \"097562dd-99cb-4451-ac96-c1cdfd8cc4f4\") " pod="openshift-marketplace/redhat-operators-d4htg" Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.496296 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d4htg" Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.568814 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tr5zq"] Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.574162 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tr5zq" Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.612344 4795 patch_prober.go:28] interesting pod/router-default-5444994796-692g9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:41:48 crc kubenswrapper[4795]: [-]has-synced failed: reason withheld Nov 29 07:41:48 crc kubenswrapper[4795]: [+]process-running ok Nov 29 07:41:48 crc kubenswrapper[4795]: healthz check failed Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.612392 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-692g9" podUID="fe1144c0-6804-4a78-bad8-3319ddb3c30c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.622167 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tr5zq"] Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.708884 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0cc4a28-2815-4b61-9893-3cf1b256872a-catalog-content\") pod \"redhat-operators-tr5zq\" (UID: \"e0cc4a28-2815-4b61-9893-3cf1b256872a\") " pod="openshift-marketplace/redhat-operators-tr5zq" Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.709325 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbn9z\" (UniqueName: \"kubernetes.io/projected/e0cc4a28-2815-4b61-9893-3cf1b256872a-kube-api-access-xbn9z\") pod \"redhat-operators-tr5zq\" (UID: \"e0cc4a28-2815-4b61-9893-3cf1b256872a\") " pod="openshift-marketplace/redhat-operators-tr5zq" Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.709417 4795 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0cc4a28-2815-4b61-9893-3cf1b256872a-utilities\") pod \"redhat-operators-tr5zq\" (UID: \"e0cc4a28-2815-4b61-9893-3cf1b256872a\") " pod="openshift-marketplace/redhat-operators-tr5zq" Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.794099 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-dtj9t" Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.810513 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0cc4a28-2815-4b61-9893-3cf1b256872a-catalog-content\") pod \"redhat-operators-tr5zq\" (UID: \"e0cc4a28-2815-4b61-9893-3cf1b256872a\") " pod="openshift-marketplace/redhat-operators-tr5zq" Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.810615 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbn9z\" (UniqueName: \"kubernetes.io/projected/e0cc4a28-2815-4b61-9893-3cf1b256872a-kube-api-access-xbn9z\") pod \"redhat-operators-tr5zq\" (UID: \"e0cc4a28-2815-4b61-9893-3cf1b256872a\") " pod="openshift-marketplace/redhat-operators-tr5zq" Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.810667 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0cc4a28-2815-4b61-9893-3cf1b256872a-utilities\") pod \"redhat-operators-tr5zq\" (UID: \"e0cc4a28-2815-4b61-9893-3cf1b256872a\") " pod="openshift-marketplace/redhat-operators-tr5zq" Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.811673 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0cc4a28-2815-4b61-9893-3cf1b256872a-utilities\") pod \"redhat-operators-tr5zq\" (UID: 
\"e0cc4a28-2815-4b61-9893-3cf1b256872a\") " pod="openshift-marketplace/redhat-operators-tr5zq" Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.814284 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0cc4a28-2815-4b61-9893-3cf1b256872a-catalog-content\") pod \"redhat-operators-tr5zq\" (UID: \"e0cc4a28-2815-4b61-9893-3cf1b256872a\") " pod="openshift-marketplace/redhat-operators-tr5zq" Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.835994 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbn9z\" (UniqueName: \"kubernetes.io/projected/e0cc4a28-2815-4b61-9893-3cf1b256872a-kube-api-access-xbn9z\") pod \"redhat-operators-tr5zq\" (UID: \"e0cc4a28-2815-4b61-9893-3cf1b256872a\") " pod="openshift-marketplace/redhat-operators-tr5zq" Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.911131 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f25efaa-5792-4a51-ba83-e8733af29fdf-config-volume\") pod \"4f25efaa-5792-4a51-ba83-e8733af29fdf\" (UID: \"4f25efaa-5792-4a51-ba83-e8733af29fdf\") " Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.911194 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4f25efaa-5792-4a51-ba83-e8733af29fdf-secret-volume\") pod \"4f25efaa-5792-4a51-ba83-e8733af29fdf\" (UID: \"4f25efaa-5792-4a51-ba83-e8733af29fdf\") " Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.911232 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjhhf\" (UniqueName: \"kubernetes.io/projected/4f25efaa-5792-4a51-ba83-e8733af29fdf-kube-api-access-bjhhf\") pod \"4f25efaa-5792-4a51-ba83-e8733af29fdf\" (UID: \"4f25efaa-5792-4a51-ba83-e8733af29fdf\") " Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 
07:41:48.911900 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f25efaa-5792-4a51-ba83-e8733af29fdf-config-volume" (OuterVolumeSpecName: "config-volume") pod "4f25efaa-5792-4a51-ba83-e8733af29fdf" (UID: "4f25efaa-5792-4a51-ba83-e8733af29fdf"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.920608 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f25efaa-5792-4a51-ba83-e8733af29fdf-kube-api-access-bjhhf" (OuterVolumeSpecName: "kube-api-access-bjhhf") pod "4f25efaa-5792-4a51-ba83-e8733af29fdf" (UID: "4f25efaa-5792-4a51-ba83-e8733af29fdf"). InnerVolumeSpecName "kube-api-access-bjhhf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.920946 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f25efaa-5792-4a51-ba83-e8733af29fdf-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4f25efaa-5792-4a51-ba83-e8733af29fdf" (UID: "4f25efaa-5792-4a51-ba83-e8733af29fdf"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.996221 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 29 07:41:48 crc kubenswrapper[4795]: E1129 07:41:48.996623 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f25efaa-5792-4a51-ba83-e8733af29fdf" containerName="collect-profiles" Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.996751 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f25efaa-5792-4a51-ba83-e8733af29fdf" containerName="collect-profiles" Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.996921 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f25efaa-5792-4a51-ba83-e8733af29fdf" containerName="collect-profiles" Nov 29 07:41:48 crc kubenswrapper[4795]: I1129 07:41:48.997359 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 07:41:49 crc kubenswrapper[4795]: I1129 07:41:49.005933 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 29 07:41:49 crc kubenswrapper[4795]: I1129 07:41:49.006395 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 29 07:41:49 crc kubenswrapper[4795]: I1129 07:41:49.007770 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 29 07:41:49 crc kubenswrapper[4795]: I1129 07:41:49.020393 4795 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f25efaa-5792-4a51-ba83-e8733af29fdf-config-volume\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:49 crc kubenswrapper[4795]: I1129 07:41:49.020432 4795 reconciler_common.go:293] "Volume detached for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/4f25efaa-5792-4a51-ba83-e8733af29fdf-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:49 crc kubenswrapper[4795]: I1129 07:41:49.020444 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjhhf\" (UniqueName: \"kubernetes.io/projected/4f25efaa-5792-4a51-ba83-e8733af29fdf-kube-api-access-bjhhf\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:49 crc kubenswrapper[4795]: I1129 07:41:49.021955 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tr5zq" Nov 29 07:41:49 crc kubenswrapper[4795]: I1129 07:41:49.070341 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d4htg"] Nov 29 07:41:49 crc kubenswrapper[4795]: W1129 07:41:49.081191 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod097562dd_99cb_4451_ac96_c1cdfd8cc4f4.slice/crio-40ef80d60fd17bccfefb9b5b91dcb12a71f324459f3aaa9ffd40f7de31330187 WatchSource:0}: Error finding container 40ef80d60fd17bccfefb9b5b91dcb12a71f324459f3aaa9ffd40f7de31330187: Status 404 returned error can't find the container with id 40ef80d60fd17bccfefb9b5b91dcb12a71f324459f3aaa9ffd40f7de31330187 Nov 29 07:41:49 crc kubenswrapper[4795]: I1129 07:41:49.121238 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6623fff6-cf51-4f70-a3c7-9dab9c329c5c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6623fff6-cf51-4f70-a3c7-9dab9c329c5c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 07:41:49 crc kubenswrapper[4795]: I1129 07:41:49.121579 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/6623fff6-cf51-4f70-a3c7-9dab9c329c5c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6623fff6-cf51-4f70-a3c7-9dab9c329c5c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 07:41:49 crc kubenswrapper[4795]: I1129 07:41:49.222346 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6623fff6-cf51-4f70-a3c7-9dab9c329c5c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6623fff6-cf51-4f70-a3c7-9dab9c329c5c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 07:41:49 crc kubenswrapper[4795]: I1129 07:41:49.222943 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6623fff6-cf51-4f70-a3c7-9dab9c329c5c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6623fff6-cf51-4f70-a3c7-9dab9c329c5c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 07:41:49 crc kubenswrapper[4795]: I1129 07:41:49.223034 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6623fff6-cf51-4f70-a3c7-9dab9c329c5c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6623fff6-cf51-4f70-a3c7-9dab9c329c5c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 07:41:49 crc kubenswrapper[4795]: I1129 07:41:49.249255 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6623fff6-cf51-4f70-a3c7-9dab9c329c5c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6623fff6-cf51-4f70-a3c7-9dab9c329c5c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 07:41:49 crc kubenswrapper[4795]: I1129 07:41:49.342189 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 07:41:49 crc kubenswrapper[4795]: I1129 07:41:49.384407 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-dtj9t" event={"ID":"4f25efaa-5792-4a51-ba83-e8733af29fdf","Type":"ContainerDied","Data":"bd969ee68502067f3fe216cec41c3eb14e71300d3381eb3b3d8b3b1f3d9dfc09"} Nov 29 07:41:49 crc kubenswrapper[4795]: I1129 07:41:49.384452 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd969ee68502067f3fe216cec41c3eb14e71300d3381eb3b3d8b3b1f3d9dfc09" Nov 29 07:41:49 crc kubenswrapper[4795]: I1129 07:41:49.384509 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-dtj9t" Nov 29 07:41:49 crc kubenswrapper[4795]: I1129 07:41:49.413333 4795 generic.go:334] "Generic (PLEG): container finished" podID="cc49b5c4-ec36-4051-b975-25f1e872a6f7" containerID="e10df092c1261177eeb72db701ad9830ccfa4a19a01134d8bc4e5593bb18f573" exitCode=0 Nov 29 07:41:49 crc kubenswrapper[4795]: I1129 07:41:49.413648 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tl6cb" event={"ID":"cc49b5c4-ec36-4051-b975-25f1e872a6f7","Type":"ContainerDied","Data":"e10df092c1261177eeb72db701ad9830ccfa4a19a01134d8bc4e5593bb18f573"} Nov 29 07:41:49 crc kubenswrapper[4795]: I1129 07:41:49.413676 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tl6cb" event={"ID":"cc49b5c4-ec36-4051-b975-25f1e872a6f7","Type":"ContainerStarted","Data":"e5c5fbdb8ca78693e89201429e91538827eee78cc5974836f43bddbfbec30a1c"} Nov 29 07:41:49 crc kubenswrapper[4795]: I1129 07:41:49.504880 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" 
event={"ID":"04596222-6779-478e-96cd-3aa99a923aa4","Type":"ContainerStarted","Data":"6749a4f3e8655d1adec49be5b226c66b6661df08001a84fb4b426da7f21249e9"} Nov 29 07:41:49 crc kubenswrapper[4795]: I1129 07:41:49.505007 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:41:49 crc kubenswrapper[4795]: I1129 07:41:49.526403 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4htg" event={"ID":"097562dd-99cb-4451-ac96-c1cdfd8cc4f4","Type":"ContainerStarted","Data":"eefb0c0e0099912b753a70b90afd3455e1ece7e91e1d7b45424b832d30c0175f"} Nov 29 07:41:49 crc kubenswrapper[4795]: I1129 07:41:49.526459 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4htg" event={"ID":"097562dd-99cb-4451-ac96-c1cdfd8cc4f4","Type":"ContainerStarted","Data":"40ef80d60fd17bccfefb9b5b91dcb12a71f324459f3aaa9ffd40f7de31330187"} Nov 29 07:41:49 crc kubenswrapper[4795]: I1129 07:41:49.537151 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:41:49 crc kubenswrapper[4795]: I1129 07:41:49.539466 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" podStartSLOduration=129.539451309 podStartE2EDuration="2m9.539451309s" podCreationTimestamp="2025-11-29 07:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:49.536334692 +0000 UTC m=+155.511910482" watchObservedRunningTime="2025-11-29 07:41:49.539451309 +0000 UTC m=+155.515027099" Nov 29 07:41:49 crc kubenswrapper[4795]: I1129 07:41:49.597837 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-692g9" Nov 29 07:41:49 crc 
kubenswrapper[4795]: I1129 07:41:49.611068 4795 patch_prober.go:28] interesting pod/router-default-5444994796-692g9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:41:49 crc kubenswrapper[4795]: [-]has-synced failed: reason withheld Nov 29 07:41:49 crc kubenswrapper[4795]: [+]process-running ok Nov 29 07:41:49 crc kubenswrapper[4795]: healthz check failed Nov 29 07:41:49 crc kubenswrapper[4795]: I1129 07:41:49.611130 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-692g9" podUID="fe1144c0-6804-4a78-bad8-3319ddb3c30c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:41:49 crc kubenswrapper[4795]: I1129 07:41:49.716987 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tr5zq"] Nov 29 07:41:49 crc kubenswrapper[4795]: I1129 07:41:49.859621 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 29 07:41:49 crc kubenswrapper[4795]: W1129 07:41:49.880528 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod6623fff6_cf51_4f70_a3c7_9dab9c329c5c.slice/crio-9e0bb95bcbc585796538a3dbe34ede2fe6f1ac7fd098363417a87154a3ad5c19 WatchSource:0}: Error finding container 9e0bb95bcbc585796538a3dbe34ede2fe6f1ac7fd098363417a87154a3ad5c19: Status 404 returned error can't find the container with id 9e0bb95bcbc585796538a3dbe34ede2fe6f1ac7fd098363417a87154a3ad5c19 Nov 29 07:41:50 crc kubenswrapper[4795]: I1129 07:41:50.544360 4795 generic.go:334] "Generic (PLEG): container finished" podID="e0cc4a28-2815-4b61-9893-3cf1b256872a" containerID="aca7500b649372d8fdb01dff8c1b882add00557b919da0a6f8486b579ebc988b" exitCode=0 Nov 29 07:41:50 crc kubenswrapper[4795]: I1129 07:41:50.544698 4795 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tr5zq" event={"ID":"e0cc4a28-2815-4b61-9893-3cf1b256872a","Type":"ContainerDied","Data":"aca7500b649372d8fdb01dff8c1b882add00557b919da0a6f8486b579ebc988b"} Nov 29 07:41:50 crc kubenswrapper[4795]: I1129 07:41:50.544727 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tr5zq" event={"ID":"e0cc4a28-2815-4b61-9893-3cf1b256872a","Type":"ContainerStarted","Data":"e544139d96ec5a0c2f0ba6c092f4b5762fbc164e63938559ae52313dc7c3c95c"} Nov 29 07:41:50 crc kubenswrapper[4795]: I1129 07:41:50.556233 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"6623fff6-cf51-4f70-a3c7-9dab9c329c5c","Type":"ContainerStarted","Data":"21b6d48406df46a47e3f18b11c4ba7d08d9bc0c9789a97dd36a85c9573322f27"} Nov 29 07:41:50 crc kubenswrapper[4795]: I1129 07:41:50.556276 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"6623fff6-cf51-4f70-a3c7-9dab9c329c5c","Type":"ContainerStarted","Data":"9e0bb95bcbc585796538a3dbe34ede2fe6f1ac7fd098363417a87154a3ad5c19"} Nov 29 07:41:50 crc kubenswrapper[4795]: I1129 07:41:50.565540 4795 generic.go:334] "Generic (PLEG): container finished" podID="097562dd-99cb-4451-ac96-c1cdfd8cc4f4" containerID="eefb0c0e0099912b753a70b90afd3455e1ece7e91e1d7b45424b832d30c0175f" exitCode=0 Nov 29 07:41:50 crc kubenswrapper[4795]: I1129 07:41:50.566356 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4htg" event={"ID":"097562dd-99cb-4451-ac96-c1cdfd8cc4f4","Type":"ContainerDied","Data":"eefb0c0e0099912b753a70b90afd3455e1ece7e91e1d7b45424b832d30c0175f"} Nov 29 07:41:50 crc kubenswrapper[4795]: I1129 07:41:50.585656 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.585640077 podStartE2EDuration="2.585640077s" podCreationTimestamp="2025-11-29 07:41:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:50.58252325 +0000 UTC m=+156.558099060" watchObservedRunningTime="2025-11-29 07:41:50.585640077 +0000 UTC m=+156.561215867" Nov 29 07:41:50 crc kubenswrapper[4795]: I1129 07:41:50.602943 4795 patch_prober.go:28] interesting pod/router-default-5444994796-692g9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:41:50 crc kubenswrapper[4795]: [-]has-synced failed: reason withheld Nov 29 07:41:50 crc kubenswrapper[4795]: [+]process-running ok Nov 29 07:41:50 crc kubenswrapper[4795]: healthz check failed Nov 29 07:41:50 crc kubenswrapper[4795]: I1129 07:41:50.603077 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-692g9" podUID="fe1144c0-6804-4a78-bad8-3319ddb3c30c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:41:51 crc kubenswrapper[4795]: I1129 07:41:51.583422 4795 generic.go:334] "Generic (PLEG): container finished" podID="6623fff6-cf51-4f70-a3c7-9dab9c329c5c" containerID="21b6d48406df46a47e3f18b11c4ba7d08d9bc0c9789a97dd36a85c9573322f27" exitCode=0 Nov 29 07:41:51 crc kubenswrapper[4795]: I1129 07:41:51.583510 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"6623fff6-cf51-4f70-a3c7-9dab9c329c5c","Type":"ContainerDied","Data":"21b6d48406df46a47e3f18b11c4ba7d08d9bc0c9789a97dd36a85c9573322f27"} Nov 29 07:41:51 crc kubenswrapper[4795]: I1129 07:41:51.602332 4795 patch_prober.go:28] interesting pod/router-default-5444994796-692g9 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:41:51 crc kubenswrapper[4795]: [-]has-synced failed: reason withheld Nov 29 07:41:51 crc kubenswrapper[4795]: [+]process-running ok Nov 29 07:41:51 crc kubenswrapper[4795]: healthz check failed Nov 29 07:41:51 crc kubenswrapper[4795]: I1129 07:41:51.602408 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-692g9" podUID="fe1144c0-6804-4a78-bad8-3319ddb3c30c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:41:51 crc kubenswrapper[4795]: I1129 07:41:51.796713 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 29 07:41:51 crc kubenswrapper[4795]: I1129 07:41:51.797394 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 07:41:51 crc kubenswrapper[4795]: I1129 07:41:51.800841 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 29 07:41:51 crc kubenswrapper[4795]: I1129 07:41:51.801195 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 29 07:41:51 crc kubenswrapper[4795]: I1129 07:41:51.809122 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 29 07:41:51 crc kubenswrapper[4795]: I1129 07:41:51.876875 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/828efb35-b8f4-40f4-91b2-bb76bc014d6a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"828efb35-b8f4-40f4-91b2-bb76bc014d6a\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 07:41:51 crc kubenswrapper[4795]: I1129 07:41:51.876962 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/828efb35-b8f4-40f4-91b2-bb76bc014d6a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"828efb35-b8f4-40f4-91b2-bb76bc014d6a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 07:41:51 crc kubenswrapper[4795]: I1129 07:41:51.978628 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/828efb35-b8f4-40f4-91b2-bb76bc014d6a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"828efb35-b8f4-40f4-91b2-bb76bc014d6a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 07:41:51 crc kubenswrapper[4795]: I1129 07:41:51.978723 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/828efb35-b8f4-40f4-91b2-bb76bc014d6a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"828efb35-b8f4-40f4-91b2-bb76bc014d6a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 07:41:51 crc kubenswrapper[4795]: I1129 07:41:51.978834 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/828efb35-b8f4-40f4-91b2-bb76bc014d6a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"828efb35-b8f4-40f4-91b2-bb76bc014d6a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 07:41:51 crc kubenswrapper[4795]: I1129 07:41:51.997555 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/828efb35-b8f4-40f4-91b2-bb76bc014d6a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"828efb35-b8f4-40f4-91b2-bb76bc014d6a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 
07:41:52 crc kubenswrapper[4795]: I1129 07:41:52.121452 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 07:41:52 crc kubenswrapper[4795]: I1129 07:41:52.483215 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 29 07:41:52 crc kubenswrapper[4795]: I1129 07:41:52.599262 4795 patch_prober.go:28] interesting pod/router-default-5444994796-692g9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:41:52 crc kubenswrapper[4795]: [-]has-synced failed: reason withheld Nov 29 07:41:52 crc kubenswrapper[4795]: [+]process-running ok Nov 29 07:41:52 crc kubenswrapper[4795]: healthz check failed Nov 29 07:41:52 crc kubenswrapper[4795]: I1129 07:41:52.599580 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-692g9" podUID="fe1144c0-6804-4a78-bad8-3319ddb3c30c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:41:52 crc kubenswrapper[4795]: I1129 07:41:52.605854 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"828efb35-b8f4-40f4-91b2-bb76bc014d6a","Type":"ContainerStarted","Data":"4603703c995f2c682004aee4c7639f01f86dcfda9b55754b3669bff9e2fcc014"} Nov 29 07:41:52 crc kubenswrapper[4795]: I1129 07:41:52.758252 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-lgd6b" Nov 29 07:41:52 crc kubenswrapper[4795]: I1129 07:41:52.763845 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-lgd6b" Nov 29 07:41:52 crc kubenswrapper[4795]: I1129 07:41:52.836126 4795 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 07:41:52 crc kubenswrapper[4795]: I1129 07:41:52.890802 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6623fff6-cf51-4f70-a3c7-9dab9c329c5c-kube-api-access\") pod \"6623fff6-cf51-4f70-a3c7-9dab9c329c5c\" (UID: \"6623fff6-cf51-4f70-a3c7-9dab9c329c5c\") " Nov 29 07:41:52 crc kubenswrapper[4795]: I1129 07:41:52.894107 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6623fff6-cf51-4f70-a3c7-9dab9c329c5c-kubelet-dir\") pod \"6623fff6-cf51-4f70-a3c7-9dab9c329c5c\" (UID: \"6623fff6-cf51-4f70-a3c7-9dab9c329c5c\") " Nov 29 07:41:52 crc kubenswrapper[4795]: I1129 07:41:52.897661 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6623fff6-cf51-4f70-a3c7-9dab9c329c5c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6623fff6-cf51-4f70-a3c7-9dab9c329c5c" (UID: "6623fff6-cf51-4f70-a3c7-9dab9c329c5c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:41:52 crc kubenswrapper[4795]: I1129 07:41:52.904075 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6623fff6-cf51-4f70-a3c7-9dab9c329c5c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6623fff6-cf51-4f70-a3c7-9dab9c329c5c" (UID: "6623fff6-cf51-4f70-a3c7-9dab9c329c5c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:41:53 crc kubenswrapper[4795]: I1129 07:41:53.007617 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6623fff6-cf51-4f70-a3c7-9dab9c329c5c-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:53 crc kubenswrapper[4795]: I1129 07:41:53.007972 4795 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6623fff6-cf51-4f70-a3c7-9dab9c329c5c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:53 crc kubenswrapper[4795]: I1129 07:41:53.602865 4795 patch_prober.go:28] interesting pod/router-default-5444994796-692g9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:41:53 crc kubenswrapper[4795]: [-]has-synced failed: reason withheld Nov 29 07:41:53 crc kubenswrapper[4795]: [+]process-running ok Nov 29 07:41:53 crc kubenswrapper[4795]: healthz check failed Nov 29 07:41:53 crc kubenswrapper[4795]: I1129 07:41:53.603047 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-692g9" podUID="fe1144c0-6804-4a78-bad8-3319ddb3c30c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:41:53 crc kubenswrapper[4795]: I1129 07:41:53.625469 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"6623fff6-cf51-4f70-a3c7-9dab9c329c5c","Type":"ContainerDied","Data":"9e0bb95bcbc585796538a3dbe34ede2fe6f1ac7fd098363417a87154a3ad5c19"} Nov 29 07:41:53 crc kubenswrapper[4795]: I1129 07:41:53.625523 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e0bb95bcbc585796538a3dbe34ede2fe6f1ac7fd098363417a87154a3ad5c19" Nov 29 07:41:53 crc 
kubenswrapper[4795]: I1129 07:41:53.625498 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 07:41:54 crc kubenswrapper[4795]: I1129 07:41:54.599538 4795 patch_prober.go:28] interesting pod/router-default-5444994796-692g9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:41:54 crc kubenswrapper[4795]: [-]has-synced failed: reason withheld Nov 29 07:41:54 crc kubenswrapper[4795]: [+]process-running ok Nov 29 07:41:54 crc kubenswrapper[4795]: healthz check failed Nov 29 07:41:54 crc kubenswrapper[4795]: I1129 07:41:54.599607 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-692g9" podUID="fe1144c0-6804-4a78-bad8-3319ddb3c30c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:41:54 crc kubenswrapper[4795]: I1129 07:41:54.608028 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-ddtz8" Nov 29 07:41:54 crc kubenswrapper[4795]: I1129 07:41:54.650345 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"828efb35-b8f4-40f4-91b2-bb76bc014d6a","Type":"ContainerStarted","Data":"e593a0d9f0405a838372ab7c27951a7872ac9514fa30119c6f2ac4aad3a5b4e2"} Nov 29 07:41:55 crc kubenswrapper[4795]: I1129 07:41:55.598572 4795 patch_prober.go:28] interesting pod/router-default-5444994796-692g9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:41:55 crc kubenswrapper[4795]: [-]has-synced failed: reason withheld Nov 29 07:41:55 crc kubenswrapper[4795]: [+]process-running ok Nov 29 07:41:55 crc 
kubenswrapper[4795]: healthz check failed Nov 29 07:41:55 crc kubenswrapper[4795]: I1129 07:41:55.598859 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-692g9" podUID="fe1144c0-6804-4a78-bad8-3319ddb3c30c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:41:55 crc kubenswrapper[4795]: I1129 07:41:55.661775 4795 generic.go:334] "Generic (PLEG): container finished" podID="828efb35-b8f4-40f4-91b2-bb76bc014d6a" containerID="e593a0d9f0405a838372ab7c27951a7872ac9514fa30119c6f2ac4aad3a5b4e2" exitCode=0 Nov 29 07:41:55 crc kubenswrapper[4795]: I1129 07:41:55.661815 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"828efb35-b8f4-40f4-91b2-bb76bc014d6a","Type":"ContainerDied","Data":"e593a0d9f0405a838372ab7c27951a7872ac9514fa30119c6f2ac4aad3a5b4e2"} Nov 29 07:41:56 crc kubenswrapper[4795]: I1129 07:41:56.598909 4795 patch_prober.go:28] interesting pod/router-default-5444994796-692g9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:41:56 crc kubenswrapper[4795]: [-]has-synced failed: reason withheld Nov 29 07:41:56 crc kubenswrapper[4795]: [+]process-running ok Nov 29 07:41:56 crc kubenswrapper[4795]: healthz check failed Nov 29 07:41:56 crc kubenswrapper[4795]: I1129 07:41:56.598975 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-692g9" podUID="fe1144c0-6804-4a78-bad8-3319ddb3c30c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:41:57 crc kubenswrapper[4795]: I1129 07:41:57.598884 4795 patch_prober.go:28] interesting pod/router-default-5444994796-692g9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed 
with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:41:57 crc kubenswrapper[4795]: [-]has-synced failed: reason withheld Nov 29 07:41:57 crc kubenswrapper[4795]: [+]process-running ok Nov 29 07:41:57 crc kubenswrapper[4795]: healthz check failed Nov 29 07:41:57 crc kubenswrapper[4795]: I1129 07:41:57.599277 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-692g9" podUID="fe1144c0-6804-4a78-bad8-3319ddb3c30c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:41:58 crc kubenswrapper[4795]: I1129 07:41:58.282144 4795 patch_prober.go:28] interesting pod/console-f9d7485db-w4g7w container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.19:8443/health\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Nov 29 07:41:58 crc kubenswrapper[4795]: I1129 07:41:58.282299 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-w4g7w" podUID="9b02347e-bd6a-4a77-97d4-d8276d1b6167" containerName="console" probeResult="failure" output="Get \"https://10.217.0.19:8443/health\": dial tcp 10.217.0.19:8443: connect: connection refused" Nov 29 07:41:58 crc kubenswrapper[4795]: I1129 07:41:58.289723 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-klnm7" Nov 29 07:41:58 crc kubenswrapper[4795]: I1129 07:41:58.599452 4795 patch_prober.go:28] interesting pod/router-default-5444994796-692g9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:41:58 crc kubenswrapper[4795]: [-]has-synced failed: reason withheld Nov 29 07:41:58 crc kubenswrapper[4795]: [+]process-running ok Nov 29 07:41:58 crc kubenswrapper[4795]: healthz check failed Nov 29 
07:41:58 crc kubenswrapper[4795]: I1129 07:41:58.599553 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-692g9" podUID="fe1144c0-6804-4a78-bad8-3319ddb3c30c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:41:59 crc kubenswrapper[4795]: I1129 07:41:59.598499 4795 patch_prober.go:28] interesting pod/router-default-5444994796-692g9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:41:59 crc kubenswrapper[4795]: [+]has-synced ok Nov 29 07:41:59 crc kubenswrapper[4795]: [+]process-running ok Nov 29 07:41:59 crc kubenswrapper[4795]: healthz check failed Nov 29 07:41:59 crc kubenswrapper[4795]: I1129 07:41:59.598583 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-692g9" podUID="fe1144c0-6804-4a78-bad8-3319ddb3c30c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:42:00 crc kubenswrapper[4795]: I1129 07:42:00.598505 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-692g9" Nov 29 07:42:00 crc kubenswrapper[4795]: I1129 07:42:00.601483 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-692g9" Nov 29 07:42:02 crc kubenswrapper[4795]: I1129 07:42:02.765439 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9be66670-47c2-4d05-bf3d-59ae6f4ff53b-metrics-certs\") pod \"network-metrics-daemon-bvmzq\" (UID: \"9be66670-47c2-4d05-bf3d-59ae6f4ff53b\") " pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:42:02 crc kubenswrapper[4795]: I1129 07:42:02.772878 4795 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9be66670-47c2-4d05-bf3d-59ae6f4ff53b-metrics-certs\") pod \"network-metrics-daemon-bvmzq\" (UID: \"9be66670-47c2-4d05-bf3d-59ae6f4ff53b\") " pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:42:02 crc kubenswrapper[4795]: I1129 07:42:02.876367 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvmzq" Nov 29 07:42:03 crc kubenswrapper[4795]: I1129 07:42:03.587830 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 07:42:03 crc kubenswrapper[4795]: I1129 07:42:03.677073 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/828efb35-b8f4-40f4-91b2-bb76bc014d6a-kube-api-access\") pod \"828efb35-b8f4-40f4-91b2-bb76bc014d6a\" (UID: \"828efb35-b8f4-40f4-91b2-bb76bc014d6a\") " Nov 29 07:42:03 crc kubenswrapper[4795]: I1129 07:42:03.677173 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/828efb35-b8f4-40f4-91b2-bb76bc014d6a-kubelet-dir\") pod \"828efb35-b8f4-40f4-91b2-bb76bc014d6a\" (UID: \"828efb35-b8f4-40f4-91b2-bb76bc014d6a\") " Nov 29 07:42:03 crc kubenswrapper[4795]: I1129 07:42:03.677272 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/828efb35-b8f4-40f4-91b2-bb76bc014d6a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "828efb35-b8f4-40f4-91b2-bb76bc014d6a" (UID: "828efb35-b8f4-40f4-91b2-bb76bc014d6a"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:42:03 crc kubenswrapper[4795]: I1129 07:42:03.677553 4795 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/828efb35-b8f4-40f4-91b2-bb76bc014d6a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 29 07:42:03 crc kubenswrapper[4795]: I1129 07:42:03.681516 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/828efb35-b8f4-40f4-91b2-bb76bc014d6a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "828efb35-b8f4-40f4-91b2-bb76bc014d6a" (UID: "828efb35-b8f4-40f4-91b2-bb76bc014d6a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:42:03 crc kubenswrapper[4795]: I1129 07:42:03.715698 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"828efb35-b8f4-40f4-91b2-bb76bc014d6a","Type":"ContainerDied","Data":"4603703c995f2c682004aee4c7639f01f86dcfda9b55754b3669bff9e2fcc014"} Nov 29 07:42:03 crc kubenswrapper[4795]: I1129 07:42:03.715743 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4603703c995f2c682004aee4c7639f01f86dcfda9b55754b3669bff9e2fcc014" Nov 29 07:42:03 crc kubenswrapper[4795]: I1129 07:42:03.715794 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 07:42:03 crc kubenswrapper[4795]: I1129 07:42:03.778573 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/828efb35-b8f4-40f4-91b2-bb76bc014d6a-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 29 07:42:07 crc kubenswrapper[4795]: I1129 07:42:07.929439 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:42:08 crc kubenswrapper[4795]: I1129 07:42:08.286680 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-w4g7w" Nov 29 07:42:08 crc kubenswrapper[4795]: I1129 07:42:08.290065 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-w4g7w" Nov 29 07:42:11 crc kubenswrapper[4795]: I1129 07:42:11.941656 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:42:11 crc kubenswrapper[4795]: I1129 07:42:11.942000 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:42:19 crc kubenswrapper[4795]: I1129 07:42:19.379584 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qvn2d" Nov 29 07:42:26 crc kubenswrapper[4795]: I1129 07:42:26.685121 4795 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:42:30 crc kubenswrapper[4795]: I1129 07:42:30.982385 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 29 07:42:30 crc kubenswrapper[4795]: E1129 07:42:30.982894 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="828efb35-b8f4-40f4-91b2-bb76bc014d6a" containerName="pruner" Nov 29 07:42:30 crc kubenswrapper[4795]: I1129 07:42:30.982906 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="828efb35-b8f4-40f4-91b2-bb76bc014d6a" containerName="pruner" Nov 29 07:42:30 crc kubenswrapper[4795]: E1129 07:42:30.982922 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6623fff6-cf51-4f70-a3c7-9dab9c329c5c" containerName="pruner" Nov 29 07:42:30 crc kubenswrapper[4795]: I1129 07:42:30.982928 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="6623fff6-cf51-4f70-a3c7-9dab9c329c5c" containerName="pruner" Nov 29 07:42:30 crc kubenswrapper[4795]: I1129 07:42:30.983029 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="6623fff6-cf51-4f70-a3c7-9dab9c329c5c" containerName="pruner" Nov 29 07:42:30 crc kubenswrapper[4795]: I1129 07:42:30.983038 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="828efb35-b8f4-40f4-91b2-bb76bc014d6a" containerName="pruner" Nov 29 07:42:30 crc kubenswrapper[4795]: I1129 07:42:30.984745 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 07:42:30 crc kubenswrapper[4795]: I1129 07:42:30.987165 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 29 07:42:30 crc kubenswrapper[4795]: I1129 07:42:30.987273 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 29 07:42:31 crc kubenswrapper[4795]: I1129 07:42:30.990983 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 29 07:42:31 crc kubenswrapper[4795]: I1129 07:42:31.076002 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/52dfdf3b-a9b0-4b03-84b6-4513f714a51f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"52dfdf3b-a9b0-4b03-84b6-4513f714a51f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 07:42:31 crc kubenswrapper[4795]: I1129 07:42:31.076242 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/52dfdf3b-a9b0-4b03-84b6-4513f714a51f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"52dfdf3b-a9b0-4b03-84b6-4513f714a51f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 07:42:31 crc kubenswrapper[4795]: I1129 07:42:31.177313 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/52dfdf3b-a9b0-4b03-84b6-4513f714a51f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"52dfdf3b-a9b0-4b03-84b6-4513f714a51f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 07:42:31 crc kubenswrapper[4795]: I1129 07:42:31.177428 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/52dfdf3b-a9b0-4b03-84b6-4513f714a51f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"52dfdf3b-a9b0-4b03-84b6-4513f714a51f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 07:42:31 crc kubenswrapper[4795]: I1129 07:42:31.177423 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/52dfdf3b-a9b0-4b03-84b6-4513f714a51f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"52dfdf3b-a9b0-4b03-84b6-4513f714a51f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 07:42:31 crc kubenswrapper[4795]: I1129 07:42:31.207181 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/52dfdf3b-a9b0-4b03-84b6-4513f714a51f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"52dfdf3b-a9b0-4b03-84b6-4513f714a51f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 07:42:31 crc kubenswrapper[4795]: I1129 07:42:31.326826 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 07:42:31 crc kubenswrapper[4795]: E1129 07:42:31.696682 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 29 07:42:31 crc kubenswrapper[4795]: E1129 07:42:31.696923 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xbn9z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Containe
rResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-tr5zq_openshift-marketplace(e0cc4a28-2815-4b61-9893-3cf1b256872a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 29 07:42:31 crc kubenswrapper[4795]: E1129 07:42:31.698091 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-tr5zq" podUID="e0cc4a28-2815-4b61-9893-3cf1b256872a" Nov 29 07:42:35 crc kubenswrapper[4795]: E1129 07:42:35.670418 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 29 07:42:35 crc kubenswrapper[4795]: E1129 07:42:35.670908 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lp5vh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-d4htg_openshift-marketplace(097562dd-99cb-4451-ac96-c1cdfd8cc4f4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 29 07:42:35 crc kubenswrapper[4795]: E1129 07:42:35.672116 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-d4htg" podUID="097562dd-99cb-4451-ac96-c1cdfd8cc4f4" Nov 29 07:42:36 crc 
kubenswrapper[4795]: I1129 07:42:36.378867 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 29 07:42:36 crc kubenswrapper[4795]: I1129 07:42:36.379561 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:42:36 crc kubenswrapper[4795]: I1129 07:42:36.385698 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 29 07:42:36 crc kubenswrapper[4795]: I1129 07:42:36.445375 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5876d38e-799e-4ca3-ac4a-c04c3070491a-var-lock\") pod \"installer-9-crc\" (UID: \"5876d38e-799e-4ca3-ac4a-c04c3070491a\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:42:36 crc kubenswrapper[4795]: I1129 07:42:36.445466 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5876d38e-799e-4ca3-ac4a-c04c3070491a-kube-api-access\") pod \"installer-9-crc\" (UID: \"5876d38e-799e-4ca3-ac4a-c04c3070491a\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:42:36 crc kubenswrapper[4795]: I1129 07:42:36.445495 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5876d38e-799e-4ca3-ac4a-c04c3070491a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"5876d38e-799e-4ca3-ac4a-c04c3070491a\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:42:36 crc kubenswrapper[4795]: I1129 07:42:36.546638 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5876d38e-799e-4ca3-ac4a-c04c3070491a-var-lock\") pod \"installer-9-crc\" (UID: \"5876d38e-799e-4ca3-ac4a-c04c3070491a\") " 
pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:42:36 crc kubenswrapper[4795]: I1129 07:42:36.546764 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5876d38e-799e-4ca3-ac4a-c04c3070491a-kube-api-access\") pod \"installer-9-crc\" (UID: \"5876d38e-799e-4ca3-ac4a-c04c3070491a\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:42:36 crc kubenswrapper[4795]: I1129 07:42:36.546771 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5876d38e-799e-4ca3-ac4a-c04c3070491a-var-lock\") pod \"installer-9-crc\" (UID: \"5876d38e-799e-4ca3-ac4a-c04c3070491a\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:42:36 crc kubenswrapper[4795]: I1129 07:42:36.546904 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5876d38e-799e-4ca3-ac4a-c04c3070491a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"5876d38e-799e-4ca3-ac4a-c04c3070491a\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:42:36 crc kubenswrapper[4795]: I1129 07:42:36.546800 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5876d38e-799e-4ca3-ac4a-c04c3070491a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"5876d38e-799e-4ca3-ac4a-c04c3070491a\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:42:36 crc kubenswrapper[4795]: I1129 07:42:36.577360 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5876d38e-799e-4ca3-ac4a-c04c3070491a-kube-api-access\") pod \"installer-9-crc\" (UID: \"5876d38e-799e-4ca3-ac4a-c04c3070491a\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:42:36 crc kubenswrapper[4795]: I1129 07:42:36.710480 4795 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:42:39 crc kubenswrapper[4795]: E1129 07:42:39.951247 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-d4htg" podUID="097562dd-99cb-4451-ac96-c1cdfd8cc4f4" Nov 29 07:42:39 crc kubenswrapper[4795]: E1129 07:42:39.951342 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-tr5zq" podUID="e0cc4a28-2815-4b61-9893-3cf1b256872a" Nov 29 07:42:41 crc kubenswrapper[4795]: E1129 07:42:41.634694 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 29 07:42:41 crc kubenswrapper[4795]: E1129 07:42:41.635147 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kfndp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-cgrzq_openshift-marketplace(b36dad37-4975-4010-b93e-0ea5c932caab): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 29 07:42:41 crc kubenswrapper[4795]: E1129 07:42:41.636386 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-cgrzq" podUID="b36dad37-4975-4010-b93e-0ea5c932caab" Nov 29 07:42:41 crc 
kubenswrapper[4795]: I1129 07:42:41.941560 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:42:41 crc kubenswrapper[4795]: I1129 07:42:41.941666 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:42:41 crc kubenswrapper[4795]: I1129 07:42:41.941775 4795 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" Nov 29 07:42:41 crc kubenswrapper[4795]: I1129 07:42:41.943995 4795 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"63f3aac368767d0534a658bc0223f5cda0a951940f2d6ff3f1c89a6ac401a7d5"} pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:42:41 crc kubenswrapper[4795]: I1129 07:42:41.944136 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" containerID="cri-o://63f3aac368767d0534a658bc0223f5cda0a951940f2d6ff3f1c89a6ac401a7d5" gracePeriod=600 Nov 29 07:42:43 crc kubenswrapper[4795]: I1129 07:42:43.121232 4795 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.060399333s: 
[/var/lib/containers/storage/overlay/ab708724116526fe06b70945e71076c6526c5e0a91adfb08e3188dcef54ee3db/diff /var/log/pods/openshift-cluster-machine-approver_machine-approver-56656f9798-xdslc_42062f93-4804-4818-95b5-2b6b3225c433/machine-approver-controller/0.log]; will not log again for this container unless duration exceeds 2s Nov 29 07:42:43 crc kubenswrapper[4795]: I1129 07:42:43.121502 4795 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.027557268s: [/var/lib/containers/storage/overlay/ad061cf4b86f98b7667dce3aa5beded2bb7805234aa09bceb31679dd77a28809/diff /var/log/pods/openshift-operator-lifecycle-manager_olm-operator-6b444d44fb-tgqrw_f54838e7-e080-4c36-9f22-37173dca9044/olm-operator/0.log]; will not log again for this container unless duration exceeds 2s Nov 29 07:42:43 crc kubenswrapper[4795]: I1129 07:42:43.122171 4795 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.048186345s: [/var/lib/containers/storage/overlay/6ee33b6135f8953f96edc6dc0369dcad1914fe8b06a6734770f5a8a3506df37c/diff /var/log/pods/openshift-ingress_router-default-5444994796-692g9_fe1144c0-6804-4a78-bad8-3319ddb3c30c/router/0.log]; will not log again for this container unless duration exceeds 2s Nov 29 07:42:56 crc kubenswrapper[4795]: I1129 07:42:56.044581 4795 generic.go:334] "Generic (PLEG): container finished" podID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerID="63f3aac368767d0534a658bc0223f5cda0a951940f2d6ff3f1c89a6ac401a7d5" exitCode=0 Nov 29 07:42:56 crc kubenswrapper[4795]: I1129 07:42:56.044653 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerDied","Data":"63f3aac368767d0534a658bc0223f5cda0a951940f2d6ff3f1c89a6ac401a7d5"} Nov 29 07:43:04 crc kubenswrapper[4795]: E1129 07:43:04.866629 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system 
image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 29 07:43:04 crc kubenswrapper[4795]: E1129 07:43:04.867563 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k7kf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-vx8d5_openshift-marketplace(0deb15dc-57ff-4c83-8e81-ea7ebfda038d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying 
config: context canceled" logger="UnhandledError" Nov 29 07:43:04 crc kubenswrapper[4795]: E1129 07:43:04.868906 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-vx8d5" podUID="0deb15dc-57ff-4c83-8e81-ea7ebfda038d" Nov 29 07:43:04 crc kubenswrapper[4795]: E1129 07:43:04.882880 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 29 07:43:04 crc kubenswrapper[4795]: E1129 07:43:04.883065 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7swxq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-t8rg9_openshift-marketplace(e1461195-363d-49f5-b5c2-61b99f16ece2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 29 07:43:04 crc kubenswrapper[4795]: E1129 07:43:04.884404 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-t8rg9" podUID="e1461195-363d-49f5-b5c2-61b99f16ece2" Nov 29 07:43:05 crc 
kubenswrapper[4795]: E1129 07:43:05.344642 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 29 07:43:05 crc kubenswrapper[4795]: E1129 07:43:05.344897 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4kmmg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-h8pp6_openshift-marketplace(a04ecf63-b125-4a0e-9869-403c9cca5648): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 29 07:43:05 crc kubenswrapper[4795]: E1129 07:43:05.346159 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-h8pp6" podUID="a04ecf63-b125-4a0e-9869-403c9cca5648" Nov 29 07:43:08 crc kubenswrapper[4795]: E1129 07:43:08.880475 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-t8rg9" podUID="e1461195-363d-49f5-b5c2-61b99f16ece2" Nov 29 07:43:08 crc kubenswrapper[4795]: E1129 07:43:08.880834 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-h8pp6" podUID="a04ecf63-b125-4a0e-9869-403c9cca5648" Nov 29 07:43:08 crc kubenswrapper[4795]: E1129 07:43:08.880951 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vx8d5" podUID="0deb15dc-57ff-4c83-8e81-ea7ebfda038d" Nov 29 07:43:09 crc kubenswrapper[4795]: E1129 07:43:09.015302 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image 
from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 29 07:43:09 crc kubenswrapper[4795]: E1129 07:43:09.015985 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z98xv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-8bzbm_openshift-marketplace(96ff138b-b30f-4a36-9c9c-76cf2c9e8e89): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: 
context canceled" logger="UnhandledError" Nov 29 07:43:09 crc kubenswrapper[4795]: E1129 07:43:09.017267 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-8bzbm" podUID="96ff138b-b30f-4a36-9c9c-76cf2c9e8e89" Nov 29 07:43:09 crc kubenswrapper[4795]: E1129 07:43:09.268607 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 29 07:43:09 crc kubenswrapper[4795]: E1129 07:43:09.269192 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jp6pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-tl6cb_openshift-marketplace(cc49b5c4-ec36-4051-b975-25f1e872a6f7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 29 07:43:09 crc kubenswrapper[4795]: E1129 07:43:09.271491 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-tl6cb" podUID="cc49b5c4-ec36-4051-b975-25f1e872a6f7" Nov 29 07:43:09 crc 
kubenswrapper[4795]: I1129 07:43:09.350739 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bvmzq"] Nov 29 07:43:09 crc kubenswrapper[4795]: I1129 07:43:09.413742 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 29 07:43:09 crc kubenswrapper[4795]: I1129 07:43:09.416915 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 29 07:43:10 crc kubenswrapper[4795]: W1129 07:43:10.981037 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod52dfdf3b_a9b0_4b03_84b6_4513f714a51f.slice/crio-e530ea1db8d23451e5746652f9cfc45f26aff5e1dcfe4bc029523ce2a61cec1f WatchSource:0}: Error finding container e530ea1db8d23451e5746652f9cfc45f26aff5e1dcfe4bc029523ce2a61cec1f: Status 404 returned error can't find the container with id e530ea1db8d23451e5746652f9cfc45f26aff5e1dcfe4bc029523ce2a61cec1f Nov 29 07:43:10 crc kubenswrapper[4795]: E1129 07:43:10.981133 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-tl6cb" podUID="cc49b5c4-ec36-4051-b975-25f1e872a6f7" Nov 29 07:43:10 crc kubenswrapper[4795]: E1129 07:43:10.981467 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-8bzbm" podUID="96ff138b-b30f-4a36-9c9c-76cf2c9e8e89" Nov 29 07:43:10 crc kubenswrapper[4795]: W1129 07:43:10.983571 4795 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-pod5876d38e_799e_4ca3_ac4a_c04c3070491a.slice/crio-82ed222fb3a30647ff3fd372751dcc6d4be47383e3ffc548086f7e4bcdf17e5e WatchSource:0}: Error finding container 82ed222fb3a30647ff3fd372751dcc6d4be47383e3ffc548086f7e4bcdf17e5e: Status 404 returned error can't find the container with id 82ed222fb3a30647ff3fd372751dcc6d4be47383e3ffc548086f7e4bcdf17e5e Nov 29 07:43:11 crc kubenswrapper[4795]: I1129 07:43:11.131321 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"5876d38e-799e-4ca3-ac4a-c04c3070491a","Type":"ContainerStarted","Data":"82ed222fb3a30647ff3fd372751dcc6d4be47383e3ffc548086f7e4bcdf17e5e"} Nov 29 07:43:11 crc kubenswrapper[4795]: I1129 07:43:11.132194 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bvmzq" event={"ID":"9be66670-47c2-4d05-bf3d-59ae6f4ff53b","Type":"ContainerStarted","Data":"9086b91b42f377cffcbcfc5da566bedf0fc871721b179b22e008d09c5758a47e"} Nov 29 07:43:11 crc kubenswrapper[4795]: I1129 07:43:11.133545 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"52dfdf3b-a9b0-4b03-84b6-4513f714a51f","Type":"ContainerStarted","Data":"e530ea1db8d23451e5746652f9cfc45f26aff5e1dcfe4bc029523ce2a61cec1f"} Nov 29 07:43:13 crc kubenswrapper[4795]: I1129 07:43:13.147206 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"5876d38e-799e-4ca3-ac4a-c04c3070491a","Type":"ContainerStarted","Data":"4dbd55216791fefe5988e5e64635fea01a4f908dc7ad783f1d8b4f33a438b080"} Nov 29 07:43:13 crc kubenswrapper[4795]: I1129 07:43:13.150898 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bvmzq" event={"ID":"9be66670-47c2-4d05-bf3d-59ae6f4ff53b","Type":"ContainerStarted","Data":"2f5d011cd12a2304dcbc342c7bc5d48136dd0c23e3ab2e07bf70f08acccc7796"} Nov 29 07:43:13 crc 
kubenswrapper[4795]: I1129 07:43:13.152209 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"52dfdf3b-a9b0-4b03-84b6-4513f714a51f","Type":"ContainerStarted","Data":"18037c726306abe90cfeff511fa65550cc1eeadc49067070d990f32042a4c9b6"} Nov 29 07:43:13 crc kubenswrapper[4795]: I1129 07:43:13.154041 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tr5zq" event={"ID":"e0cc4a28-2815-4b61-9893-3cf1b256872a","Type":"ContainerStarted","Data":"e879f9ec39b0dab42ed7e5e7f385eb3aa7a91298af008ea2163c9538c79cdc19"} Nov 29 07:43:13 crc kubenswrapper[4795]: I1129 07:43:13.156053 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerStarted","Data":"2126419e14a4a2775ea5be60a712196b4855482260536f930abe91252856634d"} Nov 29 07:43:13 crc kubenswrapper[4795]: I1129 07:43:13.157996 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgrzq" event={"ID":"b36dad37-4975-4010-b93e-0ea5c932caab","Type":"ContainerStarted","Data":"06d82a55417d265a8be060f4c2ed35314a880be006bc56ba343f592c62fdfa48"} Nov 29 07:43:13 crc kubenswrapper[4795]: I1129 07:43:13.159691 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4htg" event={"ID":"097562dd-99cb-4451-ac96-c1cdfd8cc4f4","Type":"ContainerStarted","Data":"2aa848deac074a7d4bb070d5bdcb59e861aa298b836984e036e1f522ff4ef0da"} Nov 29 07:43:13 crc kubenswrapper[4795]: I1129 07:43:13.161700 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=37.161679147 podStartE2EDuration="37.161679147s" podCreationTimestamp="2025-11-29 07:42:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:43:13.160264687 +0000 UTC m=+239.135840477" watchObservedRunningTime="2025-11-29 07:43:13.161679147 +0000 UTC m=+239.137254947" Nov 29 07:43:13 crc kubenswrapper[4795]: I1129 07:43:13.246866 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=43.246843592 podStartE2EDuration="43.246843592s" podCreationTimestamp="2025-11-29 07:42:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:43:13.245125123 +0000 UTC m=+239.220700913" watchObservedRunningTime="2025-11-29 07:43:13.246843592 +0000 UTC m=+239.222419402" Nov 29 07:43:14 crc kubenswrapper[4795]: I1129 07:43:14.206929 4795 generic.go:334] "Generic (PLEG): container finished" podID="b36dad37-4975-4010-b93e-0ea5c932caab" containerID="06d82a55417d265a8be060f4c2ed35314a880be006bc56ba343f592c62fdfa48" exitCode=0 Nov 29 07:43:14 crc kubenswrapper[4795]: I1129 07:43:14.207002 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgrzq" event={"ID":"b36dad37-4975-4010-b93e-0ea5c932caab","Type":"ContainerDied","Data":"06d82a55417d265a8be060f4c2ed35314a880be006bc56ba343f592c62fdfa48"} Nov 29 07:43:14 crc kubenswrapper[4795]: I1129 07:43:14.211582 4795 generic.go:334] "Generic (PLEG): container finished" podID="097562dd-99cb-4451-ac96-c1cdfd8cc4f4" containerID="2aa848deac074a7d4bb070d5bdcb59e861aa298b836984e036e1f522ff4ef0da" exitCode=0 Nov 29 07:43:14 crc kubenswrapper[4795]: I1129 07:43:14.211618 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4htg" event={"ID":"097562dd-99cb-4451-ac96-c1cdfd8cc4f4","Type":"ContainerDied","Data":"2aa848deac074a7d4bb070d5bdcb59e861aa298b836984e036e1f522ff4ef0da"} Nov 29 07:43:14 crc kubenswrapper[4795]: I1129 07:43:14.213473 4795 generic.go:334] 
"Generic (PLEG): container finished" podID="e0cc4a28-2815-4b61-9893-3cf1b256872a" containerID="e879f9ec39b0dab42ed7e5e7f385eb3aa7a91298af008ea2163c9538c79cdc19" exitCode=0 Nov 29 07:43:14 crc kubenswrapper[4795]: I1129 07:43:14.213613 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tr5zq" event={"ID":"e0cc4a28-2815-4b61-9893-3cf1b256872a","Type":"ContainerDied","Data":"e879f9ec39b0dab42ed7e5e7f385eb3aa7a91298af008ea2163c9538c79cdc19"} Nov 29 07:43:15 crc kubenswrapper[4795]: I1129 07:43:15.220205 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bvmzq" event={"ID":"9be66670-47c2-4d05-bf3d-59ae6f4ff53b","Type":"ContainerStarted","Data":"2a6d409e211d634fc829d1e8d5bc2fdc49d131b4ebb708bdb259e3a30c8d2390"} Nov 29 07:43:15 crc kubenswrapper[4795]: I1129 07:43:15.221439 4795 generic.go:334] "Generic (PLEG): container finished" podID="52dfdf3b-a9b0-4b03-84b6-4513f714a51f" containerID="18037c726306abe90cfeff511fa65550cc1eeadc49067070d990f32042a4c9b6" exitCode=0 Nov 29 07:43:15 crc kubenswrapper[4795]: I1129 07:43:15.221475 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"52dfdf3b-a9b0-4b03-84b6-4513f714a51f","Type":"ContainerDied","Data":"18037c726306abe90cfeff511fa65550cc1eeadc49067070d990f32042a4c9b6"} Nov 29 07:43:15 crc kubenswrapper[4795]: I1129 07:43:15.238144 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-bvmzq" podStartSLOduration=215.238126071 podStartE2EDuration="3m35.238126071s" podCreationTimestamp="2025-11-29 07:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:43:15.234361204 +0000 UTC m=+241.209936994" watchObservedRunningTime="2025-11-29 07:43:15.238126071 +0000 UTC m=+241.213701881" Nov 29 07:43:16 crc 
kubenswrapper[4795]: I1129 07:43:16.228867 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tr5zq" event={"ID":"e0cc4a28-2815-4b61-9893-3cf1b256872a","Type":"ContainerStarted","Data":"f70fc0e43f2c1b862249da5e89ad4025246fea6a8073ec4357f0d3c642210df5"} Nov 29 07:43:16 crc kubenswrapper[4795]: I1129 07:43:16.230426 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgrzq" event={"ID":"b36dad37-4975-4010-b93e-0ea5c932caab","Type":"ContainerStarted","Data":"de191339879abc9aff8fe5a8bee6eb675c2d5be3583f2c7de8a0ef0123911047"} Nov 29 07:43:16 crc kubenswrapper[4795]: I1129 07:43:16.233162 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4htg" event={"ID":"097562dd-99cb-4451-ac96-c1cdfd8cc4f4","Type":"ContainerStarted","Data":"d3259e67864daffa16284888cb554ca3c689dd26ef2352dd25c311e6c1e45b9e"} Nov 29 07:43:16 crc kubenswrapper[4795]: I1129 07:43:16.263090 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tr5zq" podStartSLOduration=3.94290194 podStartE2EDuration="1m28.263071465s" podCreationTimestamp="2025-11-29 07:41:48 +0000 UTC" firstStartedPulling="2025-11-29 07:41:50.550797185 +0000 UTC m=+156.526372975" lastFinishedPulling="2025-11-29 07:43:14.87096671 +0000 UTC m=+240.846542500" observedRunningTime="2025-11-29 07:43:16.260152112 +0000 UTC m=+242.235727902" watchObservedRunningTime="2025-11-29 07:43:16.263071465 +0000 UTC m=+242.238647255" Nov 29 07:43:16 crc kubenswrapper[4795]: I1129 07:43:16.284536 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d4htg" podStartSLOduration=3.210929619 podStartE2EDuration="1m28.284518616s" podCreationTimestamp="2025-11-29 07:41:48 +0000 UTC" firstStartedPulling="2025-11-29 07:41:49.535390836 +0000 UTC m=+155.510966626" lastFinishedPulling="2025-11-29 
07:43:14.608979843 +0000 UTC m=+240.584555623" observedRunningTime="2025-11-29 07:43:16.281631384 +0000 UTC m=+242.257207194" watchObservedRunningTime="2025-11-29 07:43:16.284518616 +0000 UTC m=+242.260094406" Nov 29 07:43:16 crc kubenswrapper[4795]: I1129 07:43:16.309614 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cgrzq" podStartSLOduration=2.770313433 podStartE2EDuration="1m31.309580649s" podCreationTimestamp="2025-11-29 07:41:45 +0000 UTC" firstStartedPulling="2025-11-29 07:41:47.207269187 +0000 UTC m=+153.182844977" lastFinishedPulling="2025-11-29 07:43:15.746536403 +0000 UTC m=+241.722112193" observedRunningTime="2025-11-29 07:43:16.308786537 +0000 UTC m=+242.284362327" watchObservedRunningTime="2025-11-29 07:43:16.309580649 +0000 UTC m=+242.285156439" Nov 29 07:43:16 crc kubenswrapper[4795]: I1129 07:43:16.506881 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 07:43:16 crc kubenswrapper[4795]: I1129 07:43:16.612010 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/52dfdf3b-a9b0-4b03-84b6-4513f714a51f-kubelet-dir\") pod \"52dfdf3b-a9b0-4b03-84b6-4513f714a51f\" (UID: \"52dfdf3b-a9b0-4b03-84b6-4513f714a51f\") " Nov 29 07:43:16 crc kubenswrapper[4795]: I1129 07:43:16.612289 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/52dfdf3b-a9b0-4b03-84b6-4513f714a51f-kube-api-access\") pod \"52dfdf3b-a9b0-4b03-84b6-4513f714a51f\" (UID: \"52dfdf3b-a9b0-4b03-84b6-4513f714a51f\") " Nov 29 07:43:16 crc kubenswrapper[4795]: I1129 07:43:16.612162 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52dfdf3b-a9b0-4b03-84b6-4513f714a51f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") 
pod "52dfdf3b-a9b0-4b03-84b6-4513f714a51f" (UID: "52dfdf3b-a9b0-4b03-84b6-4513f714a51f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:43:16 crc kubenswrapper[4795]: I1129 07:43:16.617695 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52dfdf3b-a9b0-4b03-84b6-4513f714a51f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "52dfdf3b-a9b0-4b03-84b6-4513f714a51f" (UID: "52dfdf3b-a9b0-4b03-84b6-4513f714a51f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:43:16 crc kubenswrapper[4795]: I1129 07:43:16.714102 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/52dfdf3b-a9b0-4b03-84b6-4513f714a51f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:16 crc kubenswrapper[4795]: I1129 07:43:16.714135 4795 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/52dfdf3b-a9b0-4b03-84b6-4513f714a51f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:17 crc kubenswrapper[4795]: I1129 07:43:17.240613 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"52dfdf3b-a9b0-4b03-84b6-4513f714a51f","Type":"ContainerDied","Data":"e530ea1db8d23451e5746652f9cfc45f26aff5e1dcfe4bc029523ce2a61cec1f"} Nov 29 07:43:17 crc kubenswrapper[4795]: I1129 07:43:17.241253 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e530ea1db8d23451e5746652f9cfc45f26aff5e1dcfe4bc029523ce2a61cec1f" Nov 29 07:43:17 crc kubenswrapper[4795]: I1129 07:43:17.240654 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 07:43:18 crc kubenswrapper[4795]: I1129 07:43:18.496869 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d4htg" Nov 29 07:43:18 crc kubenswrapper[4795]: I1129 07:43:18.497130 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d4htg" Nov 29 07:43:19 crc kubenswrapper[4795]: I1129 07:43:19.023653 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tr5zq" Nov 29 07:43:19 crc kubenswrapper[4795]: I1129 07:43:19.024041 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tr5zq" Nov 29 07:43:20 crc kubenswrapper[4795]: I1129 07:43:20.385882 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tr5zq" podUID="e0cc4a28-2815-4b61-9893-3cf1b256872a" containerName="registry-server" probeResult="failure" output=< Nov 29 07:43:20 crc kubenswrapper[4795]: timeout: failed to connect service ":50051" within 1s Nov 29 07:43:20 crc kubenswrapper[4795]: > Nov 29 07:43:20 crc kubenswrapper[4795]: I1129 07:43:20.385937 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d4htg" podUID="097562dd-99cb-4451-ac96-c1cdfd8cc4f4" containerName="registry-server" probeResult="failure" output=< Nov 29 07:43:20 crc kubenswrapper[4795]: timeout: failed to connect service ":50051" within 1s Nov 29 07:43:20 crc kubenswrapper[4795]: > Nov 29 07:43:23 crc kubenswrapper[4795]: I1129 07:43:23.276938 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h8pp6" event={"ID":"a04ecf63-b125-4a0e-9869-403c9cca5648","Type":"ContainerStarted","Data":"9f95cfa4cb57e999c5a8d53a190e431e0e1f491edd76b1a9d8f277ce23d86e9f"} Nov 29 07:43:24 crc 
kubenswrapper[4795]: I1129 07:43:24.284028 4795 generic.go:334] "Generic (PLEG): container finished" podID="a04ecf63-b125-4a0e-9869-403c9cca5648" containerID="9f95cfa4cb57e999c5a8d53a190e431e0e1f491edd76b1a9d8f277ce23d86e9f" exitCode=0 Nov 29 07:43:24 crc kubenswrapper[4795]: I1129 07:43:24.284150 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h8pp6" event={"ID":"a04ecf63-b125-4a0e-9869-403c9cca5648","Type":"ContainerDied","Data":"9f95cfa4cb57e999c5a8d53a190e431e0e1f491edd76b1a9d8f277ce23d86e9f"} Nov 29 07:43:24 crc kubenswrapper[4795]: I1129 07:43:24.286836 4795 generic.go:334] "Generic (PLEG): container finished" podID="e1461195-363d-49f5-b5c2-61b99f16ece2" containerID="713e874aea16a055d4957b05e84f8934dc631049c221f0a810e1c4a1093ce290" exitCode=0 Nov 29 07:43:24 crc kubenswrapper[4795]: I1129 07:43:24.286927 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8rg9" event={"ID":"e1461195-363d-49f5-b5c2-61b99f16ece2","Type":"ContainerDied","Data":"713e874aea16a055d4957b05e84f8934dc631049c221f0a810e1c4a1093ce290"} Nov 29 07:43:24 crc kubenswrapper[4795]: I1129 07:43:24.301801 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vx8d5" event={"ID":"0deb15dc-57ff-4c83-8e81-ea7ebfda038d","Type":"ContainerStarted","Data":"65b9e5fdde52dc4c7a4b8c438621bf409f0c82cfbf1cdc06c7cb9dd28790ab49"} Nov 29 07:43:25 crc kubenswrapper[4795]: I1129 07:43:25.315236 4795 generic.go:334] "Generic (PLEG): container finished" podID="0deb15dc-57ff-4c83-8e81-ea7ebfda038d" containerID="65b9e5fdde52dc4c7a4b8c438621bf409f0c82cfbf1cdc06c7cb9dd28790ab49" exitCode=0 Nov 29 07:43:25 crc kubenswrapper[4795]: I1129 07:43:25.315292 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vx8d5" 
event={"ID":"0deb15dc-57ff-4c83-8e81-ea7ebfda038d","Type":"ContainerDied","Data":"65b9e5fdde52dc4c7a4b8c438621bf409f0c82cfbf1cdc06c7cb9dd28790ab49"} Nov 29 07:43:26 crc kubenswrapper[4795]: I1129 07:43:26.015821 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cgrzq" Nov 29 07:43:26 crc kubenswrapper[4795]: I1129 07:43:26.015877 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cgrzq" Nov 29 07:43:26 crc kubenswrapper[4795]: I1129 07:43:26.416204 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cgrzq" Nov 29 07:43:26 crc kubenswrapper[4795]: I1129 07:43:26.457077 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cgrzq" Nov 29 07:43:28 crc kubenswrapper[4795]: I1129 07:43:28.649877 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d4htg" Nov 29 07:43:28 crc kubenswrapper[4795]: I1129 07:43:28.693217 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d4htg" Nov 29 07:43:29 crc kubenswrapper[4795]: I1129 07:43:29.059707 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tr5zq" Nov 29 07:43:29 crc kubenswrapper[4795]: I1129 07:43:29.094876 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tr5zq" Nov 29 07:43:29 crc kubenswrapper[4795]: I1129 07:43:29.399643 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cgrzq"] Nov 29 07:43:29 crc kubenswrapper[4795]: I1129 07:43:29.399981 4795 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-cgrzq" podUID="b36dad37-4975-4010-b93e-0ea5c932caab" containerName="registry-server" containerID="cri-o://de191339879abc9aff8fe5a8bee6eb675c2d5be3583f2c7de8a0ef0123911047" gracePeriod=2 Nov 29 07:43:31 crc kubenswrapper[4795]: I1129 07:43:31.345139 4795 generic.go:334] "Generic (PLEG): container finished" podID="b36dad37-4975-4010-b93e-0ea5c932caab" containerID="de191339879abc9aff8fe5a8bee6eb675c2d5be3583f2c7de8a0ef0123911047" exitCode=0 Nov 29 07:43:31 crc kubenswrapper[4795]: I1129 07:43:31.345174 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgrzq" event={"ID":"b36dad37-4975-4010-b93e-0ea5c932caab","Type":"ContainerDied","Data":"de191339879abc9aff8fe5a8bee6eb675c2d5be3583f2c7de8a0ef0123911047"} Nov 29 07:43:31 crc kubenswrapper[4795]: I1129 07:43:31.596743 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tr5zq"] Nov 29 07:43:31 crc kubenswrapper[4795]: I1129 07:43:31.597037 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tr5zq" podUID="e0cc4a28-2815-4b61-9893-3cf1b256872a" containerName="registry-server" containerID="cri-o://f70fc0e43f2c1b862249da5e89ad4025246fea6a8073ec4357f0d3c642210df5" gracePeriod=2 Nov 29 07:43:36 crc kubenswrapper[4795]: E1129 07:43:36.016602 4795 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of de191339879abc9aff8fe5a8bee6eb675c2d5be3583f2c7de8a0ef0123911047 is running failed: container process not found" containerID="de191339879abc9aff8fe5a8bee6eb675c2d5be3583f2c7de8a0ef0123911047" cmd=["grpc_health_probe","-addr=:50051"] Nov 29 07:43:36 crc kubenswrapper[4795]: E1129 07:43:36.017377 4795 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: 
checking if PID of de191339879abc9aff8fe5a8bee6eb675c2d5be3583f2c7de8a0ef0123911047 is running failed: container process not found" containerID="de191339879abc9aff8fe5a8bee6eb675c2d5be3583f2c7de8a0ef0123911047" cmd=["grpc_health_probe","-addr=:50051"] Nov 29 07:43:36 crc kubenswrapper[4795]: E1129 07:43:36.017619 4795 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of de191339879abc9aff8fe5a8bee6eb675c2d5be3583f2c7de8a0ef0123911047 is running failed: container process not found" containerID="de191339879abc9aff8fe5a8bee6eb675c2d5be3583f2c7de8a0ef0123911047" cmd=["grpc_health_probe","-addr=:50051"] Nov 29 07:43:36 crc kubenswrapper[4795]: E1129 07:43:36.017643 4795 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of de191339879abc9aff8fe5a8bee6eb675c2d5be3583f2c7de8a0ef0123911047 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-cgrzq" podUID="b36dad37-4975-4010-b93e-0ea5c932caab" containerName="registry-server" Nov 29 07:43:37 crc kubenswrapper[4795]: I1129 07:43:37.321624 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cgrzq" Nov 29 07:43:37 crc kubenswrapper[4795]: I1129 07:43:37.384285 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgrzq" event={"ID":"b36dad37-4975-4010-b93e-0ea5c932caab","Type":"ContainerDied","Data":"faa52db34fcdb4ff3f31af383fbd8a283796d8be470e6c709c9530470f010c7a"} Nov 29 07:43:37 crc kubenswrapper[4795]: I1129 07:43:37.384330 4795 scope.go:117] "RemoveContainer" containerID="de191339879abc9aff8fe5a8bee6eb675c2d5be3583f2c7de8a0ef0123911047" Nov 29 07:43:37 crc kubenswrapper[4795]: I1129 07:43:37.384446 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cgrzq" Nov 29 07:43:37 crc kubenswrapper[4795]: I1129 07:43:37.393515 4795 generic.go:334] "Generic (PLEG): container finished" podID="e0cc4a28-2815-4b61-9893-3cf1b256872a" containerID="f70fc0e43f2c1b862249da5e89ad4025246fea6a8073ec4357f0d3c642210df5" exitCode=0 Nov 29 07:43:37 crc kubenswrapper[4795]: I1129 07:43:37.393571 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tr5zq" event={"ID":"e0cc4a28-2815-4b61-9893-3cf1b256872a","Type":"ContainerDied","Data":"f70fc0e43f2c1b862249da5e89ad4025246fea6a8073ec4357f0d3c642210df5"} Nov 29 07:43:37 crc kubenswrapper[4795]: I1129 07:43:37.404133 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b36dad37-4975-4010-b93e-0ea5c932caab-catalog-content\") pod \"b36dad37-4975-4010-b93e-0ea5c932caab\" (UID: \"b36dad37-4975-4010-b93e-0ea5c932caab\") " Nov 29 07:43:37 crc kubenswrapper[4795]: I1129 07:43:37.404291 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfndp\" (UniqueName: \"kubernetes.io/projected/b36dad37-4975-4010-b93e-0ea5c932caab-kube-api-access-kfndp\") pod \"b36dad37-4975-4010-b93e-0ea5c932caab\" (UID: \"b36dad37-4975-4010-b93e-0ea5c932caab\") " Nov 29 07:43:37 crc kubenswrapper[4795]: I1129 07:43:37.404519 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b36dad37-4975-4010-b93e-0ea5c932caab-utilities\") pod \"b36dad37-4975-4010-b93e-0ea5c932caab\" (UID: \"b36dad37-4975-4010-b93e-0ea5c932caab\") " Nov 29 07:43:37 crc kubenswrapper[4795]: I1129 07:43:37.405247 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b36dad37-4975-4010-b93e-0ea5c932caab-utilities" (OuterVolumeSpecName: "utilities") pod 
"b36dad37-4975-4010-b93e-0ea5c932caab" (UID: "b36dad37-4975-4010-b93e-0ea5c932caab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:43:37 crc kubenswrapper[4795]: I1129 07:43:37.409914 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b36dad37-4975-4010-b93e-0ea5c932caab-kube-api-access-kfndp" (OuterVolumeSpecName: "kube-api-access-kfndp") pod "b36dad37-4975-4010-b93e-0ea5c932caab" (UID: "b36dad37-4975-4010-b93e-0ea5c932caab"). InnerVolumeSpecName "kube-api-access-kfndp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:43:37 crc kubenswrapper[4795]: I1129 07:43:37.461278 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b36dad37-4975-4010-b93e-0ea5c932caab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b36dad37-4975-4010-b93e-0ea5c932caab" (UID: "b36dad37-4975-4010-b93e-0ea5c932caab"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:43:37 crc kubenswrapper[4795]: I1129 07:43:37.506068 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b36dad37-4975-4010-b93e-0ea5c932caab-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:37 crc kubenswrapper[4795]: I1129 07:43:37.506106 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b36dad37-4975-4010-b93e-0ea5c932caab-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:37 crc kubenswrapper[4795]: I1129 07:43:37.506117 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfndp\" (UniqueName: \"kubernetes.io/projected/b36dad37-4975-4010-b93e-0ea5c932caab-kube-api-access-kfndp\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:37 crc kubenswrapper[4795]: I1129 07:43:37.527004 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tr5zq" Nov 29 07:43:37 crc kubenswrapper[4795]: I1129 07:43:37.596066 4795 scope.go:117] "RemoveContainer" containerID="06d82a55417d265a8be060f4c2ed35314a880be006bc56ba343f592c62fdfa48" Nov 29 07:43:37 crc kubenswrapper[4795]: I1129 07:43:37.606564 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbn9z\" (UniqueName: \"kubernetes.io/projected/e0cc4a28-2815-4b61-9893-3cf1b256872a-kube-api-access-xbn9z\") pod \"e0cc4a28-2815-4b61-9893-3cf1b256872a\" (UID: \"e0cc4a28-2815-4b61-9893-3cf1b256872a\") " Nov 29 07:43:37 crc kubenswrapper[4795]: I1129 07:43:37.606669 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0cc4a28-2815-4b61-9893-3cf1b256872a-utilities\") pod \"e0cc4a28-2815-4b61-9893-3cf1b256872a\" (UID: \"e0cc4a28-2815-4b61-9893-3cf1b256872a\") " Nov 29 07:43:37 crc kubenswrapper[4795]: I1129 
07:43:37.606718 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0cc4a28-2815-4b61-9893-3cf1b256872a-catalog-content\") pod \"e0cc4a28-2815-4b61-9893-3cf1b256872a\" (UID: \"e0cc4a28-2815-4b61-9893-3cf1b256872a\") "
Nov 29 07:43:37 crc kubenswrapper[4795]: I1129 07:43:37.607464 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0cc4a28-2815-4b61-9893-3cf1b256872a-utilities" (OuterVolumeSpecName: "utilities") pod "e0cc4a28-2815-4b61-9893-3cf1b256872a" (UID: "e0cc4a28-2815-4b61-9893-3cf1b256872a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:43:37 crc kubenswrapper[4795]: I1129 07:43:37.609105 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0cc4a28-2815-4b61-9893-3cf1b256872a-kube-api-access-xbn9z" (OuterVolumeSpecName: "kube-api-access-xbn9z") pod "e0cc4a28-2815-4b61-9893-3cf1b256872a" (UID: "e0cc4a28-2815-4b61-9893-3cf1b256872a"). InnerVolumeSpecName "kube-api-access-xbn9z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:43:37 crc kubenswrapper[4795]: I1129 07:43:37.655630 4795 scope.go:117] "RemoveContainer" containerID="d8cdd411472f4ac1940b6d995f3b01d1491999de590ab94dcb85889f1b58149f"
Nov 29 07:43:37 crc kubenswrapper[4795]: I1129 07:43:37.707717 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbn9z\" (UniqueName: \"kubernetes.io/projected/e0cc4a28-2815-4b61-9893-3cf1b256872a-kube-api-access-xbn9z\") on node \"crc\" DevicePath \"\""
Nov 29 07:43:37 crc kubenswrapper[4795]: I1129 07:43:37.707777 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0cc4a28-2815-4b61-9893-3cf1b256872a-utilities\") on node \"crc\" DevicePath \"\""
Nov 29 07:43:37 crc kubenswrapper[4795]: I1129 07:43:37.720209 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cgrzq"]
Nov 29 07:43:37 crc kubenswrapper[4795]: I1129 07:43:37.725543 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cgrzq"]
Nov 29 07:43:37 crc kubenswrapper[4795]: I1129 07:43:37.739551 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0cc4a28-2815-4b61-9893-3cf1b256872a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e0cc4a28-2815-4b61-9893-3cf1b256872a" (UID: "e0cc4a28-2815-4b61-9893-3cf1b256872a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:43:37 crc kubenswrapper[4795]: I1129 07:43:37.808962 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0cc4a28-2815-4b61-9893-3cf1b256872a-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 29 07:43:38 crc kubenswrapper[4795]: I1129 07:43:38.282929 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b36dad37-4975-4010-b93e-0ea5c932caab" path="/var/lib/kubelet/pods/b36dad37-4975-4010-b93e-0ea5c932caab/volumes"
Nov 29 07:43:38 crc kubenswrapper[4795]: I1129 07:43:38.401310 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vx8d5" event={"ID":"0deb15dc-57ff-4c83-8e81-ea7ebfda038d","Type":"ContainerStarted","Data":"fa1222e8b368f97208d5aedac2a28a6190187a6b679460a628dee8d1ad8f3192"}
Nov 29 07:43:38 crc kubenswrapper[4795]: I1129 07:43:38.405336 4795 generic.go:334] "Generic (PLEG): container finished" podID="96ff138b-b30f-4a36-9c9c-76cf2c9e8e89" containerID="99316dba36faecd7c432d08e4965f0fe72e2d8310640149f69f5f986bba5c5e2" exitCode=0
Nov 29 07:43:38 crc kubenswrapper[4795]: I1129 07:43:38.405405 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8bzbm" event={"ID":"96ff138b-b30f-4a36-9c9c-76cf2c9e8e89","Type":"ContainerDied","Data":"99316dba36faecd7c432d08e4965f0fe72e2d8310640149f69f5f986bba5c5e2"}
Nov 29 07:43:38 crc kubenswrapper[4795]: I1129 07:43:38.407400 4795 generic.go:334] "Generic (PLEG): container finished" podID="cc49b5c4-ec36-4051-b975-25f1e872a6f7" containerID="3715a89c1476485daf86081a33b5761ec3395c0b3dd12ab309da9f582e1bf235" exitCode=0
Nov 29 07:43:38 crc kubenswrapper[4795]: I1129 07:43:38.407467 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tl6cb" event={"ID":"cc49b5c4-ec36-4051-b975-25f1e872a6f7","Type":"ContainerDied","Data":"3715a89c1476485daf86081a33b5761ec3395c0b3dd12ab309da9f582e1bf235"}
Nov 29 07:43:38 crc kubenswrapper[4795]: I1129 07:43:38.413213 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h8pp6" event={"ID":"a04ecf63-b125-4a0e-9869-403c9cca5648","Type":"ContainerStarted","Data":"69537845c3b2f03fc18a33867d3b13fff0b928a264fea1005b6639a22cf266d3"}
Nov 29 07:43:38 crc kubenswrapper[4795]: I1129 07:43:38.415117 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8rg9" event={"ID":"e1461195-363d-49f5-b5c2-61b99f16ece2","Type":"ContainerStarted","Data":"7e07fa1c9f939bad76bcbce6675383db2677f30124a5b8196b0c5b5d6c48fc51"}
Nov 29 07:43:38 crc kubenswrapper[4795]: I1129 07:43:38.422088 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tr5zq" event={"ID":"e0cc4a28-2815-4b61-9893-3cf1b256872a","Type":"ContainerDied","Data":"e544139d96ec5a0c2f0ba6c092f4b5762fbc164e63938559ae52313dc7c3c95c"}
Nov 29 07:43:38 crc kubenswrapper[4795]: I1129 07:43:38.422145 4795 scope.go:117] "RemoveContainer" containerID="f70fc0e43f2c1b862249da5e89ad4025246fea6a8073ec4357f0d3c642210df5"
Nov 29 07:43:38 crc kubenswrapper[4795]: I1129 07:43:38.422163 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tr5zq"
Nov 29 07:43:38 crc kubenswrapper[4795]: I1129 07:43:38.441667 4795 scope.go:117] "RemoveContainer" containerID="e879f9ec39b0dab42ed7e5e7f385eb3aa7a91298af008ea2163c9538c79cdc19"
Nov 29 07:43:38 crc kubenswrapper[4795]: I1129 07:43:38.444216 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vx8d5" podStartSLOduration=3.952439849 podStartE2EDuration="1m54.444196342s" podCreationTimestamp="2025-11-29 07:41:44 +0000 UTC" firstStartedPulling="2025-11-29 07:41:47.122900225 +0000 UTC m=+153.098476015" lastFinishedPulling="2025-11-29 07:43:37.614656718 +0000 UTC m=+263.590232508" observedRunningTime="2025-11-29 07:43:38.422317101 +0000 UTC m=+264.397892891" watchObservedRunningTime="2025-11-29 07:43:38.444196342 +0000 UTC m=+264.419772132"
Nov 29 07:43:38 crc kubenswrapper[4795]: I1129 07:43:38.446009 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h8pp6" podStartSLOduration=2.876561859 podStartE2EDuration="1m53.446000833s" podCreationTimestamp="2025-11-29 07:41:45 +0000 UTC" firstStartedPulling="2025-11-29 07:41:47.096571841 +0000 UTC m=+153.072147631" lastFinishedPulling="2025-11-29 07:43:37.666010805 +0000 UTC m=+263.641586605" observedRunningTime="2025-11-29 07:43:38.440378244 +0000 UTC m=+264.415954034" watchObservedRunningTime="2025-11-29 07:43:38.446000833 +0000 UTC m=+264.421576643"
Nov 29 07:43:38 crc kubenswrapper[4795]: I1129 07:43:38.463814 4795 scope.go:117] "RemoveContainer" containerID="aca7500b649372d8fdb01dff8c1b882add00557b919da0a6f8486b579ebc988b"
Nov 29 07:43:38 crc kubenswrapper[4795]: I1129 07:43:38.487228 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t8rg9" podStartSLOduration=3.526182786 podStartE2EDuration="1m53.487210983s" podCreationTimestamp="2025-11-29 07:41:45 +0000 UTC" firstStartedPulling="2025-11-29 07:41:47.277840675 +0000 UTC m=+153.253416465" lastFinishedPulling="2025-11-29 07:43:37.238868872 +0000 UTC m=+263.214444662" observedRunningTime="2025-11-29 07:43:38.482736076 +0000 UTC m=+264.458311886" watchObservedRunningTime="2025-11-29 07:43:38.487210983 +0000 UTC m=+264.462786773"
Nov 29 07:43:38 crc kubenswrapper[4795]: I1129 07:43:38.526023 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tr5zq"]
Nov 29 07:43:38 crc kubenswrapper[4795]: I1129 07:43:38.533543 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tr5zq"]
Nov 29 07:43:40 crc kubenswrapper[4795]: I1129 07:43:40.286388 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0cc4a28-2815-4b61-9893-3cf1b256872a" path="/var/lib/kubelet/pods/e0cc4a28-2815-4b61-9893-3cf1b256872a/volumes"
Nov 29 07:43:40 crc kubenswrapper[4795]: I1129 07:43:40.435575 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8bzbm" event={"ID":"96ff138b-b30f-4a36-9c9c-76cf2c9e8e89","Type":"ContainerStarted","Data":"d93d118a033bf15573d9e5aa3a9a66036e6996e9fe5801607d59446bcb1fd8ee"}
Nov 29 07:43:40 crc kubenswrapper[4795]: I1129 07:43:40.453642 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8bzbm" podStartSLOduration=1.865103392 podStartE2EDuration="1m53.453626756s" podCreationTimestamp="2025-11-29 07:41:47 +0000 UTC" firstStartedPulling="2025-11-29 07:41:48.299115308 +0000 UTC m=+154.274691098" lastFinishedPulling="2025-11-29 07:43:39.887638672 +0000 UTC m=+265.863214462" observedRunningTime="2025-11-29 07:43:40.450070676 +0000 UTC m=+266.425646476" watchObservedRunningTime="2025-11-29 07:43:40.453626756 +0000 UTC m=+266.429202546"
Nov 29 07:43:41 crc kubenswrapper[4795]: I1129 07:43:41.443050 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tl6cb" event={"ID":"cc49b5c4-ec36-4051-b975-25f1e872a6f7","Type":"ContainerStarted","Data":"6c1e8420d6108bbf5a7b91999c79bfa0784cd84e50280de7da2eff08380374d9"}
Nov 29 07:43:41 crc kubenswrapper[4795]: I1129 07:43:41.461244 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tl6cb" podStartSLOduration=3.535356389 podStartE2EDuration="1m54.461228846s" podCreationTimestamp="2025-11-29 07:41:47 +0000 UTC" firstStartedPulling="2025-11-29 07:41:49.415176134 +0000 UTC m=+155.390751914" lastFinishedPulling="2025-11-29 07:43:40.341048581 +0000 UTC m=+266.316624371" observedRunningTime="2025-11-29 07:43:41.457404767 +0000 UTC m=+267.432980567" watchObservedRunningTime="2025-11-29 07:43:41.461228846 +0000 UTC m=+267.436804636"
Nov 29 07:43:45 crc kubenswrapper[4795]: I1129 07:43:45.291467 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vx8d5"
Nov 29 07:43:45 crc kubenswrapper[4795]: I1129 07:43:45.292084 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vx8d5"
Nov 29 07:43:45 crc kubenswrapper[4795]: I1129 07:43:45.341109 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vx8d5"
Nov 29 07:43:45 crc kubenswrapper[4795]: I1129 07:43:45.498879 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vx8d5"
Nov 29 07:43:45 crc kubenswrapper[4795]: I1129 07:43:45.533754 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h8pp6"
Nov 29 07:43:45 crc kubenswrapper[4795]: I1129 07:43:45.533838 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h8pp6"
Nov 29 07:43:45 crc kubenswrapper[4795]: I1129 07:43:45.568147 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h8pp6"
Nov 29 07:43:45 crc kubenswrapper[4795]: I1129 07:43:45.986987 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t8rg9"
Nov 29 07:43:45 crc kubenswrapper[4795]: I1129 07:43:45.987066 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t8rg9"
Nov 29 07:43:46 crc kubenswrapper[4795]: I1129 07:43:46.031004 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t8rg9"
Nov 29 07:43:46 crc kubenswrapper[4795]: I1129 07:43:46.512356 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h8pp6"
Nov 29 07:43:46 crc kubenswrapper[4795]: I1129 07:43:46.516785 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t8rg9"
Nov 29 07:43:47 crc kubenswrapper[4795]: I1129 07:43:47.552036 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8bzbm"
Nov 29 07:43:47 crc kubenswrapper[4795]: I1129 07:43:47.552280 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8bzbm"
Nov 29 07:43:47 crc kubenswrapper[4795]: I1129 07:43:47.588701 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8bzbm"
Nov 29 07:43:47 crc kubenswrapper[4795]: I1129 07:43:47.965648 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tl6cb"
Nov 29 07:43:47 crc kubenswrapper[4795]: I1129 07:43:47.965713 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tl6cb"
Nov 29 07:43:48 crc kubenswrapper[4795]: I1129 07:43:48.005516 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tl6cb"
Nov 29 07:43:48 crc kubenswrapper[4795]: I1129 07:43:48.186191 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t8rg9"]
Nov 29 07:43:48 crc kubenswrapper[4795]: I1129 07:43:48.476516 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t8rg9" podUID="e1461195-363d-49f5-b5c2-61b99f16ece2" containerName="registry-server" containerID="cri-o://7e07fa1c9f939bad76bcbce6675383db2677f30124a5b8196b0c5b5d6c48fc51" gracePeriod=2
Nov 29 07:43:48 crc kubenswrapper[4795]: I1129 07:43:48.535461 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8bzbm"
Nov 29 07:43:48 crc kubenswrapper[4795]: I1129 07:43:48.555296 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tl6cb"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.024328 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qpsg9"]
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.382171 4795 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.382858 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a" gracePeriod=15
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.382918 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2" gracePeriod=15
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.382931 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44" gracePeriod=15
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.382918 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a" gracePeriod=15
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.383026 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00" gracePeriod=15
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.385709 4795 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Nov 29 07:43:50 crc kubenswrapper[4795]: E1129 07:43:50.386663 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0cc4a28-2815-4b61-9893-3cf1b256872a" containerName="registry-server"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.386674 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0cc4a28-2815-4b61-9893-3cf1b256872a" containerName="registry-server"
Nov 29 07:43:50 crc kubenswrapper[4795]: E1129 07:43:50.386685 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.386691 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Nov 29 07:43:50 crc kubenswrapper[4795]: E1129 07:43:50.386701 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b36dad37-4975-4010-b93e-0ea5c932caab" containerName="extract-utilities"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.386707 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="b36dad37-4975-4010-b93e-0ea5c932caab" containerName="extract-utilities"
Nov 29 07:43:50 crc kubenswrapper[4795]: E1129 07:43:50.386716 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b36dad37-4975-4010-b93e-0ea5c932caab" containerName="registry-server"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.386721 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="b36dad37-4975-4010-b93e-0ea5c932caab" containerName="registry-server"
Nov 29 07:43:50 crc kubenswrapper[4795]: E1129 07:43:50.386730 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.386737 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Nov 29 07:43:50 crc kubenswrapper[4795]: E1129 07:43:50.386745 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.386753 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Nov 29 07:43:50 crc kubenswrapper[4795]: E1129 07:43:50.386763 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.386769 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Nov 29 07:43:50 crc kubenswrapper[4795]: E1129 07:43:50.386778 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.386784 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Nov 29 07:43:50 crc kubenswrapper[4795]: E1129 07:43:50.386792 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b36dad37-4975-4010-b93e-0ea5c932caab" containerName="extract-content"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.386798 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="b36dad37-4975-4010-b93e-0ea5c932caab" containerName="extract-content"
Nov 29 07:43:50 crc kubenswrapper[4795]: E1129 07:43:50.386806 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52dfdf3b-a9b0-4b03-84b6-4513f714a51f" containerName="pruner"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.386812 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="52dfdf3b-a9b0-4b03-84b6-4513f714a51f" containerName="pruner"
Nov 29 07:43:50 crc kubenswrapper[4795]: E1129 07:43:50.386820 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.386826 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Nov 29 07:43:50 crc kubenswrapper[4795]: E1129 07:43:50.386833 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0cc4a28-2815-4b61-9893-3cf1b256872a" containerName="extract-utilities"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.386840 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0cc4a28-2815-4b61-9893-3cf1b256872a" containerName="extract-utilities"
Nov 29 07:43:50 crc kubenswrapper[4795]: E1129 07:43:50.386845 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0cc4a28-2815-4b61-9893-3cf1b256872a" containerName="extract-content"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.386851 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0cc4a28-2815-4b61-9893-3cf1b256872a" containerName="extract-content"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.386941 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.386949 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.386956 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.386965 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.386975 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="b36dad37-4975-4010-b93e-0ea5c932caab" containerName="registry-server"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.386982 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0cc4a28-2815-4b61-9893-3cf1b256872a" containerName="registry-server"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.386992 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="52dfdf3b-a9b0-4b03-84b6-4513f714a51f" containerName="pruner"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.386999 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Nov 29 07:43:50 crc kubenswrapper[4795]: E1129 07:43:50.387091 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.387098 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.387176 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.388122 4795 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.388522 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.396762 4795 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.419817 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.577856 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.577907 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.577945 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.577966 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.578127 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.578256 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.578311 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.578348 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.594032 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tl6cb"]
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.594257 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tl6cb" podUID="cc49b5c4-ec36-4051-b975-25f1e872a6f7" containerName="registry-server" containerID="cri-o://6c1e8420d6108bbf5a7b91999c79bfa0784cd84e50280de7da2eff08380374d9" gracePeriod=2
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.680004 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.680134 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.680182 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.680198 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.680215 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.680295 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.680309 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.680305 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.680375 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.680419 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.680422 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.680464 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.680484 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.680519 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.680524 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.680622 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 29 07:43:50 crc kubenswrapper[4795]: I1129 07:43:50.718428 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 29 07:43:50 crc kubenswrapper[4795]: W1129 07:43:50.738450 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-eda404a17ce8b8cf1c28ba4e9a80537cb49e94b37997154fec278ca6b48b7213 WatchSource:0}: Error finding container eda404a17ce8b8cf1c28ba4e9a80537cb49e94b37997154fec278ca6b48b7213: Status 404 returned error can't find the container with id eda404a17ce8b8cf1c28ba4e9a80537cb49e94b37997154fec278ca6b48b7213
Nov 29 07:43:50 crc kubenswrapper[4795]: E1129 07:43:50.740774 4795 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.107:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187c6a7b0c2b0d2f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-29 07:43:50.740061487 +0000 UTC m=+276.715637277,LastTimestamp:2025-11-29 07:43:50.740061487 +0000 UTC m=+276.715637277,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 29 07:43:51 crc kubenswrapper[4795]: I1129 07:43:51.495155 4795 generic.go:334] "Generic (PLEG): container finished" podID="cc49b5c4-ec36-4051-b975-25f1e872a6f7" containerID="6c1e8420d6108bbf5a7b91999c79bfa0784cd84e50280de7da2eff08380374d9" exitCode=0
Nov 29 07:43:51 crc kubenswrapper[4795]: I1129 07:43:51.495214 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tl6cb" event={"ID":"cc49b5c4-ec36-4051-b975-25f1e872a6f7","Type":"ContainerDied","Data":"6c1e8420d6108bbf5a7b91999c79bfa0784cd84e50280de7da2eff08380374d9"}
Nov 29 07:43:51 crc kubenswrapper[4795]: I1129 07:43:51.498369 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Nov 29 07:43:51 crc kubenswrapper[4795]: I1129 07:43:51.499929 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Nov 29 07:43:51 crc kubenswrapper[4795]: I1129 07:43:51.500717 4795 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792"
containerID="6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2" exitCode=0 Nov 29 07:43:51 crc kubenswrapper[4795]: I1129 07:43:51.500751 4795 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00" exitCode=0 Nov 29 07:43:51 crc kubenswrapper[4795]: I1129 07:43:51.500758 4795 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a" exitCode=0 Nov 29 07:43:51 crc kubenswrapper[4795]: I1129 07:43:51.500766 4795 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44" exitCode=2 Nov 29 07:43:51 crc kubenswrapper[4795]: I1129 07:43:51.500878 4795 scope.go:117] "RemoveContainer" containerID="efa34ac2960898820609db88aa051a6567a7b6bd0bfc7fcb741205f6f5c3f402" Nov 29 07:43:51 crc kubenswrapper[4795]: I1129 07:43:51.502735 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"3c9879552b54392c76220a72d52c526330975ac85ba968bd283dd2caeb0b7f7d"} Nov 29 07:43:51 crc kubenswrapper[4795]: I1129 07:43:51.502805 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"eda404a17ce8b8cf1c28ba4e9a80537cb49e94b37997154fec278ca6b48b7213"} Nov 29 07:43:51 crc kubenswrapper[4795]: I1129 07:43:51.503530 4795 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:51 crc kubenswrapper[4795]: I1129 07:43:51.505946 4795 generic.go:334] "Generic (PLEG): container finished" podID="e1461195-363d-49f5-b5c2-61b99f16ece2" containerID="7e07fa1c9f939bad76bcbce6675383db2677f30124a5b8196b0c5b5d6c48fc51" exitCode=0 Nov 29 07:43:51 crc kubenswrapper[4795]: I1129 07:43:51.506022 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8rg9" event={"ID":"e1461195-363d-49f5-b5c2-61b99f16ece2","Type":"ContainerDied","Data":"7e07fa1c9f939bad76bcbce6675383db2677f30124a5b8196b0c5b5d6c48fc51"} Nov 29 07:43:51 crc kubenswrapper[4795]: I1129 07:43:51.507842 4795 generic.go:334] "Generic (PLEG): container finished" podID="5876d38e-799e-4ca3-ac4a-c04c3070491a" containerID="4dbd55216791fefe5988e5e64635fea01a4f908dc7ad783f1d8b4f33a438b080" exitCode=0 Nov 29 07:43:51 crc kubenswrapper[4795]: I1129 07:43:51.507886 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"5876d38e-799e-4ca3-ac4a-c04c3070491a","Type":"ContainerDied","Data":"4dbd55216791fefe5988e5e64635fea01a4f908dc7ad783f1d8b4f33a438b080"} Nov 29 07:43:51 crc kubenswrapper[4795]: I1129 07:43:51.508897 4795 status_manager.go:851] "Failed to get status for pod" podUID="5876d38e-799e-4ca3-ac4a-c04c3070491a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:51 crc kubenswrapper[4795]: I1129 07:43:51.509503 4795 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.123292 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tl6cb" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.124125 4795 status_manager.go:851] "Failed to get status for pod" podUID="5876d38e-799e-4ca3-ac4a-c04c3070491a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.124466 4795 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.124832 4795 status_manager.go:851] "Failed to get status for pod" podUID="cc49b5c4-ec36-4051-b975-25f1e872a6f7" pod="openshift-marketplace/redhat-marketplace-tl6cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-tl6cb\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.201812 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jp6pp\" (UniqueName: \"kubernetes.io/projected/cc49b5c4-ec36-4051-b975-25f1e872a6f7-kube-api-access-jp6pp\") pod \"cc49b5c4-ec36-4051-b975-25f1e872a6f7\" (UID: \"cc49b5c4-ec36-4051-b975-25f1e872a6f7\") " Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 
07:43:52.201881 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc49b5c4-ec36-4051-b975-25f1e872a6f7-catalog-content\") pod \"cc49b5c4-ec36-4051-b975-25f1e872a6f7\" (UID: \"cc49b5c4-ec36-4051-b975-25f1e872a6f7\") " Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.201940 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc49b5c4-ec36-4051-b975-25f1e872a6f7-utilities\") pod \"cc49b5c4-ec36-4051-b975-25f1e872a6f7\" (UID: \"cc49b5c4-ec36-4051-b975-25f1e872a6f7\") " Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.203096 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc49b5c4-ec36-4051-b975-25f1e872a6f7-utilities" (OuterVolumeSpecName: "utilities") pod "cc49b5c4-ec36-4051-b975-25f1e872a6f7" (UID: "cc49b5c4-ec36-4051-b975-25f1e872a6f7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.209378 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc49b5c4-ec36-4051-b975-25f1e872a6f7-kube-api-access-jp6pp" (OuterVolumeSpecName: "kube-api-access-jp6pp") pod "cc49b5c4-ec36-4051-b975-25f1e872a6f7" (UID: "cc49b5c4-ec36-4051-b975-25f1e872a6f7"). InnerVolumeSpecName "kube-api-access-jp6pp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.221890 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc49b5c4-ec36-4051-b975-25f1e872a6f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc49b5c4-ec36-4051-b975-25f1e872a6f7" (UID: "cc49b5c4-ec36-4051-b975-25f1e872a6f7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.259794 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8rg9" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.260364 4795 status_manager.go:851] "Failed to get status for pod" podUID="5876d38e-799e-4ca3-ac4a-c04c3070491a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.260774 4795 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.261193 4795 status_manager.go:851] "Failed to get status for pod" podUID="e1461195-363d-49f5-b5c2-61b99f16ece2" pod="openshift-marketplace/certified-operators-t8rg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t8rg9\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.261478 4795 status_manager.go:851] "Failed to get status for pod" podUID="cc49b5c4-ec36-4051-b975-25f1e872a6f7" pod="openshift-marketplace/redhat-marketplace-tl6cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-tl6cb\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.303140 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1461195-363d-49f5-b5c2-61b99f16ece2-utilities\") pod \"e1461195-363d-49f5-b5c2-61b99f16ece2\" (UID: \"e1461195-363d-49f5-b5c2-61b99f16ece2\") " Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.303287 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7swxq\" (UniqueName: \"kubernetes.io/projected/e1461195-363d-49f5-b5c2-61b99f16ece2-kube-api-access-7swxq\") pod \"e1461195-363d-49f5-b5c2-61b99f16ece2\" (UID: \"e1461195-363d-49f5-b5c2-61b99f16ece2\") " Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.303332 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1461195-363d-49f5-b5c2-61b99f16ece2-catalog-content\") pod \"e1461195-363d-49f5-b5c2-61b99f16ece2\" (UID: \"e1461195-363d-49f5-b5c2-61b99f16ece2\") " Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.303677 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc49b5c4-ec36-4051-b975-25f1e872a6f7-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.303702 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jp6pp\" (UniqueName: \"kubernetes.io/projected/cc49b5c4-ec36-4051-b975-25f1e872a6f7-kube-api-access-jp6pp\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.303717 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc49b5c4-ec36-4051-b975-25f1e872a6f7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.305495 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1461195-363d-49f5-b5c2-61b99f16ece2-utilities" (OuterVolumeSpecName: "utilities") pod 
"e1461195-363d-49f5-b5c2-61b99f16ece2" (UID: "e1461195-363d-49f5-b5c2-61b99f16ece2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.321107 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1461195-363d-49f5-b5c2-61b99f16ece2-kube-api-access-7swxq" (OuterVolumeSpecName: "kube-api-access-7swxq") pod "e1461195-363d-49f5-b5c2-61b99f16ece2" (UID: "e1461195-363d-49f5-b5c2-61b99f16ece2"). InnerVolumeSpecName "kube-api-access-7swxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.349549 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1461195-363d-49f5-b5c2-61b99f16ece2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e1461195-363d-49f5-b5c2-61b99f16ece2" (UID: "e1461195-363d-49f5-b5c2-61b99f16ece2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.405004 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1461195-363d-49f5-b5c2-61b99f16ece2-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.405044 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7swxq\" (UniqueName: \"kubernetes.io/projected/e1461195-363d-49f5-b5c2-61b99f16ece2-kube-api-access-7swxq\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.405056 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1461195-363d-49f5-b5c2-61b99f16ece2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.515974 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8rg9" event={"ID":"e1461195-363d-49f5-b5c2-61b99f16ece2","Type":"ContainerDied","Data":"19f03f7b2ef52a2d34ff0260d110528204c398ab5599d211c542a6ca9ca9145c"} Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.516000 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t8rg9" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.516023 4795 scope.go:117] "RemoveContainer" containerID="7e07fa1c9f939bad76bcbce6675383db2677f30124a5b8196b0c5b5d6c48fc51" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.516527 4795 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.516730 4795 status_manager.go:851] "Failed to get status for pod" podUID="e1461195-363d-49f5-b5c2-61b99f16ece2" pod="openshift-marketplace/certified-operators-t8rg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t8rg9\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.517017 4795 status_manager.go:851] "Failed to get status for pod" podUID="cc49b5c4-ec36-4051-b975-25f1e872a6f7" pod="openshift-marketplace/redhat-marketplace-tl6cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-tl6cb\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.517303 4795 status_manager.go:851] "Failed to get status for pod" podUID="5876d38e-799e-4ca3-ac4a-c04c3070491a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.518935 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-tl6cb" event={"ID":"cc49b5c4-ec36-4051-b975-25f1e872a6f7","Type":"ContainerDied","Data":"e5c5fbdb8ca78693e89201429e91538827eee78cc5974836f43bddbfbec30a1c"} Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.519097 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tl6cb" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.519735 4795 status_manager.go:851] "Failed to get status for pod" podUID="5876d38e-799e-4ca3-ac4a-c04c3070491a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.519956 4795 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.520157 4795 status_manager.go:851] "Failed to get status for pod" podUID="e1461195-363d-49f5-b5c2-61b99f16ece2" pod="openshift-marketplace/certified-operators-t8rg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t8rg9\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.520386 4795 status_manager.go:851] "Failed to get status for pod" podUID="cc49b5c4-ec36-4051-b975-25f1e872a6f7" pod="openshift-marketplace/redhat-marketplace-tl6cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-tl6cb\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 
29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.524692 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.524334 4795 status_manager.go:851] "Failed to get status for pod" podUID="e1461195-363d-49f5-b5c2-61b99f16ece2" pod="openshift-marketplace/certified-operators-t8rg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t8rg9\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.527533 4795 status_manager.go:851] "Failed to get status for pod" podUID="cc49b5c4-ec36-4051-b975-25f1e872a6f7" pod="openshift-marketplace/redhat-marketplace-tl6cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-tl6cb\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.527903 4795 status_manager.go:851] "Failed to get status for pod" podUID="5876d38e-799e-4ca3-ac4a-c04c3070491a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.528383 4795 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.528695 4795 status_manager.go:851] "Failed to get status for pod" podUID="e1461195-363d-49f5-b5c2-61b99f16ece2" 
pod="openshift-marketplace/certified-operators-t8rg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t8rg9\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.529016 4795 status_manager.go:851] "Failed to get status for pod" podUID="cc49b5c4-ec36-4051-b975-25f1e872a6f7" pod="openshift-marketplace/redhat-marketplace-tl6cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-tl6cb\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.529360 4795 status_manager.go:851] "Failed to get status for pod" podUID="5876d38e-799e-4ca3-ac4a-c04c3070491a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.529708 4795 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.536316 4795 scope.go:117] "RemoveContainer" containerID="713e874aea16a055d4957b05e84f8934dc631049c221f0a810e1c4a1093ce290" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.573968 4795 scope.go:117] "RemoveContainer" containerID="cde60f25cbe20ca4ba16e97201b8a5cdf54f5fe2bf7fbdf55941c2092f209b9c" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.598466 4795 scope.go:117] "RemoveContainer" containerID="6c1e8420d6108bbf5a7b91999c79bfa0784cd84e50280de7da2eff08380374d9" Nov 29 07:43:52 crc 
kubenswrapper[4795]: I1129 07:43:52.638094 4795 scope.go:117] "RemoveContainer" containerID="3715a89c1476485daf86081a33b5761ec3395c0b3dd12ab309da9f582e1bf235" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.652892 4795 scope.go:117] "RemoveContainer" containerID="e10df092c1261177eeb72db701ad9830ccfa4a19a01134d8bc4e5593bb18f573" Nov 29 07:43:52 crc kubenswrapper[4795]: E1129 07:43:52.783897 4795 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:52 crc kubenswrapper[4795]: E1129 07:43:52.784073 4795 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:52 crc kubenswrapper[4795]: E1129 07:43:52.784224 4795 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:52 crc kubenswrapper[4795]: E1129 07:43:52.784369 4795 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:52 crc kubenswrapper[4795]: E1129 07:43:52.784517 4795 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.784540 4795 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to 
update lease" Nov 29 07:43:52 crc kubenswrapper[4795]: E1129 07:43:52.784711 4795 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="200ms" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.810343 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.810986 4795 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.812384 4795 status_manager.go:851] "Failed to get status for pod" podUID="e1461195-363d-49f5-b5c2-61b99f16ece2" pod="openshift-marketplace/certified-operators-t8rg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t8rg9\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.812993 4795 status_manager.go:851] "Failed to get status for pod" podUID="cc49b5c4-ec36-4051-b975-25f1e872a6f7" pod="openshift-marketplace/redhat-marketplace-tl6cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-tl6cb\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.813404 4795 status_manager.go:851] "Failed to get status for pod" podUID="5876d38e-799e-4ca3-ac4a-c04c3070491a" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.914382 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5876d38e-799e-4ca3-ac4a-c04c3070491a-var-lock\") pod \"5876d38e-799e-4ca3-ac4a-c04c3070491a\" (UID: \"5876d38e-799e-4ca3-ac4a-c04c3070491a\") " Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.914698 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5876d38e-799e-4ca3-ac4a-c04c3070491a-kubelet-dir\") pod \"5876d38e-799e-4ca3-ac4a-c04c3070491a\" (UID: \"5876d38e-799e-4ca3-ac4a-c04c3070491a\") " Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.914816 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5876d38e-799e-4ca3-ac4a-c04c3070491a-kube-api-access\") pod \"5876d38e-799e-4ca3-ac4a-c04c3070491a\" (UID: \"5876d38e-799e-4ca3-ac4a-c04c3070491a\") " Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.914511 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5876d38e-799e-4ca3-ac4a-c04c3070491a-var-lock" (OuterVolumeSpecName: "var-lock") pod "5876d38e-799e-4ca3-ac4a-c04c3070491a" (UID: "5876d38e-799e-4ca3-ac4a-c04c3070491a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.914965 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5876d38e-799e-4ca3-ac4a-c04c3070491a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5876d38e-799e-4ca3-ac4a-c04c3070491a" (UID: "5876d38e-799e-4ca3-ac4a-c04c3070491a"). 
InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.915846 4795 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5876d38e-799e-4ca3-ac4a-c04c3070491a-var-lock\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.915871 4795 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5876d38e-799e-4ca3-ac4a-c04c3070491a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:52 crc kubenswrapper[4795]: I1129 07:43:52.920311 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5876d38e-799e-4ca3-ac4a-c04c3070491a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5876d38e-799e-4ca3-ac4a-c04c3070491a" (UID: "5876d38e-799e-4ca3-ac4a-c04c3070491a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:43:52 crc kubenswrapper[4795]: E1129 07:43:52.985562 4795 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="400ms" Nov 29 07:43:53 crc kubenswrapper[4795]: I1129 07:43:53.017124 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5876d38e-799e-4ca3-ac4a-c04c3070491a-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:53 crc kubenswrapper[4795]: E1129 07:43:53.107133 4795 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.107:6443: connect: connection refused" 
event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187c6a7b0c2b0d2f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-29 07:43:50.740061487 +0000 UTC m=+276.715637277,LastTimestamp:2025-11-29 07:43:50.740061487 +0000 UTC m=+276.715637277,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 29 07:43:53 crc kubenswrapper[4795]: E1129 07:43:53.386357 4795 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="800ms" Nov 29 07:43:53 crc kubenswrapper[4795]: I1129 07:43:53.533077 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:43:53 crc kubenswrapper[4795]: I1129 07:43:53.533082 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"5876d38e-799e-4ca3-ac4a-c04c3070491a","Type":"ContainerDied","Data":"82ed222fb3a30647ff3fd372751dcc6d4be47383e3ffc548086f7e4bcdf17e5e"} Nov 29 07:43:53 crc kubenswrapper[4795]: I1129 07:43:53.533319 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82ed222fb3a30647ff3fd372751dcc6d4be47383e3ffc548086f7e4bcdf17e5e" Nov 29 07:43:53 crc kubenswrapper[4795]: I1129 07:43:53.539819 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 29 07:43:53 crc kubenswrapper[4795]: I1129 07:43:53.540870 4795 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a" exitCode=0 Nov 29 07:43:53 crc kubenswrapper[4795]: I1129 07:43:53.557480 4795 status_manager.go:851] "Failed to get status for pod" podUID="e1461195-363d-49f5-b5c2-61b99f16ece2" pod="openshift-marketplace/certified-operators-t8rg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t8rg9\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:53 crc kubenswrapper[4795]: I1129 07:43:53.558233 4795 status_manager.go:851] "Failed to get status for pod" podUID="cc49b5c4-ec36-4051-b975-25f1e872a6f7" pod="openshift-marketplace/redhat-marketplace-tl6cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-tl6cb\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:53 crc kubenswrapper[4795]: I1129 07:43:53.558766 4795 status_manager.go:851] "Failed to get 
status for pod" podUID="5876d38e-799e-4ca3-ac4a-c04c3070491a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:53 crc kubenswrapper[4795]: I1129 07:43:53.559135 4795 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:54 crc kubenswrapper[4795]: E1129 07:43:54.187063 4795 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="1.6s" Nov 29 07:43:54 crc kubenswrapper[4795]: I1129 07:43:54.266252 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 29 07:43:54 crc kubenswrapper[4795]: I1129 07:43:54.267098 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:43:54 crc kubenswrapper[4795]: I1129 07:43:54.267758 4795 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:54 crc kubenswrapper[4795]: I1129 07:43:54.268493 4795 status_manager.go:851] "Failed to get status for pod" podUID="5876d38e-799e-4ca3-ac4a-c04c3070491a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:54 crc kubenswrapper[4795]: I1129 07:43:54.268816 4795 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:54 crc kubenswrapper[4795]: I1129 07:43:54.269272 4795 status_manager.go:851] "Failed to get status for pod" podUID="e1461195-363d-49f5-b5c2-61b99f16ece2" pod="openshift-marketplace/certified-operators-t8rg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t8rg9\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:54 crc kubenswrapper[4795]: I1129 07:43:54.270125 4795 status_manager.go:851] "Failed to get status for pod" podUID="cc49b5c4-ec36-4051-b975-25f1e872a6f7" pod="openshift-marketplace/redhat-marketplace-tl6cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-tl6cb\": 
dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:54 crc kubenswrapper[4795]: I1129 07:43:54.278843 4795 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:54 crc kubenswrapper[4795]: I1129 07:43:54.279393 4795 status_manager.go:851] "Failed to get status for pod" podUID="e1461195-363d-49f5-b5c2-61b99f16ece2" pod="openshift-marketplace/certified-operators-t8rg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t8rg9\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:54 crc kubenswrapper[4795]: I1129 07:43:54.280356 4795 status_manager.go:851] "Failed to get status for pod" podUID="cc49b5c4-ec36-4051-b975-25f1e872a6f7" pod="openshift-marketplace/redhat-marketplace-tl6cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-tl6cb\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:54 crc kubenswrapper[4795]: I1129 07:43:54.280706 4795 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:54 crc kubenswrapper[4795]: I1129 07:43:54.281017 4795 status_manager.go:851] "Failed to get status for pod" podUID="5876d38e-799e-4ca3-ac4a-c04c3070491a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 
38.102.83.107:6443: connect: connection refused" Nov 29 07:43:54 crc kubenswrapper[4795]: I1129 07:43:54.335140 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 29 07:43:54 crc kubenswrapper[4795]: I1129 07:43:54.335181 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 29 07:43:54 crc kubenswrapper[4795]: I1129 07:43:54.335242 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 29 07:43:54 crc kubenswrapper[4795]: I1129 07:43:54.335694 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:43:54 crc kubenswrapper[4795]: I1129 07:43:54.335728 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:43:54 crc kubenswrapper[4795]: I1129 07:43:54.335841 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:43:54 crc kubenswrapper[4795]: I1129 07:43:54.336235 4795 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:54 crc kubenswrapper[4795]: I1129 07:43:54.336350 4795 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:54 crc kubenswrapper[4795]: I1129 07:43:54.336452 4795 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:54 crc kubenswrapper[4795]: I1129 07:43:54.549227 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 29 07:43:54 crc kubenswrapper[4795]: I1129 07:43:54.550071 4795 scope.go:117] "RemoveContainer" containerID="6319d19e0aa1655aaff5e3abcc13d8ff9ff26f401c28cf23593ecaa23d1ac5a2" Nov 29 07:43:54 crc kubenswrapper[4795]: I1129 07:43:54.550110 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:43:54 crc kubenswrapper[4795]: I1129 07:43:54.550844 4795 status_manager.go:851] "Failed to get status for pod" podUID="e1461195-363d-49f5-b5c2-61b99f16ece2" pod="openshift-marketplace/certified-operators-t8rg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t8rg9\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:54 crc kubenswrapper[4795]: I1129 07:43:54.551198 4795 status_manager.go:851] "Failed to get status for pod" podUID="cc49b5c4-ec36-4051-b975-25f1e872a6f7" pod="openshift-marketplace/redhat-marketplace-tl6cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-tl6cb\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:54 crc kubenswrapper[4795]: I1129 07:43:54.552495 4795 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:54 crc kubenswrapper[4795]: I1129 07:43:54.552873 4795 status_manager.go:851] "Failed to get status for pod" podUID="5876d38e-799e-4ca3-ac4a-c04c3070491a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:54 crc kubenswrapper[4795]: I1129 07:43:54.553657 4795 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": 
dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:54 crc kubenswrapper[4795]: I1129 07:43:54.563835 4795 status_manager.go:851] "Failed to get status for pod" podUID="5876d38e-799e-4ca3-ac4a-c04c3070491a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:54 crc kubenswrapper[4795]: I1129 07:43:54.564628 4795 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:54 crc kubenswrapper[4795]: I1129 07:43:54.564747 4795 scope.go:117] "RemoveContainer" containerID="f7b73774f26f4ea48586f8faf7f53be02b4678c85937c8e402868610f36d4d00" Nov 29 07:43:54 crc kubenswrapper[4795]: I1129 07:43:54.564965 4795 status_manager.go:851] "Failed to get status for pod" podUID="e1461195-363d-49f5-b5c2-61b99f16ece2" pod="openshift-marketplace/certified-operators-t8rg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t8rg9\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:54 crc kubenswrapper[4795]: I1129 07:43:54.565189 4795 status_manager.go:851] "Failed to get status for pod" podUID="cc49b5c4-ec36-4051-b975-25f1e872a6f7" pod="openshift-marketplace/redhat-marketplace-tl6cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-tl6cb\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:54 crc kubenswrapper[4795]: I1129 07:43:54.565673 4795 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:54 crc kubenswrapper[4795]: I1129 07:43:54.576110 4795 scope.go:117] "RemoveContainer" containerID="cba2e40f95664da3b482ce01c3432b7a421b785791004833aeaffbc2b6bdcb9a" Nov 29 07:43:54 crc kubenswrapper[4795]: I1129 07:43:54.587461 4795 scope.go:117] "RemoveContainer" containerID="c385dac962c7c2c54fe6b5eabe286bc75d0518601f68b6845677764c5916ad44" Nov 29 07:43:54 crc kubenswrapper[4795]: I1129 07:43:54.598254 4795 scope.go:117] "RemoveContainer" containerID="102735e5872ecbca5ba247f4fc457619ce4b383def180851d69bf37f7ea1051a" Nov 29 07:43:54 crc kubenswrapper[4795]: I1129 07:43:54.610114 4795 scope.go:117] "RemoveContainer" containerID="5ab5a13e4f69a8311eb8bf99463875959e0c2ca3a645511c98d703cc9b01f893" Nov 29 07:43:55 crc kubenswrapper[4795]: E1129 07:43:55.788278 4795 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="3.2s" Nov 29 07:43:56 crc kubenswrapper[4795]: I1129 07:43:56.281333 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Nov 29 07:43:58 crc kubenswrapper[4795]: E1129 07:43:58.855747 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:43:58Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:43:58Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:43:58Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:43:58Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:59 crc kubenswrapper[4795]: E1129 07:43:58.856270 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:59 crc kubenswrapper[4795]: E1129 07:43:58.856469 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:59 crc kubenswrapper[4795]: E1129 07:43:58.856678 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 
07:43:59 crc kubenswrapper[4795]: E1129 07:43:58.856938 4795 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:43:59 crc kubenswrapper[4795]: E1129 07:43:58.856954 4795 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 07:43:59 crc kubenswrapper[4795]: E1129 07:43:58.988854 4795 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="6.4s" Nov 29 07:44:02 crc kubenswrapper[4795]: I1129 07:44:02.275465 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:44:02 crc kubenswrapper[4795]: I1129 07:44:02.278745 4795 status_manager.go:851] "Failed to get status for pod" podUID="e1461195-363d-49f5-b5c2-61b99f16ece2" pod="openshift-marketplace/certified-operators-t8rg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t8rg9\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:44:02 crc kubenswrapper[4795]: I1129 07:44:02.279333 4795 status_manager.go:851] "Failed to get status for pod" podUID="cc49b5c4-ec36-4051-b975-25f1e872a6f7" pod="openshift-marketplace/redhat-marketplace-tl6cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-tl6cb\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:44:02 crc kubenswrapper[4795]: I1129 07:44:02.280761 4795 status_manager.go:851] "Failed to get status for pod" podUID="5876d38e-799e-4ca3-ac4a-c04c3070491a" pod="openshift-kube-apiserver/installer-9-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:44:02 crc kubenswrapper[4795]: I1129 07:44:02.281066 4795 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:44:02 crc kubenswrapper[4795]: I1129 07:44:02.295623 4795 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="84bbfda4-f166-406b-8459-d8cad5d21032" Nov 29 07:44:02 crc kubenswrapper[4795]: I1129 07:44:02.295664 4795 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="84bbfda4-f166-406b-8459-d8cad5d21032" Nov 29 07:44:02 crc kubenswrapper[4795]: E1129 07:44:02.296309 4795 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:44:02 crc kubenswrapper[4795]: I1129 07:44:02.297446 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:44:02 crc kubenswrapper[4795]: I1129 07:44:02.593861 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"67a520f76ecacb7aa0b42fee571bdac85991b209b5056700c81554f31ac40df3"} Nov 29 07:44:03 crc kubenswrapper[4795]: E1129 07:44:03.108139 4795 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.107:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187c6a7b0c2b0d2f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-29 07:43:50.740061487 +0000 UTC m=+276.715637277,LastTimestamp:2025-11-29 07:43:50.740061487 +0000 UTC m=+276.715637277,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 29 07:44:03 crc kubenswrapper[4795]: I1129 07:44:03.603110 4795 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="03dc79e8043eefcffc6ad27669752f29a96b5820027cae09ed9c4312cc8572f5" exitCode=0 Nov 29 07:44:03 crc kubenswrapper[4795]: I1129 07:44:03.603239 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"03dc79e8043eefcffc6ad27669752f29a96b5820027cae09ed9c4312cc8572f5"} Nov 29 07:44:03 crc kubenswrapper[4795]: I1129 07:44:03.604154 4795 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="84bbfda4-f166-406b-8459-d8cad5d21032" Nov 29 07:44:03 crc kubenswrapper[4795]: I1129 07:44:03.604204 4795 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="84bbfda4-f166-406b-8459-d8cad5d21032" Nov 29 07:44:03 crc kubenswrapper[4795]: I1129 07:44:03.604182 4795 status_manager.go:851] "Failed to get status for pod" podUID="cc49b5c4-ec36-4051-b975-25f1e872a6f7" pod="openshift-marketplace/redhat-marketplace-tl6cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-tl6cb\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:44:03 crc kubenswrapper[4795]: I1129 07:44:03.604627 4795 status_manager.go:851] "Failed to get status for pod" podUID="e1461195-363d-49f5-b5c2-61b99f16ece2" pod="openshift-marketplace/certified-operators-t8rg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t8rg9\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:44:03 crc kubenswrapper[4795]: I1129 07:44:03.605206 4795 status_manager.go:851] "Failed to get status for pod" podUID="5876d38e-799e-4ca3-ac4a-c04c3070491a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:44:03 crc kubenswrapper[4795]: E1129 07:44:03.605240 4795 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:44:03 crc kubenswrapper[4795]: I1129 07:44:03.605706 4795 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:44:03 crc kubenswrapper[4795]: I1129 07:44:03.610511 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 29 07:44:03 crc kubenswrapper[4795]: I1129 07:44:03.610568 4795 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="508f7cb88a5b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c" exitCode=1 Nov 29 07:44:03 crc kubenswrapper[4795]: I1129 07:44:03.610625 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"508f7cb88a5b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c"} Nov 29 07:44:03 crc kubenswrapper[4795]: I1129 07:44:03.611052 4795 scope.go:117] "RemoveContainer" containerID="508f7cb88a5b33c09bb52519555900031de44693e10de750ed34d9bb4c31738c" Nov 29 07:44:03 crc kubenswrapper[4795]: I1129 07:44:03.611362 4795 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:44:03 crc kubenswrapper[4795]: I1129 07:44:03.612702 4795 status_manager.go:851] "Failed to get status for pod" podUID="e1461195-363d-49f5-b5c2-61b99f16ece2" pod="openshift-marketplace/certified-operators-t8rg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t8rg9\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:44:03 crc kubenswrapper[4795]: I1129 07:44:03.613510 4795 status_manager.go:851] "Failed to get status for pod" podUID="cc49b5c4-ec36-4051-b975-25f1e872a6f7" pod="openshift-marketplace/redhat-marketplace-tl6cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-tl6cb\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:44:03 crc kubenswrapper[4795]: I1129 07:44:03.614009 4795 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:44:03 crc kubenswrapper[4795]: I1129 07:44:03.614553 4795 status_manager.go:851] "Failed to get status for pod" podUID="5876d38e-799e-4ca3-ac4a-c04c3070491a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:44:04 crc kubenswrapper[4795]: I1129 07:44:04.280557 4795 status_manager.go:851] "Failed to get status for pod" podUID="e1461195-363d-49f5-b5c2-61b99f16ece2" 
pod="openshift-marketplace/certified-operators-t8rg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t8rg9\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:44:04 crc kubenswrapper[4795]: I1129 07:44:04.281140 4795 status_manager.go:851] "Failed to get status for pod" podUID="cc49b5c4-ec36-4051-b975-25f1e872a6f7" pod="openshift-marketplace/redhat-marketplace-tl6cb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-tl6cb\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:44:04 crc kubenswrapper[4795]: I1129 07:44:04.281404 4795 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:44:04 crc kubenswrapper[4795]: I1129 07:44:04.281688 4795 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:44:04 crc kubenswrapper[4795]: I1129 07:44:04.281936 4795 status_manager.go:851] "Failed to get status for pod" podUID="5876d38e-799e-4ca3-ac4a-c04c3070491a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:44:04 crc kubenswrapper[4795]: I1129 07:44:04.282234 4795 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 29 07:44:05 crc kubenswrapper[4795]: I1129 07:44:05.624438 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 29 07:44:05 crc kubenswrapper[4795]: I1129 07:44:05.624877 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bd83ba72fb05976d4bf000e88cf7e56eed4cd5fdad3c7cad297e02220e864c43"} Nov 29 07:44:05 crc kubenswrapper[4795]: I1129 07:44:05.626696 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"09e623d969910e355b30ff1b61faac511450741d1ece6d712e43bfd3c231939f"} Nov 29 07:44:05 crc kubenswrapper[4795]: I1129 07:44:05.626740 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"83b2049a9a1364318c9c2a63e7ed40e0b57ba19e717236b3b4598e29ce18ada1"} Nov 29 07:44:06 crc kubenswrapper[4795]: I1129 07:44:06.634865 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1e3999bc6630ec89acf247f998d2b67e613e9db42fb05df007dc67629e98e1f1"} Nov 29 07:44:06 crc kubenswrapper[4795]: I1129 07:44:06.634928 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5021c90fe63880fa1b0d1a1c316e4691e2ff3870e576b4ae0c859c8a1d935e1f"} Nov 29 07:44:06 crc kubenswrapper[4795]: I1129 07:44:06.634942 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6acdf1a695ee993d0821511068853957c147c1339285141d58112cade86ed343"} Nov 29 07:44:06 crc kubenswrapper[4795]: I1129 07:44:06.635025 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:44:06 crc kubenswrapper[4795]: I1129 07:44:06.635158 4795 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="84bbfda4-f166-406b-8459-d8cad5d21032" Nov 29 07:44:06 crc kubenswrapper[4795]: I1129 07:44:06.635186 4795 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="84bbfda4-f166-406b-8459-d8cad5d21032" Nov 29 07:44:07 crc kubenswrapper[4795]: I1129 07:44:07.298109 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:44:07 crc kubenswrapper[4795]: I1129 07:44:07.298434 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:44:07 crc kubenswrapper[4795]: I1129 07:44:07.302710 4795 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 29 07:44:07 crc kubenswrapper[4795]: [+]log ok Nov 29 07:44:07 crc kubenswrapper[4795]: [+]etcd ok Nov 29 07:44:07 crc kubenswrapper[4795]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 29 07:44:07 crc kubenswrapper[4795]: 
[+]poststarthook/openshift.io-api-request-count-filter ok Nov 29 07:44:07 crc kubenswrapper[4795]: [+]poststarthook/openshift.io-startkubeinformers ok Nov 29 07:44:07 crc kubenswrapper[4795]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Nov 29 07:44:07 crc kubenswrapper[4795]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Nov 29 07:44:07 crc kubenswrapper[4795]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 29 07:44:07 crc kubenswrapper[4795]: [+]poststarthook/generic-apiserver-start-informers ok Nov 29 07:44:07 crc kubenswrapper[4795]: [+]poststarthook/priority-and-fairness-config-consumer ok Nov 29 07:44:07 crc kubenswrapper[4795]: [+]poststarthook/priority-and-fairness-filter ok Nov 29 07:44:07 crc kubenswrapper[4795]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 29 07:44:07 crc kubenswrapper[4795]: [+]poststarthook/start-apiextensions-informers ok Nov 29 07:44:07 crc kubenswrapper[4795]: [+]poststarthook/start-apiextensions-controllers ok Nov 29 07:44:07 crc kubenswrapper[4795]: [+]poststarthook/crd-informer-synced ok Nov 29 07:44:07 crc kubenswrapper[4795]: [+]poststarthook/start-system-namespaces-controller ok Nov 29 07:44:07 crc kubenswrapper[4795]: [+]poststarthook/start-cluster-authentication-info-controller ok Nov 29 07:44:07 crc kubenswrapper[4795]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Nov 29 07:44:07 crc kubenswrapper[4795]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Nov 29 07:44:07 crc kubenswrapper[4795]: [+]poststarthook/start-legacy-token-tracking-controller ok Nov 29 07:44:07 crc kubenswrapper[4795]: [+]poststarthook/start-service-ip-repair-controllers ok Nov 29 07:44:07 crc kubenswrapper[4795]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Nov 29 07:44:07 crc kubenswrapper[4795]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Nov 29 07:44:07 crc kubenswrapper[4795]: 
[+]poststarthook/priority-and-fairness-config-producer ok Nov 29 07:44:07 crc kubenswrapper[4795]: [+]poststarthook/bootstrap-controller ok Nov 29 07:44:07 crc kubenswrapper[4795]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Nov 29 07:44:07 crc kubenswrapper[4795]: [+]poststarthook/start-kube-aggregator-informers ok Nov 29 07:44:07 crc kubenswrapper[4795]: [+]poststarthook/apiservice-status-local-available-controller ok Nov 29 07:44:07 crc kubenswrapper[4795]: [+]poststarthook/apiservice-status-remote-available-controller ok Nov 29 07:44:07 crc kubenswrapper[4795]: [+]poststarthook/apiservice-registration-controller ok Nov 29 07:44:07 crc kubenswrapper[4795]: [+]poststarthook/apiservice-wait-for-first-sync ok Nov 29 07:44:07 crc kubenswrapper[4795]: [+]poststarthook/apiservice-discovery-controller ok Nov 29 07:44:07 crc kubenswrapper[4795]: [+]poststarthook/kube-apiserver-autoregistration ok Nov 29 07:44:07 crc kubenswrapper[4795]: [+]autoregister-completion ok Nov 29 07:44:07 crc kubenswrapper[4795]: [+]poststarthook/apiservice-openapi-controller ok Nov 29 07:44:07 crc kubenswrapper[4795]: [+]poststarthook/apiservice-openapiv3-controller ok Nov 29 07:44:07 crc kubenswrapper[4795]: livez check failed Nov 29 07:44:07 crc kubenswrapper[4795]: I1129 07:44:07.302758 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:44:11 crc kubenswrapper[4795]: I1129 07:44:11.643882 4795 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:44:11 crc kubenswrapper[4795]: I1129 07:44:11.894209 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:44:12 crc kubenswrapper[4795]: I1129 07:44:12.303656 4795 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:44:12 crc kubenswrapper[4795]: I1129 07:44:12.307066 4795 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="08257020-33e6-4649-bcaa-7f9261f6d1bf" Nov 29 07:44:12 crc kubenswrapper[4795]: I1129 07:44:12.664205 4795 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="84bbfda4-f166-406b-8459-d8cad5d21032" Nov 29 07:44:12 crc kubenswrapper[4795]: I1129 07:44:12.664246 4795 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="84bbfda4-f166-406b-8459-d8cad5d21032" Nov 29 07:44:13 crc kubenswrapper[4795]: I1129 07:44:13.582098 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:44:13 crc kubenswrapper[4795]: I1129 07:44:13.585940 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:44:13 crc kubenswrapper[4795]: I1129 07:44:13.669816 4795 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="84bbfda4-f166-406b-8459-d8cad5d21032" Nov 29 07:44:13 crc kubenswrapper[4795]: I1129 07:44:13.669845 4795 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="84bbfda4-f166-406b-8459-d8cad5d21032" Nov 29 07:44:13 crc kubenswrapper[4795]: I1129 07:44:13.674997 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:44:14 crc kubenswrapper[4795]: I1129 07:44:14.290783 4795 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" 
pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="08257020-33e6-4649-bcaa-7f9261f6d1bf" Nov 29 07:44:14 crc kubenswrapper[4795]: I1129 07:44:14.674030 4795 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="84bbfda4-f166-406b-8459-d8cad5d21032" Nov 29 07:44:14 crc kubenswrapper[4795]: I1129 07:44:14.674060 4795 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="84bbfda4-f166-406b-8459-d8cad5d21032" Nov 29 07:44:14 crc kubenswrapper[4795]: I1129 07:44:14.677031 4795 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="08257020-33e6-4649-bcaa-7f9261f6d1bf" Nov 29 07:44:15 crc kubenswrapper[4795]: I1129 07:44:15.053335 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" podUID="34ec03b0-6d17-4934-84fa-4323c7599fd0" containerName="oauth-openshift" containerID="cri-o://168f05a266d19b8a31f0e3df3d0757e2ae6323220b53f4762a6cfced03f1224e" gracePeriod=15 Nov 29 07:44:15 crc kubenswrapper[4795]: I1129 07:44:15.681833 4795 generic.go:334] "Generic (PLEG): container finished" podID="34ec03b0-6d17-4934-84fa-4323c7599fd0" containerID="168f05a266d19b8a31f0e3df3d0757e2ae6323220b53f4762a6cfced03f1224e" exitCode=0 Nov 29 07:44:15 crc kubenswrapper[4795]: I1129 07:44:15.681885 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" event={"ID":"34ec03b0-6d17-4934-84fa-4323c7599fd0","Type":"ContainerDied","Data":"168f05a266d19b8a31f0e3df3d0757e2ae6323220b53f4762a6cfced03f1224e"} Nov 29 07:44:15 crc kubenswrapper[4795]: I1129 07:44:15.974976 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.091750 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/34ec03b0-6d17-4934-84fa-4323c7599fd0-audit-dir\") pod \"34ec03b0-6d17-4934-84fa-4323c7599fd0\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.091808 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gnlv\" (UniqueName: \"kubernetes.io/projected/34ec03b0-6d17-4934-84fa-4323c7599fd0-kube-api-access-9gnlv\") pod \"34ec03b0-6d17-4934-84fa-4323c7599fd0\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.091838 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-session\") pod \"34ec03b0-6d17-4934-84fa-4323c7599fd0\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.091864 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-router-certs\") pod \"34ec03b0-6d17-4934-84fa-4323c7599fd0\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.091881 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-user-template-error\") pod \"34ec03b0-6d17-4934-84fa-4323c7599fd0\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 
07:44:16.091906 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-service-ca\") pod \"34ec03b0-6d17-4934-84fa-4323c7599fd0\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.091933 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/34ec03b0-6d17-4934-84fa-4323c7599fd0-audit-policies\") pod \"34ec03b0-6d17-4934-84fa-4323c7599fd0\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.091952 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-ocp-branding-template\") pod \"34ec03b0-6d17-4934-84fa-4323c7599fd0\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.091973 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-user-idp-0-file-data\") pod \"34ec03b0-6d17-4934-84fa-4323c7599fd0\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.091994 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-serving-cert\") pod \"34ec03b0-6d17-4934-84fa-4323c7599fd0\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.092016 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-trusted-ca-bundle\") pod \"34ec03b0-6d17-4934-84fa-4323c7599fd0\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.092032 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-cliconfig\") pod \"34ec03b0-6d17-4934-84fa-4323c7599fd0\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.092055 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-user-template-login\") pod \"34ec03b0-6d17-4934-84fa-4323c7599fd0\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.092080 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-user-template-provider-selection\") pod \"34ec03b0-6d17-4934-84fa-4323c7599fd0\" (UID: \"34ec03b0-6d17-4934-84fa-4323c7599fd0\") " Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.092625 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34ec03b0-6d17-4934-84fa-4323c7599fd0-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "34ec03b0-6d17-4934-84fa-4323c7599fd0" (UID: "34ec03b0-6d17-4934-84fa-4323c7599fd0"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.092984 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "34ec03b0-6d17-4934-84fa-4323c7599fd0" (UID: "34ec03b0-6d17-4934-84fa-4323c7599fd0"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.093003 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "34ec03b0-6d17-4934-84fa-4323c7599fd0" (UID: "34ec03b0-6d17-4934-84fa-4323c7599fd0"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.093503 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "34ec03b0-6d17-4934-84fa-4323c7599fd0" (UID: "34ec03b0-6d17-4934-84fa-4323c7599fd0"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.093646 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34ec03b0-6d17-4934-84fa-4323c7599fd0-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "34ec03b0-6d17-4934-84fa-4323c7599fd0" (UID: "34ec03b0-6d17-4934-84fa-4323c7599fd0"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.098385 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "34ec03b0-6d17-4934-84fa-4323c7599fd0" (UID: "34ec03b0-6d17-4934-84fa-4323c7599fd0"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.098874 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "34ec03b0-6d17-4934-84fa-4323c7599fd0" (UID: "34ec03b0-6d17-4934-84fa-4323c7599fd0"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.099167 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34ec03b0-6d17-4934-84fa-4323c7599fd0-kube-api-access-9gnlv" (OuterVolumeSpecName: "kube-api-access-9gnlv") pod "34ec03b0-6d17-4934-84fa-4323c7599fd0" (UID: "34ec03b0-6d17-4934-84fa-4323c7599fd0"). InnerVolumeSpecName "kube-api-access-9gnlv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.099826 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "34ec03b0-6d17-4934-84fa-4323c7599fd0" (UID: "34ec03b0-6d17-4934-84fa-4323c7599fd0"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.100054 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "34ec03b0-6d17-4934-84fa-4323c7599fd0" (UID: "34ec03b0-6d17-4934-84fa-4323c7599fd0"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.100204 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "34ec03b0-6d17-4934-84fa-4323c7599fd0" (UID: "34ec03b0-6d17-4934-84fa-4323c7599fd0"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.102099 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "34ec03b0-6d17-4934-84fa-4323c7599fd0" (UID: "34ec03b0-6d17-4934-84fa-4323c7599fd0"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.102436 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "34ec03b0-6d17-4934-84fa-4323c7599fd0" (UID: "34ec03b0-6d17-4934-84fa-4323c7599fd0"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.102620 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "34ec03b0-6d17-4934-84fa-4323c7599fd0" (UID: "34ec03b0-6d17-4934-84fa-4323c7599fd0"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.193345 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gnlv\" (UniqueName: \"kubernetes.io/projected/34ec03b0-6d17-4934-84fa-4323c7599fd0-kube-api-access-9gnlv\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.193379 4795 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.193391 4795 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.193403 4795 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.193412 4795 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.193421 4795 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/34ec03b0-6d17-4934-84fa-4323c7599fd0-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.193432 4795 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.193440 4795 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.193448 4795 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.193457 4795 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.193465 4795 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.193474 4795 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.193483 4795 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/34ec03b0-6d17-4934-84fa-4323c7599fd0-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.193493 4795 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/34ec03b0-6d17-4934-84fa-4323c7599fd0-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.691864 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" event={"ID":"34ec03b0-6d17-4934-84fa-4323c7599fd0","Type":"ContainerDied","Data":"8b4f04abdb491d97dfffa3429e02c871dd9c6f788dbc691b6ab0774379030ef1"} Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.691937 4795 scope.go:117] "RemoveContainer" containerID="168f05a266d19b8a31f0e3df3d0757e2ae6323220b53f4762a6cfced03f1224e" Nov 29 07:44:16 crc kubenswrapper[4795]: I1129 07:44:16.692308 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qpsg9" Nov 29 07:44:20 crc kubenswrapper[4795]: I1129 07:44:20.911887 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 29 07:44:21 crc kubenswrapper[4795]: I1129 07:44:21.084300 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 29 07:44:21 crc kubenswrapper[4795]: I1129 07:44:21.154016 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 29 07:44:21 crc kubenswrapper[4795]: I1129 07:44:21.472102 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 29 07:44:21 crc kubenswrapper[4795]: I1129 07:44:21.574814 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 29 07:44:21 crc kubenswrapper[4795]: I1129 07:44:21.631876 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 29 07:44:21 crc kubenswrapper[4795]: I1129 07:44:21.752883 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 29 07:44:21 crc kubenswrapper[4795]: I1129 07:44:21.898248 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:44:22 crc kubenswrapper[4795]: I1129 07:44:22.067258 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 29 07:44:22 crc kubenswrapper[4795]: I1129 07:44:22.171691 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 29 07:44:22 crc kubenswrapper[4795]: I1129 
07:44:22.312772 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 29 07:44:22 crc kubenswrapper[4795]: I1129 07:44:22.329576 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 29 07:44:22 crc kubenswrapper[4795]: I1129 07:44:22.471063 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 29 07:44:22 crc kubenswrapper[4795]: I1129 07:44:22.965205 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 29 07:44:23 crc kubenswrapper[4795]: I1129 07:44:23.010560 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 29 07:44:23 crc kubenswrapper[4795]: I1129 07:44:23.051384 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 29 07:44:23 crc kubenswrapper[4795]: I1129 07:44:23.240444 4795 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 29 07:44:23 crc kubenswrapper[4795]: I1129 07:44:23.629905 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 29 07:44:23 crc kubenswrapper[4795]: I1129 07:44:23.651200 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 29 07:44:23 crc kubenswrapper[4795]: I1129 07:44:23.932648 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 29 07:44:23 crc kubenswrapper[4795]: I1129 07:44:23.996799 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 29 07:44:24 crc kubenswrapper[4795]: I1129 07:44:24.014946 4795 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 29 07:44:24 crc kubenswrapper[4795]: I1129 07:44:24.269885 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 29 07:44:24 crc kubenswrapper[4795]: I1129 07:44:24.384211 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 29 07:44:24 crc kubenswrapper[4795]: I1129 07:44:24.437705 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 29 07:44:24 crc kubenswrapper[4795]: I1129 07:44:24.618911 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 29 07:44:24 crc kubenswrapper[4795]: I1129 07:44:24.630160 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 29 07:44:24 crc kubenswrapper[4795]: I1129 07:44:24.791956 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 29 07:44:24 crc kubenswrapper[4795]: I1129 07:44:24.846213 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 29 07:44:24 crc kubenswrapper[4795]: I1129 07:44:24.863658 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 29 07:44:24 crc kubenswrapper[4795]: I1129 07:44:24.878739 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 29 07:44:24 crc kubenswrapper[4795]: I1129 07:44:24.922278 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 29 07:44:25 crc 
kubenswrapper[4795]: I1129 07:44:25.049363 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 29 07:44:25 crc kubenswrapper[4795]: I1129 07:44:25.351330 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 29 07:44:25 crc kubenswrapper[4795]: I1129 07:44:25.351733 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 29 07:44:25 crc kubenswrapper[4795]: I1129 07:44:25.515889 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 29 07:44:25 crc kubenswrapper[4795]: I1129 07:44:25.646951 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 29 07:44:25 crc kubenswrapper[4795]: I1129 07:44:25.667480 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 29 07:44:25 crc kubenswrapper[4795]: I1129 07:44:25.894345 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 29 07:44:26 crc kubenswrapper[4795]: I1129 07:44:26.018354 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 29 07:44:26 crc kubenswrapper[4795]: I1129 07:44:26.048977 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 29 07:44:26 crc kubenswrapper[4795]: I1129 07:44:26.170462 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 29 07:44:26 crc kubenswrapper[4795]: I1129 07:44:26.250053 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 29 
07:44:26 crc kubenswrapper[4795]: I1129 07:44:26.271387 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 29 07:44:26 crc kubenswrapper[4795]: I1129 07:44:26.297706 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 29 07:44:26 crc kubenswrapper[4795]: I1129 07:44:26.299843 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 29 07:44:26 crc kubenswrapper[4795]: I1129 07:44:26.406429 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 29 07:44:26 crc kubenswrapper[4795]: I1129 07:44:26.437898 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 29 07:44:26 crc kubenswrapper[4795]: I1129 07:44:26.444819 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 29 07:44:26 crc kubenswrapper[4795]: I1129 07:44:26.581371 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 29 07:44:26 crc kubenswrapper[4795]: I1129 07:44:26.708136 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 29 07:44:26 crc kubenswrapper[4795]: I1129 07:44:26.878740 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 29 07:44:26 crc kubenswrapper[4795]: I1129 07:44:26.976812 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 29 07:44:26 crc kubenswrapper[4795]: I1129 07:44:26.980557 4795 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 29 07:44:26 crc kubenswrapper[4795]: I1129 07:44:26.987266 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 29 07:44:27 crc kubenswrapper[4795]: I1129 07:44:27.067690 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 29 07:44:27 crc kubenswrapper[4795]: I1129 07:44:27.078688 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 29 07:44:27 crc kubenswrapper[4795]: I1129 07:44:27.217470 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 29 07:44:27 crc kubenswrapper[4795]: I1129 07:44:27.233101 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 29 07:44:27 crc kubenswrapper[4795]: I1129 07:44:27.246494 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 29 07:44:27 crc kubenswrapper[4795]: I1129 07:44:27.350441 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 29 07:44:27 crc kubenswrapper[4795]: I1129 07:44:27.352719 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 29 07:44:27 crc kubenswrapper[4795]: I1129 07:44:27.475497 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 29 07:44:27 crc kubenswrapper[4795]: I1129 07:44:27.529388 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 29 07:44:27 crc 
kubenswrapper[4795]: I1129 07:44:27.572017 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 29 07:44:27 crc kubenswrapper[4795]: I1129 07:44:27.572507 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 29 07:44:27 crc kubenswrapper[4795]: I1129 07:44:27.607464 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 29 07:44:27 crc kubenswrapper[4795]: I1129 07:44:27.673707 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 29 07:44:27 crc kubenswrapper[4795]: I1129 07:44:27.754927 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 29 07:44:28 crc kubenswrapper[4795]: I1129 07:44:28.108385 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 29 07:44:28 crc kubenswrapper[4795]: I1129 07:44:28.184571 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 29 07:44:28 crc kubenswrapper[4795]: I1129 07:44:28.193257 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 29 07:44:28 crc kubenswrapper[4795]: I1129 07:44:28.249579 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 29 07:44:29 crc kubenswrapper[4795]: E1129 07:44:29.354705 4795 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.08s" Nov 29 07:44:29 crc kubenswrapper[4795]: I1129 07:44:29.401531 4795 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 29 07:44:29 crc kubenswrapper[4795]: I1129 07:44:29.404494 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 29 07:44:29 crc kubenswrapper[4795]: I1129 07:44:29.404866 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 29 07:44:29 crc kubenswrapper[4795]: I1129 07:44:29.405012 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 29 07:44:29 crc kubenswrapper[4795]: I1129 07:44:29.405526 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 29 07:44:29 crc kubenswrapper[4795]: I1129 07:44:29.406762 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 29 07:44:29 crc kubenswrapper[4795]: I1129 07:44:29.407279 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 29 07:44:29 crc kubenswrapper[4795]: I1129 07:44:29.407492 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 29 07:44:29 crc kubenswrapper[4795]: I1129 07:44:29.407727 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 29 07:44:29 crc kubenswrapper[4795]: I1129 07:44:29.407879 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 29 07:44:29 crc kubenswrapper[4795]: I1129 07:44:29.408093 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 29 07:44:29 crc kubenswrapper[4795]: I1129 
07:44:29.408218 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 29 07:44:29 crc kubenswrapper[4795]: I1129 07:44:29.408341 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 29 07:44:29 crc kubenswrapper[4795]: I1129 07:44:29.408515 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 29 07:44:29 crc kubenswrapper[4795]: I1129 07:44:29.408753 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 29 07:44:29 crc kubenswrapper[4795]: I1129 07:44:29.408856 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 29 07:44:29 crc kubenswrapper[4795]: I1129 07:44:29.408795 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 29 07:44:29 crc kubenswrapper[4795]: I1129 07:44:29.409220 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 29 07:44:29 crc kubenswrapper[4795]: I1129 07:44:29.408309 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 29 07:44:29 crc kubenswrapper[4795]: I1129 07:44:29.408267 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 29 07:44:29 crc kubenswrapper[4795]: I1129 07:44:29.410225 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 29 07:44:29 crc kubenswrapper[4795]: I1129 07:44:29.413766 4795 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 29 07:44:29 crc kubenswrapper[4795]: I1129 07:44:29.414016 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 29 07:44:29 crc kubenswrapper[4795]: I1129 07:44:29.437753 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 29 07:44:29 crc kubenswrapper[4795]: I1129 07:44:29.439029 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 29 07:44:29 crc kubenswrapper[4795]: I1129 07:44:29.470934 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 29 07:44:29 crc kubenswrapper[4795]: I1129 07:44:29.501874 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 29 07:44:29 crc kubenswrapper[4795]: I1129 07:44:29.522070 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 29 07:44:29 crc kubenswrapper[4795]: I1129 07:44:29.548400 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 29 07:44:29 crc kubenswrapper[4795]: I1129 07:44:29.628346 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 29 07:44:29 crc kubenswrapper[4795]: I1129 07:44:29.771134 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 29 07:44:29 crc kubenswrapper[4795]: I1129 07:44:29.829224 4795 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 29 07:44:29 crc kubenswrapper[4795]: I1129 07:44:29.830606 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 29 07:44:29 crc kubenswrapper[4795]: I1129 07:44:29.840673 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 29 07:44:29 crc kubenswrapper[4795]: I1129 07:44:29.880474 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 29 07:44:29 crc kubenswrapper[4795]: I1129 07:44:29.899785 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 29 07:44:29 crc kubenswrapper[4795]: I1129 07:44:29.973301 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 29 07:44:29 crc kubenswrapper[4795]: I1129 07:44:29.978358 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 29 07:44:30 crc kubenswrapper[4795]: I1129 07:44:30.216581 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 29 07:44:30 crc kubenswrapper[4795]: I1129 07:44:30.262313 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 29 07:44:30 crc kubenswrapper[4795]: I1129 07:44:30.388709 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 29 07:44:30 crc kubenswrapper[4795]: I1129 07:44:30.430743 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 29 07:44:30 crc kubenswrapper[4795]: 
I1129 07:44:30.479417 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 29 07:44:30 crc kubenswrapper[4795]: I1129 07:44:30.486456 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 29 07:44:30 crc kubenswrapper[4795]: I1129 07:44:30.548827 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 29 07:44:30 crc kubenswrapper[4795]: I1129 07:44:30.686518 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 29 07:44:30 crc kubenswrapper[4795]: I1129 07:44:30.781700 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 29 07:44:30 crc kubenswrapper[4795]: I1129 07:44:30.787025 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 29 07:44:30 crc kubenswrapper[4795]: I1129 07:44:30.810137 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 29 07:44:30 crc kubenswrapper[4795]: I1129 07:44:30.823480 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 29 07:44:30 crc kubenswrapper[4795]: I1129 07:44:30.884404 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 29 07:44:30 crc kubenswrapper[4795]: I1129 07:44:30.979404 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 29 07:44:30 crc kubenswrapper[4795]: I1129 07:44:30.980290 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 29 07:44:31 
crc kubenswrapper[4795]: I1129 07:44:31.024573 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.101017 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.111772 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.126199 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.133437 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.205106 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.261712 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.412427 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.419324 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.450393 4795 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.574395 4795 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.580085 4795 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.582512 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=41.582495255 podStartE2EDuration="41.582495255s" podCreationTimestamp="2025-11-29 07:43:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:44:11.29814127 +0000 UTC m=+297.273717060" watchObservedRunningTime="2025-11-29 07:44:31.582495255 +0000 UTC m=+317.558071045" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.584019 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tl6cb","openshift-marketplace/certified-operators-t8rg9","openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-qpsg9"] Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.584073 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.587476 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.600612 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=20.600569888 podStartE2EDuration="20.600569888s" podCreationTimestamp="2025-11-29 07:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:44:31.597644905 +0000 UTC m=+317.573220695" 
watchObservedRunningTime="2025-11-29 07:44:31.600569888 +0000 UTC m=+317.576145668" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.627984 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.710695 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.733385 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.736531 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.790447 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-d846b65cc-9mx9r"] Nov 29 07:44:31 crc kubenswrapper[4795]: E1129 07:44:31.790702 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1461195-363d-49f5-b5c2-61b99f16ece2" containerName="extract-utilities" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.790717 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1461195-363d-49f5-b5c2-61b99f16ece2" containerName="extract-utilities" Nov 29 07:44:31 crc kubenswrapper[4795]: E1129 07:44:31.790728 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc49b5c4-ec36-4051-b975-25f1e872a6f7" containerName="registry-server" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.790736 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc49b5c4-ec36-4051-b975-25f1e872a6f7" containerName="registry-server" Nov 29 07:44:31 crc kubenswrapper[4795]: E1129 07:44:31.790748 4795 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="34ec03b0-6d17-4934-84fa-4323c7599fd0" containerName="oauth-openshift" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.790756 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ec03b0-6d17-4934-84fa-4323c7599fd0" containerName="oauth-openshift" Nov 29 07:44:31 crc kubenswrapper[4795]: E1129 07:44:31.790766 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1461195-363d-49f5-b5c2-61b99f16ece2" containerName="registry-server" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.790773 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1461195-363d-49f5-b5c2-61b99f16ece2" containerName="registry-server" Nov 29 07:44:31 crc kubenswrapper[4795]: E1129 07:44:31.790781 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc49b5c4-ec36-4051-b975-25f1e872a6f7" containerName="extract-utilities" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.790789 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc49b5c4-ec36-4051-b975-25f1e872a6f7" containerName="extract-utilities" Nov 29 07:44:31 crc kubenswrapper[4795]: E1129 07:44:31.790802 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc49b5c4-ec36-4051-b975-25f1e872a6f7" containerName="extract-content" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.790808 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc49b5c4-ec36-4051-b975-25f1e872a6f7" containerName="extract-content" Nov 29 07:44:31 crc kubenswrapper[4795]: E1129 07:44:31.790818 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1461195-363d-49f5-b5c2-61b99f16ece2" containerName="extract-content" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.790824 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1461195-363d-49f5-b5c2-61b99f16ece2" containerName="extract-content" Nov 29 07:44:31 crc kubenswrapper[4795]: E1129 07:44:31.790833 4795 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5876d38e-799e-4ca3-ac4a-c04c3070491a" containerName="installer" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.790839 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="5876d38e-799e-4ca3-ac4a-c04c3070491a" containerName="installer" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.790921 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1461195-363d-49f5-b5c2-61b99f16ece2" containerName="registry-server" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.790932 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc49b5c4-ec36-4051-b975-25f1e872a6f7" containerName="registry-server" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.790938 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="5876d38e-799e-4ca3-ac4a-c04c3070491a" containerName="installer" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.790949 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ec03b0-6d17-4934-84fa-4323c7599fd0" containerName="oauth-openshift" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.791319 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.795924 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.795924 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.795935 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.796756 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.796844 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.797041 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.797300 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.797309 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.797410 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.797581 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" 
Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.799969 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.805734 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.807310 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.812105 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.812842 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.813144 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.813881 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.820892 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.872402 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d90f263f-8adf-405d-b1d2-6f00583e4723-v4-0-config-user-template-error\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: \"d90f263f-8adf-405d-b1d2-6f00583e4723\") " pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc 
kubenswrapper[4795]: I1129 07:44:31.872457 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d90f263f-8adf-405d-b1d2-6f00583e4723-v4-0-config-user-template-login\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: \"d90f263f-8adf-405d-b1d2-6f00583e4723\") " pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.872483 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d90f263f-8adf-405d-b1d2-6f00583e4723-v4-0-config-system-router-certs\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: \"d90f263f-8adf-405d-b1d2-6f00583e4723\") " pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.872509 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d90f263f-8adf-405d-b1d2-6f00583e4723-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: \"d90f263f-8adf-405d-b1d2-6f00583e4723\") " pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.872566 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d90f263f-8adf-405d-b1d2-6f00583e4723-v4-0-config-system-session\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: \"d90f263f-8adf-405d-b1d2-6f00583e4723\") " pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.872613 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d90f263f-8adf-405d-b1d2-6f00583e4723-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: \"d90f263f-8adf-405d-b1d2-6f00583e4723\") " pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.872647 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d90f263f-8adf-405d-b1d2-6f00583e4723-v4-0-config-system-service-ca\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: \"d90f263f-8adf-405d-b1d2-6f00583e4723\") " pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.872670 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d90f263f-8adf-405d-b1d2-6f00583e4723-audit-policies\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: \"d90f263f-8adf-405d-b1d2-6f00583e4723\") " pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.872710 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d90f263f-8adf-405d-b1d2-6f00583e4723-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: \"d90f263f-8adf-405d-b1d2-6f00583e4723\") " pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.872745 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rvfr\" (UniqueName: \"kubernetes.io/projected/d90f263f-8adf-405d-b1d2-6f00583e4723-kube-api-access-7rvfr\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: 
\"d90f263f-8adf-405d-b1d2-6f00583e4723\") " pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.872780 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d90f263f-8adf-405d-b1d2-6f00583e4723-v4-0-config-system-serving-cert\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: \"d90f263f-8adf-405d-b1d2-6f00583e4723\") " pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.872801 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d90f263f-8adf-405d-b1d2-6f00583e4723-audit-dir\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: \"d90f263f-8adf-405d-b1d2-6f00583e4723\") " pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.872826 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d90f263f-8adf-405d-b1d2-6f00583e4723-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: \"d90f263f-8adf-405d-b1d2-6f00583e4723\") " pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.872856 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d90f263f-8adf-405d-b1d2-6f00583e4723-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: \"d90f263f-8adf-405d-b1d2-6f00583e4723\") " pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.873797 
4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.956557 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.973559 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rvfr\" (UniqueName: \"kubernetes.io/projected/d90f263f-8adf-405d-b1d2-6f00583e4723-kube-api-access-7rvfr\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: \"d90f263f-8adf-405d-b1d2-6f00583e4723\") " pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.973632 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d90f263f-8adf-405d-b1d2-6f00583e4723-v4-0-config-system-serving-cert\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: \"d90f263f-8adf-405d-b1d2-6f00583e4723\") " pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.973655 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d90f263f-8adf-405d-b1d2-6f00583e4723-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: \"d90f263f-8adf-405d-b1d2-6f00583e4723\") " pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.973678 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d90f263f-8adf-405d-b1d2-6f00583e4723-audit-dir\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: \"d90f263f-8adf-405d-b1d2-6f00583e4723\") " 
pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.973701 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d90f263f-8adf-405d-b1d2-6f00583e4723-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: \"d90f263f-8adf-405d-b1d2-6f00583e4723\") " pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.973727 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d90f263f-8adf-405d-b1d2-6f00583e4723-v4-0-config-user-template-error\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: \"d90f263f-8adf-405d-b1d2-6f00583e4723\") " pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.973747 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d90f263f-8adf-405d-b1d2-6f00583e4723-v4-0-config-user-template-login\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: \"d90f263f-8adf-405d-b1d2-6f00583e4723\") " pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.973762 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d90f263f-8adf-405d-b1d2-6f00583e4723-v4-0-config-system-router-certs\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: \"d90f263f-8adf-405d-b1d2-6f00583e4723\") " pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.973779 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d90f263f-8adf-405d-b1d2-6f00583e4723-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: \"d90f263f-8adf-405d-b1d2-6f00583e4723\") " pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.973788 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d90f263f-8adf-405d-b1d2-6f00583e4723-audit-dir\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: \"d90f263f-8adf-405d-b1d2-6f00583e4723\") " pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.973802 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d90f263f-8adf-405d-b1d2-6f00583e4723-v4-0-config-system-session\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: \"d90f263f-8adf-405d-b1d2-6f00583e4723\") " pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.973877 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d90f263f-8adf-405d-b1d2-6f00583e4723-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: \"d90f263f-8adf-405d-b1d2-6f00583e4723\") " pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.973938 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d90f263f-8adf-405d-b1d2-6f00583e4723-v4-0-config-system-service-ca\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: \"d90f263f-8adf-405d-b1d2-6f00583e4723\") " 
pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.973959 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d90f263f-8adf-405d-b1d2-6f00583e4723-audit-policies\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: \"d90f263f-8adf-405d-b1d2-6f00583e4723\") " pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.973980 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d90f263f-8adf-405d-b1d2-6f00583e4723-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: \"d90f263f-8adf-405d-b1d2-6f00583e4723\") " pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.974686 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d90f263f-8adf-405d-b1d2-6f00583e4723-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: \"d90f263f-8adf-405d-b1d2-6f00583e4723\") " pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.974725 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d90f263f-8adf-405d-b1d2-6f00583e4723-v4-0-config-system-service-ca\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: \"d90f263f-8adf-405d-b1d2-6f00583e4723\") " pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.975310 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/d90f263f-8adf-405d-b1d2-6f00583e4723-audit-policies\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: \"d90f263f-8adf-405d-b1d2-6f00583e4723\") " pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.976811 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d90f263f-8adf-405d-b1d2-6f00583e4723-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: \"d90f263f-8adf-405d-b1d2-6f00583e4723\") " pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.978381 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.978943 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.980532 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d90f263f-8adf-405d-b1d2-6f00583e4723-v4-0-config-user-template-error\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: \"d90f263f-8adf-405d-b1d2-6f00583e4723\") " pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.981076 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d90f263f-8adf-405d-b1d2-6f00583e4723-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: \"d90f263f-8adf-405d-b1d2-6f00583e4723\") " pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc kubenswrapper[4795]: 
I1129 07:44:31.981539 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d90f263f-8adf-405d-b1d2-6f00583e4723-v4-0-config-system-serving-cert\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: \"d90f263f-8adf-405d-b1d2-6f00583e4723\") " pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.982753 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d90f263f-8adf-405d-b1d2-6f00583e4723-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: \"d90f263f-8adf-405d-b1d2-6f00583e4723\") " pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.983116 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d90f263f-8adf-405d-b1d2-6f00583e4723-v4-0-config-user-template-login\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: \"d90f263f-8adf-405d-b1d2-6f00583e4723\") " pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.983344 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d90f263f-8adf-405d-b1d2-6f00583e4723-v4-0-config-system-session\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: \"d90f263f-8adf-405d-b1d2-6f00583e4723\") " pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.984606 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/d90f263f-8adf-405d-b1d2-6f00583e4723-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: \"d90f263f-8adf-405d-b1d2-6f00583e4723\") " pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.985003 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d90f263f-8adf-405d-b1d2-6f00583e4723-v4-0-config-system-router-certs\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: \"d90f263f-8adf-405d-b1d2-6f00583e4723\") " pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:31 crc kubenswrapper[4795]: I1129 07:44:31.995946 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rvfr\" (UniqueName: \"kubernetes.io/projected/d90f263f-8adf-405d-b1d2-6f00583e4723-kube-api-access-7rvfr\") pod \"oauth-openshift-d846b65cc-9mx9r\" (UID: \"d90f263f-8adf-405d-b1d2-6f00583e4723\") " pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:32 crc kubenswrapper[4795]: I1129 07:44:32.036644 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 29 07:44:32 crc kubenswrapper[4795]: I1129 07:44:32.095207 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 29 07:44:32 crc kubenswrapper[4795]: I1129 07:44:32.098270 4795 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 29 07:44:32 crc kubenswrapper[4795]: I1129 07:44:32.103343 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 29 07:44:32 crc kubenswrapper[4795]: I1129 07:44:32.109552 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:32 crc kubenswrapper[4795]: I1129 07:44:32.155550 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 29 07:44:32 crc kubenswrapper[4795]: I1129 07:44:32.228348 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 29 07:44:32 crc kubenswrapper[4795]: I1129 07:44:32.285523 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34ec03b0-6d17-4934-84fa-4323c7599fd0" path="/var/lib/kubelet/pods/34ec03b0-6d17-4934-84fa-4323c7599fd0/volumes" Nov 29 07:44:32 crc kubenswrapper[4795]: I1129 07:44:32.286651 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc49b5c4-ec36-4051-b975-25f1e872a6f7" path="/var/lib/kubelet/pods/cc49b5c4-ec36-4051-b975-25f1e872a6f7/volumes" Nov 29 07:44:32 crc kubenswrapper[4795]: I1129 07:44:32.287870 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1461195-363d-49f5-b5c2-61b99f16ece2" path="/var/lib/kubelet/pods/e1461195-363d-49f5-b5c2-61b99f16ece2/volumes" Nov 29 07:44:32 crc kubenswrapper[4795]: I1129 07:44:32.314321 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 29 07:44:32 crc kubenswrapper[4795]: I1129 07:44:32.386441 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 29 07:44:32 crc kubenswrapper[4795]: I1129 07:44:32.402226 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 29 07:44:32 crc kubenswrapper[4795]: I1129 07:44:32.446214 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" 
Nov 29 07:44:32 crc kubenswrapper[4795]: I1129 07:44:32.508039 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 29 07:44:32 crc kubenswrapper[4795]: I1129 07:44:32.608779 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 29 07:44:32 crc kubenswrapper[4795]: I1129 07:44:32.747376 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 29 07:44:32 crc kubenswrapper[4795]: I1129 07:44:32.756864 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 29 07:44:32 crc kubenswrapper[4795]: I1129 07:44:32.766785 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 29 07:44:32 crc kubenswrapper[4795]: I1129 07:44:32.770536 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 29 07:44:32 crc kubenswrapper[4795]: I1129 07:44:32.805679 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 29 07:44:32 crc kubenswrapper[4795]: I1129 07:44:32.957868 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 29 07:44:33 crc kubenswrapper[4795]: I1129 07:44:33.007409 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 29 07:44:33 crc kubenswrapper[4795]: I1129 07:44:33.106254 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 29 07:44:33 crc kubenswrapper[4795]: I1129 07:44:33.255247 4795 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-stats-default" Nov 29 07:44:33 crc kubenswrapper[4795]: I1129 07:44:33.262464 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 29 07:44:33 crc kubenswrapper[4795]: I1129 07:44:33.276723 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 29 07:44:33 crc kubenswrapper[4795]: I1129 07:44:33.425261 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 29 07:44:33 crc kubenswrapper[4795]: I1129 07:44:33.436999 4795 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 29 07:44:33 crc kubenswrapper[4795]: I1129 07:44:33.497717 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 29 07:44:33 crc kubenswrapper[4795]: I1129 07:44:33.680112 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 29 07:44:33 crc kubenswrapper[4795]: I1129 07:44:33.753207 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 29 07:44:33 crc kubenswrapper[4795]: I1129 07:44:33.838757 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 29 07:44:33 crc kubenswrapper[4795]: I1129 07:44:33.881312 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 29 07:44:33 crc kubenswrapper[4795]: I1129 07:44:33.889685 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 29 
07:44:33 crc kubenswrapper[4795]: I1129 07:44:33.891085 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 29 07:44:33 crc kubenswrapper[4795]: I1129 07:44:33.963468 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 29 07:44:33 crc kubenswrapper[4795]: I1129 07:44:33.988504 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 29 07:44:33 crc kubenswrapper[4795]: I1129 07:44:33.991935 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 29 07:44:34 crc kubenswrapper[4795]: I1129 07:44:34.070399 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 29 07:44:34 crc kubenswrapper[4795]: I1129 07:44:34.173751 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 29 07:44:34 crc kubenswrapper[4795]: I1129 07:44:34.253728 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 29 07:44:34 crc kubenswrapper[4795]: I1129 07:44:34.311310 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 29 07:44:34 crc kubenswrapper[4795]: I1129 07:44:34.316872 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 29 07:44:34 crc kubenswrapper[4795]: I1129 07:44:34.430818 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 29 07:44:34 crc kubenswrapper[4795]: I1129 07:44:34.446410 4795 reflector.go:368] Caches populated for *v1.Secret from 
object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 29 07:44:34 crc kubenswrapper[4795]: I1129 07:44:34.469602 4795 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 29 07:44:34 crc kubenswrapper[4795]: I1129 07:44:34.469930 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://3c9879552b54392c76220a72d52c526330975ac85ba968bd283dd2caeb0b7f7d" gracePeriod=5 Nov 29 07:44:34 crc kubenswrapper[4795]: I1129 07:44:34.561117 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 29 07:44:34 crc kubenswrapper[4795]: I1129 07:44:34.597704 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 29 07:44:34 crc kubenswrapper[4795]: I1129 07:44:34.611961 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 29 07:44:34 crc kubenswrapper[4795]: I1129 07:44:34.623278 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 29 07:44:34 crc kubenswrapper[4795]: I1129 07:44:34.675409 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 29 07:44:34 crc kubenswrapper[4795]: I1129 07:44:34.688628 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 29 07:44:34 crc kubenswrapper[4795]: I1129 07:44:34.808365 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 29 07:44:34 crc kubenswrapper[4795]: I1129 07:44:34.816029 4795 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-d846b65cc-9mx9r"] Nov 29 07:44:34 crc kubenswrapper[4795]: I1129 07:44:34.832766 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 29 07:44:34 crc kubenswrapper[4795]: I1129 07:44:34.853314 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 29 07:44:34 crc kubenswrapper[4795]: I1129 07:44:34.853367 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 29 07:44:34 crc kubenswrapper[4795]: I1129 07:44:34.882876 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 29 07:44:35 crc kubenswrapper[4795]: I1129 07:44:35.007800 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 29 07:44:35 crc kubenswrapper[4795]: I1129 07:44:35.108575 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 29 07:44:35 crc kubenswrapper[4795]: I1129 07:44:35.146884 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 29 07:44:35 crc kubenswrapper[4795]: I1129 07:44:35.149135 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-d846b65cc-9mx9r"] Nov 29 07:44:35 crc kubenswrapper[4795]: I1129 07:44:35.158150 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 29 07:44:35 crc kubenswrapper[4795]: I1129 07:44:35.207656 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 29 07:44:35 crc kubenswrapper[4795]: I1129 07:44:35.225093 4795 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 29 07:44:35 crc kubenswrapper[4795]: I1129 07:44:35.333180 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 29 07:44:35 crc kubenswrapper[4795]: I1129 07:44:35.375237 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 29 07:44:35 crc kubenswrapper[4795]: I1129 07:44:35.402342 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" event={"ID":"d90f263f-8adf-405d-b1d2-6f00583e4723","Type":"ContainerStarted","Data":"651204baa2289a69e2f2faa5daa3b4871b75b65712aada20133ddf426c9cfd9a"} Nov 29 07:44:35 crc kubenswrapper[4795]: I1129 07:44:35.584199 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 29 07:44:35 crc kubenswrapper[4795]: I1129 07:44:35.781417 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 29 07:44:35 crc kubenswrapper[4795]: I1129 07:44:35.802654 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 29 07:44:36 crc kubenswrapper[4795]: I1129 07:44:36.081313 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 29 07:44:36 crc kubenswrapper[4795]: I1129 07:44:36.113371 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 29 07:44:36 crc kubenswrapper[4795]: I1129 07:44:36.140214 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 29 
07:44:36 crc kubenswrapper[4795]: I1129 07:44:36.209932 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 29 07:44:36 crc kubenswrapper[4795]: I1129 07:44:36.302506 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 29 07:44:36 crc kubenswrapper[4795]: I1129 07:44:36.337749 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 29 07:44:36 crc kubenswrapper[4795]: I1129 07:44:36.376518 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 29 07:44:36 crc kubenswrapper[4795]: I1129 07:44:36.412051 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" event={"ID":"d90f263f-8adf-405d-b1d2-6f00583e4723","Type":"ContainerStarted","Data":"0cdf2315a76e35bfcc3fae398310313b94efbb8b7abdee80e0792adaac480dad"} Nov 29 07:44:36 crc kubenswrapper[4795]: I1129 07:44:36.412846 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:36 crc kubenswrapper[4795]: I1129 07:44:36.419276 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" Nov 29 07:44:36 crc kubenswrapper[4795]: I1129 07:44:36.444673 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 29 07:44:36 crc kubenswrapper[4795]: I1129 07:44:36.456965 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 29 07:44:36 crc kubenswrapper[4795]: I1129 07:44:36.464245 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-d846b65cc-9mx9r" 
podStartSLOduration=46.464188262 podStartE2EDuration="46.464188262s" podCreationTimestamp="2025-11-29 07:43:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:44:36.45602986 +0000 UTC m=+322.431605710" watchObservedRunningTime="2025-11-29 07:44:36.464188262 +0000 UTC m=+322.439764062" Nov 29 07:44:36 crc kubenswrapper[4795]: I1129 07:44:36.534704 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 29 07:44:36 crc kubenswrapper[4795]: I1129 07:44:36.640000 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 29 07:44:36 crc kubenswrapper[4795]: I1129 07:44:36.653282 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 29 07:44:37 crc kubenswrapper[4795]: I1129 07:44:37.091457 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 29 07:44:37 crc kubenswrapper[4795]: I1129 07:44:37.173451 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 29 07:44:37 crc kubenswrapper[4795]: I1129 07:44:37.173570 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 29 07:44:37 crc kubenswrapper[4795]: I1129 07:44:37.415057 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 29 07:44:37 crc kubenswrapper[4795]: I1129 07:44:37.596381 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 29 07:44:37 crc kubenswrapper[4795]: I1129 07:44:37.651856 4795 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 29 07:44:37 crc kubenswrapper[4795]: I1129 07:44:37.674682 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 29 07:44:37 crc kubenswrapper[4795]: I1129 07:44:37.829338 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 29 07:44:38 crc kubenswrapper[4795]: I1129 07:44:38.006491 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 29 07:44:38 crc kubenswrapper[4795]: I1129 07:44:38.835304 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 29 07:44:39 crc kubenswrapper[4795]: I1129 07:44:39.591600 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 29 07:44:39 crc kubenswrapper[4795]: I1129 07:44:39.623808 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 29 07:44:40 crc kubenswrapper[4795]: I1129 07:44:40.019157 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 29 07:44:40 crc kubenswrapper[4795]: I1129 07:44:40.239062 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 29 07:44:40 crc kubenswrapper[4795]: I1129 07:44:40.239162 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:44:40 crc kubenswrapper[4795]: I1129 07:44:40.280680 4795 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Nov 29 07:44:40 crc kubenswrapper[4795]: I1129 07:44:40.303259 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 29 07:44:40 crc kubenswrapper[4795]: I1129 07:44:40.303305 4795 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="8cbe0c06-dddc-4e15-865b-08e97397137b" Nov 29 07:44:40 crc kubenswrapper[4795]: I1129 07:44:40.303333 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 29 07:44:40 crc kubenswrapper[4795]: I1129 07:44:40.303347 4795 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="8cbe0c06-dddc-4e15-865b-08e97397137b" Nov 29 07:44:40 crc kubenswrapper[4795]: I1129 07:44:40.384949 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 29 07:44:40 crc kubenswrapper[4795]: I1129 07:44:40.385072 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 29 07:44:40 crc kubenswrapper[4795]: I1129 07:44:40.385098 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 29 07:44:40 crc kubenswrapper[4795]: I1129 07:44:40.385108 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:44:40 crc kubenswrapper[4795]: I1129 07:44:40.385179 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:44:40 crc kubenswrapper[4795]: I1129 07:44:40.385238 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 29 07:44:40 crc kubenswrapper[4795]: I1129 07:44:40.385263 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 29 07:44:40 crc kubenswrapper[4795]: I1129 07:44:40.385298 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). 
InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:44:40 crc kubenswrapper[4795]: I1129 07:44:40.385511 4795 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:40 crc kubenswrapper[4795]: I1129 07:44:40.385529 4795 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:40 crc kubenswrapper[4795]: I1129 07:44:40.385540 4795 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:40 crc kubenswrapper[4795]: I1129 07:44:40.385653 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:44:40 crc kubenswrapper[4795]: I1129 07:44:40.396758 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:44:40 crc kubenswrapper[4795]: I1129 07:44:40.438944 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 29 07:44:40 crc kubenswrapper[4795]: I1129 07:44:40.439032 4795 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="3c9879552b54392c76220a72d52c526330975ac85ba968bd283dd2caeb0b7f7d" exitCode=137 Nov 29 07:44:40 crc kubenswrapper[4795]: I1129 07:44:40.439106 4795 scope.go:117] "RemoveContainer" containerID="3c9879552b54392c76220a72d52c526330975ac85ba968bd283dd2caeb0b7f7d" Nov 29 07:44:40 crc kubenswrapper[4795]: I1129 07:44:40.439130 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:44:40 crc kubenswrapper[4795]: I1129 07:44:40.456792 4795 scope.go:117] "RemoveContainer" containerID="3c9879552b54392c76220a72d52c526330975ac85ba968bd283dd2caeb0b7f7d" Nov 29 07:44:40 crc kubenswrapper[4795]: E1129 07:44:40.457236 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c9879552b54392c76220a72d52c526330975ac85ba968bd283dd2caeb0b7f7d\": container with ID starting with 3c9879552b54392c76220a72d52c526330975ac85ba968bd283dd2caeb0b7f7d not found: ID does not exist" containerID="3c9879552b54392c76220a72d52c526330975ac85ba968bd283dd2caeb0b7f7d" Nov 29 07:44:40 crc kubenswrapper[4795]: I1129 07:44:40.457269 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c9879552b54392c76220a72d52c526330975ac85ba968bd283dd2caeb0b7f7d"} err="failed to get container status \"3c9879552b54392c76220a72d52c526330975ac85ba968bd283dd2caeb0b7f7d\": rpc error: code = NotFound desc = could not find container 
\"3c9879552b54392c76220a72d52c526330975ac85ba968bd283dd2caeb0b7f7d\": container with ID starting with 3c9879552b54392c76220a72d52c526330975ac85ba968bd283dd2caeb0b7f7d not found: ID does not exist" Nov 29 07:44:40 crc kubenswrapper[4795]: I1129 07:44:40.486984 4795 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:40 crc kubenswrapper[4795]: I1129 07:44:40.487008 4795 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:42 crc kubenswrapper[4795]: I1129 07:44:42.281389 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Nov 29 07:44:59 crc kubenswrapper[4795]: I1129 07:44:59.759877 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-sjwfz"] Nov 29 07:44:59 crc kubenswrapper[4795]: I1129 07:44:59.760550 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-sjwfz" podUID="099aeef3-9f86-47a1-bc18-3784b8e87bcd" containerName="controller-manager" containerID="cri-o://ebf74afe6c83cd03b3a294268c80a8e5451ce88c9a81433e530df92fd69e6d41" gracePeriod=30 Nov 29 07:44:59 crc kubenswrapper[4795]: I1129 07:44:59.860803 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfx9d"] Nov 29 07:44:59 crc kubenswrapper[4795]: I1129 07:44:59.861007 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfx9d" 
podUID="84c50c19-d82f-444f-9558-6f9932e3ff86" containerName="route-controller-manager" containerID="cri-o://1514eb0604c20cb1a9a9f512ad022da757b52d39a5e26e669af5ba95c43181d3" gracePeriod=30 Nov 29 07:45:00 crc kubenswrapper[4795]: I1129 07:45:00.177504 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406705-89jdp"] Nov 29 07:45:00 crc kubenswrapper[4795]: E1129 07:45:00.177777 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 29 07:45:00 crc kubenswrapper[4795]: I1129 07:45:00.177794 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 29 07:45:00 crc kubenswrapper[4795]: I1129 07:45:00.177922 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 29 07:45:00 crc kubenswrapper[4795]: I1129 07:45:00.178361 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-89jdp" Nov 29 07:45:00 crc kubenswrapper[4795]: I1129 07:45:00.180659 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 29 07:45:00 crc kubenswrapper[4795]: I1129 07:45:00.184421 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406705-89jdp"] Nov 29 07:45:00 crc kubenswrapper[4795]: I1129 07:45:00.185523 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 29 07:45:00 crc kubenswrapper[4795]: I1129 07:45:00.335102 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74935be3-c96a-49da-97c0-7543a4217bd2-secret-volume\") pod \"collect-profiles-29406705-89jdp\" (UID: \"74935be3-c96a-49da-97c0-7543a4217bd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-89jdp" Nov 29 07:45:00 crc kubenswrapper[4795]: I1129 07:45:00.335189 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbnxw\" (UniqueName: \"kubernetes.io/projected/74935be3-c96a-49da-97c0-7543a4217bd2-kube-api-access-hbnxw\") pod \"collect-profiles-29406705-89jdp\" (UID: \"74935be3-c96a-49da-97c0-7543a4217bd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-89jdp" Nov 29 07:45:00 crc kubenswrapper[4795]: I1129 07:45:00.335229 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74935be3-c96a-49da-97c0-7543a4217bd2-config-volume\") pod \"collect-profiles-29406705-89jdp\" (UID: \"74935be3-c96a-49da-97c0-7543a4217bd2\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-89jdp" Nov 29 07:45:00 crc kubenswrapper[4795]: I1129 07:45:00.436090 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74935be3-c96a-49da-97c0-7543a4217bd2-config-volume\") pod \"collect-profiles-29406705-89jdp\" (UID: \"74935be3-c96a-49da-97c0-7543a4217bd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-89jdp" Nov 29 07:45:00 crc kubenswrapper[4795]: I1129 07:45:00.436173 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74935be3-c96a-49da-97c0-7543a4217bd2-secret-volume\") pod \"collect-profiles-29406705-89jdp\" (UID: \"74935be3-c96a-49da-97c0-7543a4217bd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-89jdp" Nov 29 07:45:00 crc kubenswrapper[4795]: I1129 07:45:00.436220 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbnxw\" (UniqueName: \"kubernetes.io/projected/74935be3-c96a-49da-97c0-7543a4217bd2-kube-api-access-hbnxw\") pod \"collect-profiles-29406705-89jdp\" (UID: \"74935be3-c96a-49da-97c0-7543a4217bd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-89jdp" Nov 29 07:45:00 crc kubenswrapper[4795]: I1129 07:45:00.437086 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74935be3-c96a-49da-97c0-7543a4217bd2-config-volume\") pod \"collect-profiles-29406705-89jdp\" (UID: \"74935be3-c96a-49da-97c0-7543a4217bd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-89jdp" Nov 29 07:45:00 crc kubenswrapper[4795]: I1129 07:45:00.443078 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/74935be3-c96a-49da-97c0-7543a4217bd2-secret-volume\") pod \"collect-profiles-29406705-89jdp\" (UID: \"74935be3-c96a-49da-97c0-7543a4217bd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-89jdp" Nov 29 07:45:00 crc kubenswrapper[4795]: I1129 07:45:00.453856 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbnxw\" (UniqueName: \"kubernetes.io/projected/74935be3-c96a-49da-97c0-7543a4217bd2-kube-api-access-hbnxw\") pod \"collect-profiles-29406705-89jdp\" (UID: \"74935be3-c96a-49da-97c0-7543a4217bd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-89jdp" Nov 29 07:45:00 crc kubenswrapper[4795]: I1129 07:45:00.491720 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-89jdp" Nov 29 07:45:00 crc kubenswrapper[4795]: I1129 07:45:00.585561 4795 generic.go:334] "Generic (PLEG): container finished" podID="84c50c19-d82f-444f-9558-6f9932e3ff86" containerID="1514eb0604c20cb1a9a9f512ad022da757b52d39a5e26e669af5ba95c43181d3" exitCode=0 Nov 29 07:45:00 crc kubenswrapper[4795]: I1129 07:45:00.585774 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfx9d" event={"ID":"84c50c19-d82f-444f-9558-6f9932e3ff86","Type":"ContainerDied","Data":"1514eb0604c20cb1a9a9f512ad022da757b52d39a5e26e669af5ba95c43181d3"} Nov 29 07:45:00 crc kubenswrapper[4795]: I1129 07:45:00.597732 4795 generic.go:334] "Generic (PLEG): container finished" podID="099aeef3-9f86-47a1-bc18-3784b8e87bcd" containerID="ebf74afe6c83cd03b3a294268c80a8e5451ce88c9a81433e530df92fd69e6d41" exitCode=0 Nov 29 07:45:00 crc kubenswrapper[4795]: I1129 07:45:00.597749 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-sjwfz" 
event={"ID":"099aeef3-9f86-47a1-bc18-3784b8e87bcd","Type":"ContainerDied","Data":"ebf74afe6c83cd03b3a294268c80a8e5451ce88c9a81433e530df92fd69e6d41"} Nov 29 07:45:00 crc kubenswrapper[4795]: I1129 07:45:00.698631 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406705-89jdp"] Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.350177 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-sjwfz" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.382975 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d48c458cb-mdstf"] Nov 29 07:45:01 crc kubenswrapper[4795]: E1129 07:45:01.383192 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="099aeef3-9f86-47a1-bc18-3784b8e87bcd" containerName="controller-manager" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.383204 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="099aeef3-9f86-47a1-bc18-3784b8e87bcd" containerName="controller-manager" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.383302 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="099aeef3-9f86-47a1-bc18-3784b8e87bcd" containerName="controller-manager" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.383693 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d48c458cb-mdstf" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.393932 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d48c458cb-mdstf"] Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.436222 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfx9d" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.547862 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84c50c19-d82f-444f-9558-6f9932e3ff86-client-ca\") pod \"84c50c19-d82f-444f-9558-6f9932e3ff86\" (UID: \"84c50c19-d82f-444f-9558-6f9932e3ff86\") " Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.547942 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdzp2\" (UniqueName: \"kubernetes.io/projected/099aeef3-9f86-47a1-bc18-3784b8e87bcd-kube-api-access-zdzp2\") pod \"099aeef3-9f86-47a1-bc18-3784b8e87bcd\" (UID: \"099aeef3-9f86-47a1-bc18-3784b8e87bcd\") " Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.547976 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7v8x\" (UniqueName: \"kubernetes.io/projected/84c50c19-d82f-444f-9558-6f9932e3ff86-kube-api-access-f7v8x\") pod \"84c50c19-d82f-444f-9558-6f9932e3ff86\" (UID: \"84c50c19-d82f-444f-9558-6f9932e3ff86\") " Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.548000 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84c50c19-d82f-444f-9558-6f9932e3ff86-serving-cert\") pod \"84c50c19-d82f-444f-9558-6f9932e3ff86\" (UID: \"84c50c19-d82f-444f-9558-6f9932e3ff86\") " Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.548032 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/099aeef3-9f86-47a1-bc18-3784b8e87bcd-proxy-ca-bundles\") pod \"099aeef3-9f86-47a1-bc18-3784b8e87bcd\" (UID: \"099aeef3-9f86-47a1-bc18-3784b8e87bcd\") " Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.548057 4795 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/099aeef3-9f86-47a1-bc18-3784b8e87bcd-client-ca\") pod \"099aeef3-9f86-47a1-bc18-3784b8e87bcd\" (UID: \"099aeef3-9f86-47a1-bc18-3784b8e87bcd\") " Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.548125 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/099aeef3-9f86-47a1-bc18-3784b8e87bcd-config\") pod \"099aeef3-9f86-47a1-bc18-3784b8e87bcd\" (UID: \"099aeef3-9f86-47a1-bc18-3784b8e87bcd\") " Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.548179 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84c50c19-d82f-444f-9558-6f9932e3ff86-config\") pod \"84c50c19-d82f-444f-9558-6f9932e3ff86\" (UID: \"84c50c19-d82f-444f-9558-6f9932e3ff86\") " Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.548208 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/099aeef3-9f86-47a1-bc18-3784b8e87bcd-serving-cert\") pod \"099aeef3-9f86-47a1-bc18-3784b8e87bcd\" (UID: \"099aeef3-9f86-47a1-bc18-3784b8e87bcd\") " Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.548370 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df6ce8d7-89a4-46f1-a10c-5f86db452a20-serving-cert\") pod \"controller-manager-d48c458cb-mdstf\" (UID: \"df6ce8d7-89a4-46f1-a10c-5f86db452a20\") " pod="openshift-controller-manager/controller-manager-d48c458cb-mdstf" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.548411 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df6ce8d7-89a4-46f1-a10c-5f86db452a20-config\") pod 
\"controller-manager-d48c458cb-mdstf\" (UID: \"df6ce8d7-89a4-46f1-a10c-5f86db452a20\") " pod="openshift-controller-manager/controller-manager-d48c458cb-mdstf" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.548457 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df6ce8d7-89a4-46f1-a10c-5f86db452a20-client-ca\") pod \"controller-manager-d48c458cb-mdstf\" (UID: \"df6ce8d7-89a4-46f1-a10c-5f86db452a20\") " pod="openshift-controller-manager/controller-manager-d48c458cb-mdstf" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.548473 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df6ce8d7-89a4-46f1-a10c-5f86db452a20-proxy-ca-bundles\") pod \"controller-manager-d48c458cb-mdstf\" (UID: \"df6ce8d7-89a4-46f1-a10c-5f86db452a20\") " pod="openshift-controller-manager/controller-manager-d48c458cb-mdstf" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.548525 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjcvm\" (UniqueName: \"kubernetes.io/projected/df6ce8d7-89a4-46f1-a10c-5f86db452a20-kube-api-access-sjcvm\") pod \"controller-manager-d48c458cb-mdstf\" (UID: \"df6ce8d7-89a4-46f1-a10c-5f86db452a20\") " pod="openshift-controller-manager/controller-manager-d48c458cb-mdstf" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.548625 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84c50c19-d82f-444f-9558-6f9932e3ff86-client-ca" (OuterVolumeSpecName: "client-ca") pod "84c50c19-d82f-444f-9558-6f9932e3ff86" (UID: "84c50c19-d82f-444f-9558-6f9932e3ff86"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.549122 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84c50c19-d82f-444f-9558-6f9932e3ff86-config" (OuterVolumeSpecName: "config") pod "84c50c19-d82f-444f-9558-6f9932e3ff86" (UID: "84c50c19-d82f-444f-9558-6f9932e3ff86"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.549267 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/099aeef3-9f86-47a1-bc18-3784b8e87bcd-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "099aeef3-9f86-47a1-bc18-3784b8e87bcd" (UID: "099aeef3-9f86-47a1-bc18-3784b8e87bcd"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.549303 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/099aeef3-9f86-47a1-bc18-3784b8e87bcd-config" (OuterVolumeSpecName: "config") pod "099aeef3-9f86-47a1-bc18-3784b8e87bcd" (UID: "099aeef3-9f86-47a1-bc18-3784b8e87bcd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.549622 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/099aeef3-9f86-47a1-bc18-3784b8e87bcd-client-ca" (OuterVolumeSpecName: "client-ca") pod "099aeef3-9f86-47a1-bc18-3784b8e87bcd" (UID: "099aeef3-9f86-47a1-bc18-3784b8e87bcd"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.553860 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/099aeef3-9f86-47a1-bc18-3784b8e87bcd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "099aeef3-9f86-47a1-bc18-3784b8e87bcd" (UID: "099aeef3-9f86-47a1-bc18-3784b8e87bcd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.553980 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84c50c19-d82f-444f-9558-6f9932e3ff86-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "84c50c19-d82f-444f-9558-6f9932e3ff86" (UID: "84c50c19-d82f-444f-9558-6f9932e3ff86"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.553965 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84c50c19-d82f-444f-9558-6f9932e3ff86-kube-api-access-f7v8x" (OuterVolumeSpecName: "kube-api-access-f7v8x") pod "84c50c19-d82f-444f-9558-6f9932e3ff86" (UID: "84c50c19-d82f-444f-9558-6f9932e3ff86"). InnerVolumeSpecName "kube-api-access-f7v8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.556092 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/099aeef3-9f86-47a1-bc18-3784b8e87bcd-kube-api-access-zdzp2" (OuterVolumeSpecName: "kube-api-access-zdzp2") pod "099aeef3-9f86-47a1-bc18-3784b8e87bcd" (UID: "099aeef3-9f86-47a1-bc18-3784b8e87bcd"). InnerVolumeSpecName "kube-api-access-zdzp2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.603061 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfx9d" event={"ID":"84c50c19-d82f-444f-9558-6f9932e3ff86","Type":"ContainerDied","Data":"353fc410fff6bbd6da1fafe3aa492b30c49b74953879186129640a95724c6a94"} Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.603091 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfx9d" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.603115 4795 scope.go:117] "RemoveContainer" containerID="1514eb0604c20cb1a9a9f512ad022da757b52d39a5e26e669af5ba95c43181d3" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.604727 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-sjwfz" event={"ID":"099aeef3-9f86-47a1-bc18-3784b8e87bcd","Type":"ContainerDied","Data":"eb68c6013ef112ccd9569693c5fe89a89810c26e6d5906acf1109a5c4261dd6d"} Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.604759 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-sjwfz" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.606127 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-89jdp" event={"ID":"74935be3-c96a-49da-97c0-7543a4217bd2","Type":"ContainerStarted","Data":"d86ccfc33badf6c25e96bbd8a79ec62946be59723467fe91e294d441702dc110"} Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.606170 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-89jdp" event={"ID":"74935be3-c96a-49da-97c0-7543a4217bd2","Type":"ContainerStarted","Data":"1bf407aa4c55438a4fae986d983d24e8a1cc37e34f9e1e9c277e1e44b45e11ac"} Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.619759 4795 scope.go:117] "RemoveContainer" containerID="ebf74afe6c83cd03b3a294268c80a8e5451ce88c9a81433e530df92fd69e6d41" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.632375 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfx9d"] Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.640253 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfx9d"] Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.644783 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-sjwfz"] Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.648503 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-sjwfz"] Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.650281 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/df6ce8d7-89a4-46f1-a10c-5f86db452a20-proxy-ca-bundles\") pod \"controller-manager-d48c458cb-mdstf\" (UID: \"df6ce8d7-89a4-46f1-a10c-5f86db452a20\") " pod="openshift-controller-manager/controller-manager-d48c458cb-mdstf" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.650323 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df6ce8d7-89a4-46f1-a10c-5f86db452a20-client-ca\") pod \"controller-manager-d48c458cb-mdstf\" (UID: \"df6ce8d7-89a4-46f1-a10c-5f86db452a20\") " pod="openshift-controller-manager/controller-manager-d48c458cb-mdstf" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.650368 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjcvm\" (UniqueName: \"kubernetes.io/projected/df6ce8d7-89a4-46f1-a10c-5f86db452a20-kube-api-access-sjcvm\") pod \"controller-manager-d48c458cb-mdstf\" (UID: \"df6ce8d7-89a4-46f1-a10c-5f86db452a20\") " pod="openshift-controller-manager/controller-manager-d48c458cb-mdstf" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.650414 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df6ce8d7-89a4-46f1-a10c-5f86db452a20-serving-cert\") pod \"controller-manager-d48c458cb-mdstf\" (UID: \"df6ce8d7-89a4-46f1-a10c-5f86db452a20\") " pod="openshift-controller-manager/controller-manager-d48c458cb-mdstf" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.650469 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df6ce8d7-89a4-46f1-a10c-5f86db452a20-config\") pod \"controller-manager-d48c458cb-mdstf\" (UID: \"df6ce8d7-89a4-46f1-a10c-5f86db452a20\") " pod="openshift-controller-manager/controller-manager-d48c458cb-mdstf" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.650534 4795 
reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/099aeef3-9f86-47a1-bc18-3784b8e87bcd-client-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.650550 4795 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/099aeef3-9f86-47a1-bc18-3784b8e87bcd-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.650561 4795 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/099aeef3-9f86-47a1-bc18-3784b8e87bcd-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.650570 4795 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84c50c19-d82f-444f-9558-6f9932e3ff86-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.650579 4795 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/099aeef3-9f86-47a1-bc18-3784b8e87bcd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.650612 4795 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84c50c19-d82f-444f-9558-6f9932e3ff86-client-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.650623 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdzp2\" (UniqueName: \"kubernetes.io/projected/099aeef3-9f86-47a1-bc18-3784b8e87bcd-kube-api-access-zdzp2\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.650631 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7v8x\" (UniqueName: 
\"kubernetes.io/projected/84c50c19-d82f-444f-9558-6f9932e3ff86-kube-api-access-f7v8x\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.650640 4795 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84c50c19-d82f-444f-9558-6f9932e3ff86-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.651759 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df6ce8d7-89a4-46f1-a10c-5f86db452a20-proxy-ca-bundles\") pod \"controller-manager-d48c458cb-mdstf\" (UID: \"df6ce8d7-89a4-46f1-a10c-5f86db452a20\") " pod="openshift-controller-manager/controller-manager-d48c458cb-mdstf" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.653157 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df6ce8d7-89a4-46f1-a10c-5f86db452a20-config\") pod \"controller-manager-d48c458cb-mdstf\" (UID: \"df6ce8d7-89a4-46f1-a10c-5f86db452a20\") " pod="openshift-controller-manager/controller-manager-d48c458cb-mdstf" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.653312 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df6ce8d7-89a4-46f1-a10c-5f86db452a20-client-ca\") pod \"controller-manager-d48c458cb-mdstf\" (UID: \"df6ce8d7-89a4-46f1-a10c-5f86db452a20\") " pod="openshift-controller-manager/controller-manager-d48c458cb-mdstf" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.655930 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df6ce8d7-89a4-46f1-a10c-5f86db452a20-serving-cert\") pod \"controller-manager-d48c458cb-mdstf\" (UID: \"df6ce8d7-89a4-46f1-a10c-5f86db452a20\") " 
pod="openshift-controller-manager/controller-manager-d48c458cb-mdstf" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.667246 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjcvm\" (UniqueName: \"kubernetes.io/projected/df6ce8d7-89a4-46f1-a10c-5f86db452a20-kube-api-access-sjcvm\") pod \"controller-manager-d48c458cb-mdstf\" (UID: \"df6ce8d7-89a4-46f1-a10c-5f86db452a20\") " pod="openshift-controller-manager/controller-manager-d48c458cb-mdstf" Nov 29 07:45:01 crc kubenswrapper[4795]: I1129 07:45:01.729983 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d48c458cb-mdstf" Nov 29 07:45:02 crc kubenswrapper[4795]: I1129 07:45:02.179060 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d48c458cb-mdstf"] Nov 29 07:45:02 crc kubenswrapper[4795]: I1129 07:45:02.282833 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="099aeef3-9f86-47a1-bc18-3784b8e87bcd" path="/var/lib/kubelet/pods/099aeef3-9f86-47a1-bc18-3784b8e87bcd/volumes" Nov 29 07:45:02 crc kubenswrapper[4795]: I1129 07:45:02.283765 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84c50c19-d82f-444f-9558-6f9932e3ff86" path="/var/lib/kubelet/pods/84c50c19-d82f-444f-9558-6f9932e3ff86/volumes" Nov 29 07:45:02 crc kubenswrapper[4795]: I1129 07:45:02.621516 4795 generic.go:334] "Generic (PLEG): container finished" podID="74935be3-c96a-49da-97c0-7543a4217bd2" containerID="d86ccfc33badf6c25e96bbd8a79ec62946be59723467fe91e294d441702dc110" exitCode=0 Nov 29 07:45:02 crc kubenswrapper[4795]: I1129 07:45:02.621654 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-89jdp" event={"ID":"74935be3-c96a-49da-97c0-7543a4217bd2","Type":"ContainerDied","Data":"d86ccfc33badf6c25e96bbd8a79ec62946be59723467fe91e294d441702dc110"} Nov 29 
07:45:02 crc kubenswrapper[4795]: I1129 07:45:02.624795 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d48c458cb-mdstf" event={"ID":"df6ce8d7-89a4-46f1-a10c-5f86db452a20","Type":"ContainerStarted","Data":"58e1234262f95726bfc4e08a00f0009e12f39b9ba535985dc6fb1ddccb3435de"} Nov 29 07:45:02 crc kubenswrapper[4795]: I1129 07:45:02.624837 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d48c458cb-mdstf" event={"ID":"df6ce8d7-89a4-46f1-a10c-5f86db452a20","Type":"ContainerStarted","Data":"82d81bb817d08c8e5d77874b27a2a41fd0be849fe0f23cf146aece31de24fcb3"} Nov 29 07:45:03 crc kubenswrapper[4795]: I1129 07:45:03.393118 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d5b76bdb6-57zxn"] Nov 29 07:45:03 crc kubenswrapper[4795]: E1129 07:45:03.393350 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84c50c19-d82f-444f-9558-6f9932e3ff86" containerName="route-controller-manager" Nov 29 07:45:03 crc kubenswrapper[4795]: I1129 07:45:03.393361 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="84c50c19-d82f-444f-9558-6f9932e3ff86" containerName="route-controller-manager" Nov 29 07:45:03 crc kubenswrapper[4795]: I1129 07:45:03.393454 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="84c50c19-d82f-444f-9558-6f9932e3ff86" containerName="route-controller-manager" Nov 29 07:45:03 crc kubenswrapper[4795]: I1129 07:45:03.393798 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-57zxn" Nov 29 07:45:03 crc kubenswrapper[4795]: I1129 07:45:03.395813 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 29 07:45:03 crc kubenswrapper[4795]: I1129 07:45:03.395873 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 29 07:45:03 crc kubenswrapper[4795]: I1129 07:45:03.398874 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 29 07:45:03 crc kubenswrapper[4795]: I1129 07:45:03.399107 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 29 07:45:03 crc kubenswrapper[4795]: I1129 07:45:03.399661 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 29 07:45:03 crc kubenswrapper[4795]: I1129 07:45:03.401063 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 29 07:45:03 crc kubenswrapper[4795]: I1129 07:45:03.405430 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d5b76bdb6-57zxn"] Nov 29 07:45:03 crc kubenswrapper[4795]: I1129 07:45:03.573849 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e0e2228-e69b-4e4a-8dcd-675baddab6b8-config\") pod \"route-controller-manager-d5b76bdb6-57zxn\" (UID: \"3e0e2228-e69b-4e4a-8dcd-675baddab6b8\") " pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-57zxn" Nov 29 07:45:03 crc kubenswrapper[4795]: I1129 07:45:03.573930 4795 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e0e2228-e69b-4e4a-8dcd-675baddab6b8-serving-cert\") pod \"route-controller-manager-d5b76bdb6-57zxn\" (UID: \"3e0e2228-e69b-4e4a-8dcd-675baddab6b8\") " pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-57zxn" Nov 29 07:45:03 crc kubenswrapper[4795]: I1129 07:45:03.573957 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e0e2228-e69b-4e4a-8dcd-675baddab6b8-client-ca\") pod \"route-controller-manager-d5b76bdb6-57zxn\" (UID: \"3e0e2228-e69b-4e4a-8dcd-675baddab6b8\") " pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-57zxn" Nov 29 07:45:03 crc kubenswrapper[4795]: I1129 07:45:03.573978 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w766r\" (UniqueName: \"kubernetes.io/projected/3e0e2228-e69b-4e4a-8dcd-675baddab6b8-kube-api-access-w766r\") pod \"route-controller-manager-d5b76bdb6-57zxn\" (UID: \"3e0e2228-e69b-4e4a-8dcd-675baddab6b8\") " pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-57zxn" Nov 29 07:45:03 crc kubenswrapper[4795]: I1129 07:45:03.647454 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d48c458cb-mdstf" podStartSLOduration=3.647435171 podStartE2EDuration="3.647435171s" podCreationTimestamp="2025-11-29 07:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:45:03.645516147 +0000 UTC m=+349.621091927" watchObservedRunningTime="2025-11-29 07:45:03.647435171 +0000 UTC m=+349.623010951" Nov 29 07:45:03 crc kubenswrapper[4795]: I1129 07:45:03.674805 4795 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e0e2228-e69b-4e4a-8dcd-675baddab6b8-serving-cert\") pod \"route-controller-manager-d5b76bdb6-57zxn\" (UID: \"3e0e2228-e69b-4e4a-8dcd-675baddab6b8\") " pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-57zxn" Nov 29 07:45:03 crc kubenswrapper[4795]: I1129 07:45:03.675122 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e0e2228-e69b-4e4a-8dcd-675baddab6b8-client-ca\") pod \"route-controller-manager-d5b76bdb6-57zxn\" (UID: \"3e0e2228-e69b-4e4a-8dcd-675baddab6b8\") " pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-57zxn" Nov 29 07:45:03 crc kubenswrapper[4795]: I1129 07:45:03.675153 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w766r\" (UniqueName: \"kubernetes.io/projected/3e0e2228-e69b-4e4a-8dcd-675baddab6b8-kube-api-access-w766r\") pod \"route-controller-manager-d5b76bdb6-57zxn\" (UID: \"3e0e2228-e69b-4e4a-8dcd-675baddab6b8\") " pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-57zxn" Nov 29 07:45:03 crc kubenswrapper[4795]: I1129 07:45:03.675181 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e0e2228-e69b-4e4a-8dcd-675baddab6b8-config\") pod \"route-controller-manager-d5b76bdb6-57zxn\" (UID: \"3e0e2228-e69b-4e4a-8dcd-675baddab6b8\") " pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-57zxn" Nov 29 07:45:03 crc kubenswrapper[4795]: I1129 07:45:03.676137 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e0e2228-e69b-4e4a-8dcd-675baddab6b8-client-ca\") pod \"route-controller-manager-d5b76bdb6-57zxn\" (UID: \"3e0e2228-e69b-4e4a-8dcd-675baddab6b8\") " 
pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-57zxn" Nov 29 07:45:03 crc kubenswrapper[4795]: I1129 07:45:03.678833 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e0e2228-e69b-4e4a-8dcd-675baddab6b8-config\") pod \"route-controller-manager-d5b76bdb6-57zxn\" (UID: \"3e0e2228-e69b-4e4a-8dcd-675baddab6b8\") " pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-57zxn" Nov 29 07:45:03 crc kubenswrapper[4795]: I1129 07:45:03.683471 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e0e2228-e69b-4e4a-8dcd-675baddab6b8-serving-cert\") pod \"route-controller-manager-d5b76bdb6-57zxn\" (UID: \"3e0e2228-e69b-4e4a-8dcd-675baddab6b8\") " pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-57zxn" Nov 29 07:45:03 crc kubenswrapper[4795]: I1129 07:45:03.697555 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w766r\" (UniqueName: \"kubernetes.io/projected/3e0e2228-e69b-4e4a-8dcd-675baddab6b8-kube-api-access-w766r\") pod \"route-controller-manager-d5b76bdb6-57zxn\" (UID: \"3e0e2228-e69b-4e4a-8dcd-675baddab6b8\") " pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-57zxn" Nov 29 07:45:03 crc kubenswrapper[4795]: I1129 07:45:03.713163 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-57zxn" Nov 29 07:45:03 crc kubenswrapper[4795]: I1129 07:45:03.887697 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-89jdp" Nov 29 07:45:03 crc kubenswrapper[4795]: I1129 07:45:03.941832 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d5b76bdb6-57zxn"] Nov 29 07:45:03 crc kubenswrapper[4795]: I1129 07:45:03.981080 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbnxw\" (UniqueName: \"kubernetes.io/projected/74935be3-c96a-49da-97c0-7543a4217bd2-kube-api-access-hbnxw\") pod \"74935be3-c96a-49da-97c0-7543a4217bd2\" (UID: \"74935be3-c96a-49da-97c0-7543a4217bd2\") " Nov 29 07:45:03 crc kubenswrapper[4795]: I1129 07:45:03.987483 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74935be3-c96a-49da-97c0-7543a4217bd2-kube-api-access-hbnxw" (OuterVolumeSpecName: "kube-api-access-hbnxw") pod "74935be3-c96a-49da-97c0-7543a4217bd2" (UID: "74935be3-c96a-49da-97c0-7543a4217bd2"). InnerVolumeSpecName "kube-api-access-hbnxw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:45:04 crc kubenswrapper[4795]: I1129 07:45:04.081844 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74935be3-c96a-49da-97c0-7543a4217bd2-secret-volume\") pod \"74935be3-c96a-49da-97c0-7543a4217bd2\" (UID: \"74935be3-c96a-49da-97c0-7543a4217bd2\") " Nov 29 07:45:04 crc kubenswrapper[4795]: I1129 07:45:04.081902 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74935be3-c96a-49da-97c0-7543a4217bd2-config-volume\") pod \"74935be3-c96a-49da-97c0-7543a4217bd2\" (UID: \"74935be3-c96a-49da-97c0-7543a4217bd2\") " Nov 29 07:45:04 crc kubenswrapper[4795]: I1129 07:45:04.082225 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbnxw\" (UniqueName: \"kubernetes.io/projected/74935be3-c96a-49da-97c0-7543a4217bd2-kube-api-access-hbnxw\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:04 crc kubenswrapper[4795]: I1129 07:45:04.082867 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74935be3-c96a-49da-97c0-7543a4217bd2-config-volume" (OuterVolumeSpecName: "config-volume") pod "74935be3-c96a-49da-97c0-7543a4217bd2" (UID: "74935be3-c96a-49da-97c0-7543a4217bd2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:45:04 crc kubenswrapper[4795]: I1129 07:45:04.084406 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74935be3-c96a-49da-97c0-7543a4217bd2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "74935be3-c96a-49da-97c0-7543a4217bd2" (UID: "74935be3-c96a-49da-97c0-7543a4217bd2"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:45:04 crc kubenswrapper[4795]: I1129 07:45:04.183053 4795 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74935be3-c96a-49da-97c0-7543a4217bd2-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:04 crc kubenswrapper[4795]: I1129 07:45:04.183091 4795 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74935be3-c96a-49da-97c0-7543a4217bd2-config-volume\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:04 crc kubenswrapper[4795]: I1129 07:45:04.634719 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-89jdp" event={"ID":"74935be3-c96a-49da-97c0-7543a4217bd2","Type":"ContainerDied","Data":"1bf407aa4c55438a4fae986d983d24e8a1cc37e34f9e1e9c277e1e44b45e11ac"} Nov 29 07:45:04 crc kubenswrapper[4795]: I1129 07:45:04.634765 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bf407aa4c55438a4fae986d983d24e8a1cc37e34f9e1e9c277e1e44b45e11ac" Nov 29 07:45:04 crc kubenswrapper[4795]: I1129 07:45:04.635008 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-89jdp" Nov 29 07:45:04 crc kubenswrapper[4795]: I1129 07:45:04.635568 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-57zxn" event={"ID":"3e0e2228-e69b-4e4a-8dcd-675baddab6b8","Type":"ContainerStarted","Data":"e956174ec36b85fbb855f5f666787f7d60193de37e2909a5dba5f8158919722e"} Nov 29 07:45:05 crc kubenswrapper[4795]: I1129 07:45:05.641542 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-57zxn" event={"ID":"3e0e2228-e69b-4e4a-8dcd-675baddab6b8","Type":"ContainerStarted","Data":"06f79af32b7417fd6634c18556fb2ef09fc6422f7957dbdda89b1aa3cdb8bbea"} Nov 29 07:45:05 crc kubenswrapper[4795]: I1129 07:45:05.642810 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-57zxn" Nov 29 07:45:05 crc kubenswrapper[4795]: I1129 07:45:05.648824 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-57zxn" Nov 29 07:45:05 crc kubenswrapper[4795]: I1129 07:45:05.663902 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-57zxn" podStartSLOduration=5.663877432 podStartE2EDuration="5.663877432s" podCreationTimestamp="2025-11-29 07:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:45:05.661189196 +0000 UTC m=+351.636764986" watchObservedRunningTime="2025-11-29 07:45:05.663877432 +0000 UTC m=+351.639453222" Nov 29 07:45:11 crc kubenswrapper[4795]: I1129 07:45:11.730800 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-d48c458cb-mdstf" Nov 29 07:45:11 crc kubenswrapper[4795]: I1129 07:45:11.735040 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d48c458cb-mdstf" Nov 29 07:45:20 crc kubenswrapper[4795]: I1129 07:45:20.231156 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d48c458cb-mdstf"] Nov 29 07:45:20 crc kubenswrapper[4795]: I1129 07:45:20.232004 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-d48c458cb-mdstf" podUID="df6ce8d7-89a4-46f1-a10c-5f86db452a20" containerName="controller-manager" containerID="cri-o://58e1234262f95726bfc4e08a00f0009e12f39b9ba535985dc6fb1ddccb3435de" gracePeriod=30 Nov 29 07:45:21 crc kubenswrapper[4795]: I1129 07:45:21.731817 4795 patch_prober.go:28] interesting pod/controller-manager-d48c458cb-mdstf container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body= Nov 29 07:45:21 crc kubenswrapper[4795]: I1129 07:45:21.732300 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-d48c458cb-mdstf" podUID="df6ce8d7-89a4-46f1-a10c-5f86db452a20" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.673027 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d48c458cb-mdstf" Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.711568 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-85496f666b-wlx89"] Nov 29 07:45:22 crc kubenswrapper[4795]: E1129 07:45:22.711813 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df6ce8d7-89a4-46f1-a10c-5f86db452a20" containerName="controller-manager" Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.711826 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="df6ce8d7-89a4-46f1-a10c-5f86db452a20" containerName="controller-manager" Nov 29 07:45:22 crc kubenswrapper[4795]: E1129 07:45:22.711843 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74935be3-c96a-49da-97c0-7543a4217bd2" containerName="collect-profiles" Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.711851 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="74935be3-c96a-49da-97c0-7543a4217bd2" containerName="collect-profiles" Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.711968 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="74935be3-c96a-49da-97c0-7543a4217bd2" containerName="collect-profiles" Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.711982 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="df6ce8d7-89a4-46f1-a10c-5f86db452a20" containerName="controller-manager" Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.712412 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-85496f666b-wlx89" Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.732662 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-85496f666b-wlx89"] Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.737424 4795 generic.go:334] "Generic (PLEG): container finished" podID="df6ce8d7-89a4-46f1-a10c-5f86db452a20" containerID="58e1234262f95726bfc4e08a00f0009e12f39b9ba535985dc6fb1ddccb3435de" exitCode=0 Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.737437 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d48c458cb-mdstf" event={"ID":"df6ce8d7-89a4-46f1-a10c-5f86db452a20","Type":"ContainerDied","Data":"58e1234262f95726bfc4e08a00f0009e12f39b9ba535985dc6fb1ddccb3435de"} Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.737479 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d48c458cb-mdstf" event={"ID":"df6ce8d7-89a4-46f1-a10c-5f86db452a20","Type":"ContainerDied","Data":"82d81bb817d08c8e5d77874b27a2a41fd0be849fe0f23cf146aece31de24fcb3"} Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.737502 4795 scope.go:117] "RemoveContainer" containerID="58e1234262f95726bfc4e08a00f0009e12f39b9ba535985dc6fb1ddccb3435de" Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.737506 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d48c458cb-mdstf" Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.760384 4795 scope.go:117] "RemoveContainer" containerID="58e1234262f95726bfc4e08a00f0009e12f39b9ba535985dc6fb1ddccb3435de" Nov 29 07:45:22 crc kubenswrapper[4795]: E1129 07:45:22.760967 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58e1234262f95726bfc4e08a00f0009e12f39b9ba535985dc6fb1ddccb3435de\": container with ID starting with 58e1234262f95726bfc4e08a00f0009e12f39b9ba535985dc6fb1ddccb3435de not found: ID does not exist" containerID="58e1234262f95726bfc4e08a00f0009e12f39b9ba535985dc6fb1ddccb3435de" Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.761010 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58e1234262f95726bfc4e08a00f0009e12f39b9ba535985dc6fb1ddccb3435de"} err="failed to get container status \"58e1234262f95726bfc4e08a00f0009e12f39b9ba535985dc6fb1ddccb3435de\": rpc error: code = NotFound desc = could not find container \"58e1234262f95726bfc4e08a00f0009e12f39b9ba535985dc6fb1ddccb3435de\": container with ID starting with 58e1234262f95726bfc4e08a00f0009e12f39b9ba535985dc6fb1ddccb3435de not found: ID does not exist" Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.815331 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df6ce8d7-89a4-46f1-a10c-5f86db452a20-client-ca\") pod \"df6ce8d7-89a4-46f1-a10c-5f86db452a20\" (UID: \"df6ce8d7-89a4-46f1-a10c-5f86db452a20\") " Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.815691 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df6ce8d7-89a4-46f1-a10c-5f86db452a20-serving-cert\") pod \"df6ce8d7-89a4-46f1-a10c-5f86db452a20\" (UID: 
\"df6ce8d7-89a4-46f1-a10c-5f86db452a20\") " Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.815731 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df6ce8d7-89a4-46f1-a10c-5f86db452a20-proxy-ca-bundles\") pod \"df6ce8d7-89a4-46f1-a10c-5f86db452a20\" (UID: \"df6ce8d7-89a4-46f1-a10c-5f86db452a20\") " Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.815797 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df6ce8d7-89a4-46f1-a10c-5f86db452a20-config\") pod \"df6ce8d7-89a4-46f1-a10c-5f86db452a20\" (UID: \"df6ce8d7-89a4-46f1-a10c-5f86db452a20\") " Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.815826 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjcvm\" (UniqueName: \"kubernetes.io/projected/df6ce8d7-89a4-46f1-a10c-5f86db452a20-kube-api-access-sjcvm\") pod \"df6ce8d7-89a4-46f1-a10c-5f86db452a20\" (UID: \"df6ce8d7-89a4-46f1-a10c-5f86db452a20\") " Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.815956 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a64f4e2b-5a7f-4c68-8061-85bdabc8ffa4-config\") pod \"controller-manager-85496f666b-wlx89\" (UID: \"a64f4e2b-5a7f-4c68-8061-85bdabc8ffa4\") " pod="openshift-controller-manager/controller-manager-85496f666b-wlx89" Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.815997 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a64f4e2b-5a7f-4c68-8061-85bdabc8ffa4-client-ca\") pod \"controller-manager-85496f666b-wlx89\" (UID: \"a64f4e2b-5a7f-4c68-8061-85bdabc8ffa4\") " pod="openshift-controller-manager/controller-manager-85496f666b-wlx89" Nov 29 07:45:22 crc 
kubenswrapper[4795]: I1129 07:45:22.816032 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a64f4e2b-5a7f-4c68-8061-85bdabc8ffa4-proxy-ca-bundles\") pod \"controller-manager-85496f666b-wlx89\" (UID: \"a64f4e2b-5a7f-4c68-8061-85bdabc8ffa4\") " pod="openshift-controller-manager/controller-manager-85496f666b-wlx89" Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.816081 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2kxn\" (UniqueName: \"kubernetes.io/projected/a64f4e2b-5a7f-4c68-8061-85bdabc8ffa4-kube-api-access-f2kxn\") pod \"controller-manager-85496f666b-wlx89\" (UID: \"a64f4e2b-5a7f-4c68-8061-85bdabc8ffa4\") " pod="openshift-controller-manager/controller-manager-85496f666b-wlx89" Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.816106 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a64f4e2b-5a7f-4c68-8061-85bdabc8ffa4-serving-cert\") pod \"controller-manager-85496f666b-wlx89\" (UID: \"a64f4e2b-5a7f-4c68-8061-85bdabc8ffa4\") " pod="openshift-controller-manager/controller-manager-85496f666b-wlx89" Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.816294 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df6ce8d7-89a4-46f1-a10c-5f86db452a20-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "df6ce8d7-89a4-46f1-a10c-5f86db452a20" (UID: "df6ce8d7-89a4-46f1-a10c-5f86db452a20"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.816361 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df6ce8d7-89a4-46f1-a10c-5f86db452a20-config" (OuterVolumeSpecName: "config") pod "df6ce8d7-89a4-46f1-a10c-5f86db452a20" (UID: "df6ce8d7-89a4-46f1-a10c-5f86db452a20"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.816498 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df6ce8d7-89a4-46f1-a10c-5f86db452a20-client-ca" (OuterVolumeSpecName: "client-ca") pod "df6ce8d7-89a4-46f1-a10c-5f86db452a20" (UID: "df6ce8d7-89a4-46f1-a10c-5f86db452a20"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.820431 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df6ce8d7-89a4-46f1-a10c-5f86db452a20-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "df6ce8d7-89a4-46f1-a10c-5f86db452a20" (UID: "df6ce8d7-89a4-46f1-a10c-5f86db452a20"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.821517 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df6ce8d7-89a4-46f1-a10c-5f86db452a20-kube-api-access-sjcvm" (OuterVolumeSpecName: "kube-api-access-sjcvm") pod "df6ce8d7-89a4-46f1-a10c-5f86db452a20" (UID: "df6ce8d7-89a4-46f1-a10c-5f86db452a20"). InnerVolumeSpecName "kube-api-access-sjcvm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.916916 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a64f4e2b-5a7f-4c68-8061-85bdabc8ffa4-client-ca\") pod \"controller-manager-85496f666b-wlx89\" (UID: \"a64f4e2b-5a7f-4c68-8061-85bdabc8ffa4\") " pod="openshift-controller-manager/controller-manager-85496f666b-wlx89" Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.916984 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a64f4e2b-5a7f-4c68-8061-85bdabc8ffa4-proxy-ca-bundles\") pod \"controller-manager-85496f666b-wlx89\" (UID: \"a64f4e2b-5a7f-4c68-8061-85bdabc8ffa4\") " pod="openshift-controller-manager/controller-manager-85496f666b-wlx89" Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.917033 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2kxn\" (UniqueName: \"kubernetes.io/projected/a64f4e2b-5a7f-4c68-8061-85bdabc8ffa4-kube-api-access-f2kxn\") pod \"controller-manager-85496f666b-wlx89\" (UID: \"a64f4e2b-5a7f-4c68-8061-85bdabc8ffa4\") " pod="openshift-controller-manager/controller-manager-85496f666b-wlx89" Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.917066 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a64f4e2b-5a7f-4c68-8061-85bdabc8ffa4-serving-cert\") pod \"controller-manager-85496f666b-wlx89\" (UID: \"a64f4e2b-5a7f-4c68-8061-85bdabc8ffa4\") " pod="openshift-controller-manager/controller-manager-85496f666b-wlx89" Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.917123 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a64f4e2b-5a7f-4c68-8061-85bdabc8ffa4-config\") pod 
\"controller-manager-85496f666b-wlx89\" (UID: \"a64f4e2b-5a7f-4c68-8061-85bdabc8ffa4\") " pod="openshift-controller-manager/controller-manager-85496f666b-wlx89" Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.917167 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjcvm\" (UniqueName: \"kubernetes.io/projected/df6ce8d7-89a4-46f1-a10c-5f86db452a20-kube-api-access-sjcvm\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.917180 4795 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df6ce8d7-89a4-46f1-a10c-5f86db452a20-client-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.917193 4795 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df6ce8d7-89a4-46f1-a10c-5f86db452a20-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.917204 4795 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df6ce8d7-89a4-46f1-a10c-5f86db452a20-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.917216 4795 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df6ce8d7-89a4-46f1-a10c-5f86db452a20-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.918716 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a64f4e2b-5a7f-4c68-8061-85bdabc8ffa4-client-ca\") pod \"controller-manager-85496f666b-wlx89\" (UID: \"a64f4e2b-5a7f-4c68-8061-85bdabc8ffa4\") " pod="openshift-controller-manager/controller-manager-85496f666b-wlx89" Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.919262 4795 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a64f4e2b-5a7f-4c68-8061-85bdabc8ffa4-proxy-ca-bundles\") pod \"controller-manager-85496f666b-wlx89\" (UID: \"a64f4e2b-5a7f-4c68-8061-85bdabc8ffa4\") " pod="openshift-controller-manager/controller-manager-85496f666b-wlx89" Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.919407 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a64f4e2b-5a7f-4c68-8061-85bdabc8ffa4-config\") pod \"controller-manager-85496f666b-wlx89\" (UID: \"a64f4e2b-5a7f-4c68-8061-85bdabc8ffa4\") " pod="openshift-controller-manager/controller-manager-85496f666b-wlx89" Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.922520 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a64f4e2b-5a7f-4c68-8061-85bdabc8ffa4-serving-cert\") pod \"controller-manager-85496f666b-wlx89\" (UID: \"a64f4e2b-5a7f-4c68-8061-85bdabc8ffa4\") " pod="openshift-controller-manager/controller-manager-85496f666b-wlx89" Nov 29 07:45:22 crc kubenswrapper[4795]: I1129 07:45:22.932105 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2kxn\" (UniqueName: \"kubernetes.io/projected/a64f4e2b-5a7f-4c68-8061-85bdabc8ffa4-kube-api-access-f2kxn\") pod \"controller-manager-85496f666b-wlx89\" (UID: \"a64f4e2b-5a7f-4c68-8061-85bdabc8ffa4\") " pod="openshift-controller-manager/controller-manager-85496f666b-wlx89" Nov 29 07:45:23 crc kubenswrapper[4795]: I1129 07:45:23.033540 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-85496f666b-wlx89" Nov 29 07:45:23 crc kubenswrapper[4795]: I1129 07:45:23.076656 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d48c458cb-mdstf"] Nov 29 07:45:23 crc kubenswrapper[4795]: I1129 07:45:23.080452 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-d48c458cb-mdstf"] Nov 29 07:45:23 crc kubenswrapper[4795]: I1129 07:45:23.417054 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-85496f666b-wlx89"] Nov 29 07:45:23 crc kubenswrapper[4795]: I1129 07:45:23.745415 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85496f666b-wlx89" event={"ID":"a64f4e2b-5a7f-4c68-8061-85bdabc8ffa4","Type":"ContainerStarted","Data":"8d8d91b4484bf0fb02e868a95a231db0690411878e6f73e3734c07a88a9368d1"} Nov 29 07:45:23 crc kubenswrapper[4795]: I1129 07:45:23.745469 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85496f666b-wlx89" event={"ID":"a64f4e2b-5a7f-4c68-8061-85bdabc8ffa4","Type":"ContainerStarted","Data":"f89922b3b63391aa9ab8032480c8240f0e766ab970cfef114a5eda5723839f4e"} Nov 29 07:45:23 crc kubenswrapper[4795]: I1129 07:45:23.745862 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-85496f666b-wlx89" Nov 29 07:45:23 crc kubenswrapper[4795]: I1129 07:45:23.751172 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-85496f666b-wlx89" Nov 29 07:45:23 crc kubenswrapper[4795]: I1129 07:45:23.792387 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-85496f666b-wlx89" 
podStartSLOduration=3.7923691269999997 podStartE2EDuration="3.792369127s" podCreationTimestamp="2025-11-29 07:45:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:45:23.766501591 +0000 UTC m=+369.742077381" watchObservedRunningTime="2025-11-29 07:45:23.792369127 +0000 UTC m=+369.767944917" Nov 29 07:45:24 crc kubenswrapper[4795]: I1129 07:45:24.282821 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df6ce8d7-89a4-46f1-a10c-5f86db452a20" path="/var/lib/kubelet/pods/df6ce8d7-89a4-46f1-a10c-5f86db452a20/volumes" Nov 29 07:45:35 crc kubenswrapper[4795]: I1129 07:45:35.409712 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h8pp6"] Nov 29 07:45:35 crc kubenswrapper[4795]: I1129 07:45:35.411876 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h8pp6" podUID="a04ecf63-b125-4a0e-9869-403c9cca5648" containerName="registry-server" containerID="cri-o://69537845c3b2f03fc18a33867d3b13fff0b928a264fea1005b6639a22cf266d3" gracePeriod=30 Nov 29 07:45:35 crc kubenswrapper[4795]: I1129 07:45:35.422675 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vx8d5"] Nov 29 07:45:35 crc kubenswrapper[4795]: I1129 07:45:35.422921 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vx8d5" podUID="0deb15dc-57ff-4c83-8e81-ea7ebfda038d" containerName="registry-server" containerID="cri-o://fa1222e8b368f97208d5aedac2a28a6190187a6b679460a628dee8d1ad8f3192" gracePeriod=30 Nov 29 07:45:35 crc kubenswrapper[4795]: I1129 07:45:35.425873 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-l2khx"] Nov 29 07:45:35 crc kubenswrapper[4795]: I1129 07:45:35.426059 4795 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-l2khx" podUID="5550b70a-4101-4b56-8f88-c6339baaf188" containerName="marketplace-operator" containerID="cri-o://779841ac56099d3f6afb28f8490df34f6b23d4f3552629dbb24b7e04a0767446" gracePeriod=30 Nov 29 07:45:35 crc kubenswrapper[4795]: I1129 07:45:35.437430 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8bzbm"] Nov 29 07:45:35 crc kubenswrapper[4795]: I1129 07:45:35.442177 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8bzbm" podUID="96ff138b-b30f-4a36-9c9c-76cf2c9e8e89" containerName="registry-server" containerID="cri-o://d93d118a033bf15573d9e5aa3a9a66036e6996e9fe5801607d59446bcb1fd8ee" gracePeriod=30 Nov 29 07:45:35 crc kubenswrapper[4795]: I1129 07:45:35.443434 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nx2gj"] Nov 29 07:45:35 crc kubenswrapper[4795]: I1129 07:45:35.444266 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nx2gj" Nov 29 07:45:35 crc kubenswrapper[4795]: I1129 07:45:35.454339 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d4htg"] Nov 29 07:45:35 crc kubenswrapper[4795]: I1129 07:45:35.454658 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d4htg" podUID="097562dd-99cb-4451-ac96-c1cdfd8cc4f4" containerName="registry-server" containerID="cri-o://d3259e67864daffa16284888cb554ca3c689dd26ef2352dd25c311e6c1e45b9e" gracePeriod=30 Nov 29 07:45:35 crc kubenswrapper[4795]: I1129 07:45:35.467523 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nx2gj"] Nov 29 07:45:35 crc kubenswrapper[4795]: E1129 07:45:35.534014 4795 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 69537845c3b2f03fc18a33867d3b13fff0b928a264fea1005b6639a22cf266d3 is running failed: container process not found" containerID="69537845c3b2f03fc18a33867d3b13fff0b928a264fea1005b6639a22cf266d3" cmd=["grpc_health_probe","-addr=:50051"] Nov 29 07:45:35 crc kubenswrapper[4795]: E1129 07:45:35.536354 4795 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 69537845c3b2f03fc18a33867d3b13fff0b928a264fea1005b6639a22cf266d3 is running failed: container process not found" containerID="69537845c3b2f03fc18a33867d3b13fff0b928a264fea1005b6639a22cf266d3" cmd=["grpc_health_probe","-addr=:50051"] Nov 29 07:45:35 crc kubenswrapper[4795]: E1129 07:45:35.536805 4795 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 69537845c3b2f03fc18a33867d3b13fff0b928a264fea1005b6639a22cf266d3 is 
running failed: container process not found" containerID="69537845c3b2f03fc18a33867d3b13fff0b928a264fea1005b6639a22cf266d3" cmd=["grpc_health_probe","-addr=:50051"] Nov 29 07:45:35 crc kubenswrapper[4795]: E1129 07:45:35.536867 4795 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 69537845c3b2f03fc18a33867d3b13fff0b928a264fea1005b6639a22cf266d3 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-h8pp6" podUID="a04ecf63-b125-4a0e-9869-403c9cca5648" containerName="registry-server" Nov 29 07:45:35 crc kubenswrapper[4795]: I1129 07:45:35.577547 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwwc8\" (UniqueName: \"kubernetes.io/projected/d49cd6f6-0b90-4c8f-9e8f-30a52c232522-kube-api-access-rwwc8\") pod \"marketplace-operator-79b997595-nx2gj\" (UID: \"d49cd6f6-0b90-4c8f-9e8f-30a52c232522\") " pod="openshift-marketplace/marketplace-operator-79b997595-nx2gj" Nov 29 07:45:35 crc kubenswrapper[4795]: I1129 07:45:35.577606 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d49cd6f6-0b90-4c8f-9e8f-30a52c232522-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nx2gj\" (UID: \"d49cd6f6-0b90-4c8f-9e8f-30a52c232522\") " pod="openshift-marketplace/marketplace-operator-79b997595-nx2gj" Nov 29 07:45:35 crc kubenswrapper[4795]: I1129 07:45:35.577647 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d49cd6f6-0b90-4c8f-9e8f-30a52c232522-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nx2gj\" (UID: \"d49cd6f6-0b90-4c8f-9e8f-30a52c232522\") " pod="openshift-marketplace/marketplace-operator-79b997595-nx2gj" Nov 29 
07:45:35 crc kubenswrapper[4795]: I1129 07:45:35.680684 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d49cd6f6-0b90-4c8f-9e8f-30a52c232522-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nx2gj\" (UID: \"d49cd6f6-0b90-4c8f-9e8f-30a52c232522\") " pod="openshift-marketplace/marketplace-operator-79b997595-nx2gj" Nov 29 07:45:35 crc kubenswrapper[4795]: I1129 07:45:35.681091 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d49cd6f6-0b90-4c8f-9e8f-30a52c232522-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nx2gj\" (UID: \"d49cd6f6-0b90-4c8f-9e8f-30a52c232522\") " pod="openshift-marketplace/marketplace-operator-79b997595-nx2gj" Nov 29 07:45:35 crc kubenswrapper[4795]: I1129 07:45:35.681159 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwwc8\" (UniqueName: \"kubernetes.io/projected/d49cd6f6-0b90-4c8f-9e8f-30a52c232522-kube-api-access-rwwc8\") pod \"marketplace-operator-79b997595-nx2gj\" (UID: \"d49cd6f6-0b90-4c8f-9e8f-30a52c232522\") " pod="openshift-marketplace/marketplace-operator-79b997595-nx2gj" Nov 29 07:45:35 crc kubenswrapper[4795]: I1129 07:45:35.684210 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d49cd6f6-0b90-4c8f-9e8f-30a52c232522-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nx2gj\" (UID: \"d49cd6f6-0b90-4c8f-9e8f-30a52c232522\") " pod="openshift-marketplace/marketplace-operator-79b997595-nx2gj" Nov 29 07:45:35 crc kubenswrapper[4795]: I1129 07:45:35.691990 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d49cd6f6-0b90-4c8f-9e8f-30a52c232522-marketplace-operator-metrics\") 
pod \"marketplace-operator-79b997595-nx2gj\" (UID: \"d49cd6f6-0b90-4c8f-9e8f-30a52c232522\") " pod="openshift-marketplace/marketplace-operator-79b997595-nx2gj" Nov 29 07:45:35 crc kubenswrapper[4795]: I1129 07:45:35.707538 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwwc8\" (UniqueName: \"kubernetes.io/projected/d49cd6f6-0b90-4c8f-9e8f-30a52c232522-kube-api-access-rwwc8\") pod \"marketplace-operator-79b997595-nx2gj\" (UID: \"d49cd6f6-0b90-4c8f-9e8f-30a52c232522\") " pod="openshift-marketplace/marketplace-operator-79b997595-nx2gj" Nov 29 07:45:35 crc kubenswrapper[4795]: I1129 07:45:35.761788 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nx2gj" Nov 29 07:45:35 crc kubenswrapper[4795]: I1129 07:45:35.815614 4795 generic.go:334] "Generic (PLEG): container finished" podID="96ff138b-b30f-4a36-9c9c-76cf2c9e8e89" containerID="d93d118a033bf15573d9e5aa3a9a66036e6996e9fe5801607d59446bcb1fd8ee" exitCode=0 Nov 29 07:45:35 crc kubenswrapper[4795]: I1129 07:45:35.815674 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8bzbm" event={"ID":"96ff138b-b30f-4a36-9c9c-76cf2c9e8e89","Type":"ContainerDied","Data":"d93d118a033bf15573d9e5aa3a9a66036e6996e9fe5801607d59446bcb1fd8ee"} Nov 29 07:45:35 crc kubenswrapper[4795]: I1129 07:45:35.819328 4795 generic.go:334] "Generic (PLEG): container finished" podID="a04ecf63-b125-4a0e-9869-403c9cca5648" containerID="69537845c3b2f03fc18a33867d3b13fff0b928a264fea1005b6639a22cf266d3" exitCode=0 Nov 29 07:45:35 crc kubenswrapper[4795]: I1129 07:45:35.819379 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h8pp6" event={"ID":"a04ecf63-b125-4a0e-9869-403c9cca5648","Type":"ContainerDied","Data":"69537845c3b2f03fc18a33867d3b13fff0b928a264fea1005b6639a22cf266d3"} Nov 29 07:45:35 crc kubenswrapper[4795]: I1129 
07:45:35.821285 4795 generic.go:334] "Generic (PLEG): container finished" podID="0deb15dc-57ff-4c83-8e81-ea7ebfda038d" containerID="fa1222e8b368f97208d5aedac2a28a6190187a6b679460a628dee8d1ad8f3192" exitCode=0 Nov 29 07:45:35 crc kubenswrapper[4795]: I1129 07:45:35.821335 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vx8d5" event={"ID":"0deb15dc-57ff-4c83-8e81-ea7ebfda038d","Type":"ContainerDied","Data":"fa1222e8b368f97208d5aedac2a28a6190187a6b679460a628dee8d1ad8f3192"} Nov 29 07:45:35 crc kubenswrapper[4795]: I1129 07:45:35.822994 4795 generic.go:334] "Generic (PLEG): container finished" podID="5550b70a-4101-4b56-8f88-c6339baaf188" containerID="779841ac56099d3f6afb28f8490df34f6b23d4f3552629dbb24b7e04a0767446" exitCode=0 Nov 29 07:45:35 crc kubenswrapper[4795]: I1129 07:45:35.823029 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-l2khx" event={"ID":"5550b70a-4101-4b56-8f88-c6339baaf188","Type":"ContainerDied","Data":"779841ac56099d3f6afb28f8490df34f6b23d4f3552629dbb24b7e04a0767446"} Nov 29 07:45:35 crc kubenswrapper[4795]: I1129 07:45:35.830053 4795 generic.go:334] "Generic (PLEG): container finished" podID="097562dd-99cb-4451-ac96-c1cdfd8cc4f4" containerID="d3259e67864daffa16284888cb554ca3c689dd26ef2352dd25c311e6c1e45b9e" exitCode=0 Nov 29 07:45:35 crc kubenswrapper[4795]: I1129 07:45:35.830107 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4htg" event={"ID":"097562dd-99cb-4451-ac96-c1cdfd8cc4f4","Type":"ContainerDied","Data":"d3259e67864daffa16284888cb554ca3c689dd26ef2352dd25c311e6c1e45b9e"} Nov 29 07:45:35 crc kubenswrapper[4795]: I1129 07:45:35.994057 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d4htg" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.160395 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h8pp6" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.161682 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8bzbm" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.190265 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/097562dd-99cb-4451-ac96-c1cdfd8cc4f4-utilities\") pod \"097562dd-99cb-4451-ac96-c1cdfd8cc4f4\" (UID: \"097562dd-99cb-4451-ac96-c1cdfd8cc4f4\") " Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.190444 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/097562dd-99cb-4451-ac96-c1cdfd8cc4f4-catalog-content\") pod \"097562dd-99cb-4451-ac96-c1cdfd8cc4f4\" (UID: \"097562dd-99cb-4451-ac96-c1cdfd8cc4f4\") " Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.190494 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lp5vh\" (UniqueName: \"kubernetes.io/projected/097562dd-99cb-4451-ac96-c1cdfd8cc4f4-kube-api-access-lp5vh\") pod \"097562dd-99cb-4451-ac96-c1cdfd8cc4f4\" (UID: \"097562dd-99cb-4451-ac96-c1cdfd8cc4f4\") " Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.191309 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/097562dd-99cb-4451-ac96-c1cdfd8cc4f4-utilities" (OuterVolumeSpecName: "utilities") pod "097562dd-99cb-4451-ac96-c1cdfd8cc4f4" (UID: "097562dd-99cb-4451-ac96-c1cdfd8cc4f4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.192373 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-l2khx" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.194092 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/097562dd-99cb-4451-ac96-c1cdfd8cc4f4-kube-api-access-lp5vh" (OuterVolumeSpecName: "kube-api-access-lp5vh") pod "097562dd-99cb-4451-ac96-c1cdfd8cc4f4" (UID: "097562dd-99cb-4451-ac96-c1cdfd8cc4f4"). InnerVolumeSpecName "kube-api-access-lp5vh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.291678 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96ff138b-b30f-4a36-9c9c-76cf2c9e8e89-utilities\") pod \"96ff138b-b30f-4a36-9c9c-76cf2c9e8e89\" (UID: \"96ff138b-b30f-4a36-9c9c-76cf2c9e8e89\") " Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.291725 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96ff138b-b30f-4a36-9c9c-76cf2c9e8e89-catalog-content\") pod \"96ff138b-b30f-4a36-9c9c-76cf2c9e8e89\" (UID: \"96ff138b-b30f-4a36-9c9c-76cf2c9e8e89\") " Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.291757 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kmmg\" (UniqueName: \"kubernetes.io/projected/a04ecf63-b125-4a0e-9869-403c9cca5648-kube-api-access-4kmmg\") pod \"a04ecf63-b125-4a0e-9869-403c9cca5648\" (UID: \"a04ecf63-b125-4a0e-9869-403c9cca5648\") " Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.292262 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z98xv\" (UniqueName: 
\"kubernetes.io/projected/96ff138b-b30f-4a36-9c9c-76cf2c9e8e89-kube-api-access-z98xv\") pod \"96ff138b-b30f-4a36-9c9c-76cf2c9e8e89\" (UID: \"96ff138b-b30f-4a36-9c9c-76cf2c9e8e89\") " Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.292327 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a04ecf63-b125-4a0e-9869-403c9cca5648-catalog-content\") pod \"a04ecf63-b125-4a0e-9869-403c9cca5648\" (UID: \"a04ecf63-b125-4a0e-9869-403c9cca5648\") " Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.292388 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a04ecf63-b125-4a0e-9869-403c9cca5648-utilities\") pod \"a04ecf63-b125-4a0e-9869-403c9cca5648\" (UID: \"a04ecf63-b125-4a0e-9869-403c9cca5648\") " Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.292416 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5550b70a-4101-4b56-8f88-c6339baaf188-marketplace-operator-metrics\") pod \"5550b70a-4101-4b56-8f88-c6339baaf188\" (UID: \"5550b70a-4101-4b56-8f88-c6339baaf188\") " Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.292466 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5550b70a-4101-4b56-8f88-c6339baaf188-marketplace-trusted-ca\") pod \"5550b70a-4101-4b56-8f88-c6339baaf188\" (UID: \"5550b70a-4101-4b56-8f88-c6339baaf188\") " Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.292493 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5kpq\" (UniqueName: \"kubernetes.io/projected/5550b70a-4101-4b56-8f88-c6339baaf188-kube-api-access-c5kpq\") pod \"5550b70a-4101-4b56-8f88-c6339baaf188\" (UID: \"5550b70a-4101-4b56-8f88-c6339baaf188\") " 
Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.292876 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96ff138b-b30f-4a36-9c9c-76cf2c9e8e89-utilities" (OuterVolumeSpecName: "utilities") pod "96ff138b-b30f-4a36-9c9c-76cf2c9e8e89" (UID: "96ff138b-b30f-4a36-9c9c-76cf2c9e8e89"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.293815 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a04ecf63-b125-4a0e-9869-403c9cca5648-utilities" (OuterVolumeSpecName: "utilities") pod "a04ecf63-b125-4a0e-9869-403c9cca5648" (UID: "a04ecf63-b125-4a0e-9869-403c9cca5648"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.293911 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/097562dd-99cb-4451-ac96-c1cdfd8cc4f4-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.294148 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lp5vh\" (UniqueName: \"kubernetes.io/projected/097562dd-99cb-4451-ac96-c1cdfd8cc4f4-kube-api-access-lp5vh\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.294632 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5550b70a-4101-4b56-8f88-c6339baaf188-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "5550b70a-4101-4b56-8f88-c6339baaf188" (UID: "5550b70a-4101-4b56-8f88-c6339baaf188"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.299188 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a04ecf63-b125-4a0e-9869-403c9cca5648-kube-api-access-4kmmg" (OuterVolumeSpecName: "kube-api-access-4kmmg") pod "a04ecf63-b125-4a0e-9869-403c9cca5648" (UID: "a04ecf63-b125-4a0e-9869-403c9cca5648"). InnerVolumeSpecName "kube-api-access-4kmmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.299641 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96ff138b-b30f-4a36-9c9c-76cf2c9e8e89-kube-api-access-z98xv" (OuterVolumeSpecName: "kube-api-access-z98xv") pod "96ff138b-b30f-4a36-9c9c-76cf2c9e8e89" (UID: "96ff138b-b30f-4a36-9c9c-76cf2c9e8e89"). InnerVolumeSpecName "kube-api-access-z98xv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.299728 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5550b70a-4101-4b56-8f88-c6339baaf188-kube-api-access-c5kpq" (OuterVolumeSpecName: "kube-api-access-c5kpq") pod "5550b70a-4101-4b56-8f88-c6339baaf188" (UID: "5550b70a-4101-4b56-8f88-c6339baaf188"). InnerVolumeSpecName "kube-api-access-c5kpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.305522 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5550b70a-4101-4b56-8f88-c6339baaf188-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "5550b70a-4101-4b56-8f88-c6339baaf188" (UID: "5550b70a-4101-4b56-8f88-c6339baaf188"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.306279 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nx2gj"] Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.327370 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96ff138b-b30f-4a36-9c9c-76cf2c9e8e89-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "96ff138b-b30f-4a36-9c9c-76cf2c9e8e89" (UID: "96ff138b-b30f-4a36-9c9c-76cf2c9e8e89"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.333381 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/097562dd-99cb-4451-ac96-c1cdfd8cc4f4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "097562dd-99cb-4451-ac96-c1cdfd8cc4f4" (UID: "097562dd-99cb-4451-ac96-c1cdfd8cc4f4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.348748 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a04ecf63-b125-4a0e-9869-403c9cca5648-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a04ecf63-b125-4a0e-9869-403c9cca5648" (UID: "a04ecf63-b125-4a0e-9869-403c9cca5648"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.395915 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96ff138b-b30f-4a36-9c9c-76cf2c9e8e89-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.395981 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kmmg\" (UniqueName: \"kubernetes.io/projected/a04ecf63-b125-4a0e-9869-403c9cca5648-kube-api-access-4kmmg\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.396004 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z98xv\" (UniqueName: \"kubernetes.io/projected/96ff138b-b30f-4a36-9c9c-76cf2c9e8e89-kube-api-access-z98xv\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.396022 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a04ecf63-b125-4a0e-9869-403c9cca5648-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.396036 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/097562dd-99cb-4451-ac96-c1cdfd8cc4f4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.396049 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a04ecf63-b125-4a0e-9869-403c9cca5648-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.396080 4795 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5550b70a-4101-4b56-8f88-c6339baaf188-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:36 
crc kubenswrapper[4795]: I1129 07:45:36.396100 4795 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5550b70a-4101-4b56-8f88-c6339baaf188-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.396115 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5kpq\" (UniqueName: \"kubernetes.io/projected/5550b70a-4101-4b56-8f88-c6339baaf188-kube-api-access-c5kpq\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.396127 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96ff138b-b30f-4a36-9c9c-76cf2c9e8e89-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.438558 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vx8d5" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.496816 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7kf7\" (UniqueName: \"kubernetes.io/projected/0deb15dc-57ff-4c83-8e81-ea7ebfda038d-kube-api-access-k7kf7\") pod \"0deb15dc-57ff-4c83-8e81-ea7ebfda038d\" (UID: \"0deb15dc-57ff-4c83-8e81-ea7ebfda038d\") " Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.496968 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0deb15dc-57ff-4c83-8e81-ea7ebfda038d-utilities\") pod \"0deb15dc-57ff-4c83-8e81-ea7ebfda038d\" (UID: \"0deb15dc-57ff-4c83-8e81-ea7ebfda038d\") " Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.497024 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0deb15dc-57ff-4c83-8e81-ea7ebfda038d-catalog-content\") pod 
\"0deb15dc-57ff-4c83-8e81-ea7ebfda038d\" (UID: \"0deb15dc-57ff-4c83-8e81-ea7ebfda038d\") " Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.498701 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0deb15dc-57ff-4c83-8e81-ea7ebfda038d-utilities" (OuterVolumeSpecName: "utilities") pod "0deb15dc-57ff-4c83-8e81-ea7ebfda038d" (UID: "0deb15dc-57ff-4c83-8e81-ea7ebfda038d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.503429 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0deb15dc-57ff-4c83-8e81-ea7ebfda038d-kube-api-access-k7kf7" (OuterVolumeSpecName: "kube-api-access-k7kf7") pod "0deb15dc-57ff-4c83-8e81-ea7ebfda038d" (UID: "0deb15dc-57ff-4c83-8e81-ea7ebfda038d"). InnerVolumeSpecName "kube-api-access-k7kf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.555001 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0deb15dc-57ff-4c83-8e81-ea7ebfda038d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0deb15dc-57ff-4c83-8e81-ea7ebfda038d" (UID: "0deb15dc-57ff-4c83-8e81-ea7ebfda038d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.598744 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7kf7\" (UniqueName: \"kubernetes.io/projected/0deb15dc-57ff-4c83-8e81-ea7ebfda038d-kube-api-access-k7kf7\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.598779 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0deb15dc-57ff-4c83-8e81-ea7ebfda038d-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.598789 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0deb15dc-57ff-4c83-8e81-ea7ebfda038d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.838914 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8bzbm" event={"ID":"96ff138b-b30f-4a36-9c9c-76cf2c9e8e89","Type":"ContainerDied","Data":"ca97bbc1c53a21185c86c199b52df2571c303d0468f4ec18e347028bc614c41f"} Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.838931 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8bzbm" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.839205 4795 scope.go:117] "RemoveContainer" containerID="d93d118a033bf15573d9e5aa3a9a66036e6996e9fe5801607d59446bcb1fd8ee" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.840880 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nx2gj" event={"ID":"d49cd6f6-0b90-4c8f-9e8f-30a52c232522","Type":"ContainerStarted","Data":"c48baba8e465e5ee4cd23636af1f9e9f8cfcd28a75a54ae353281bc337f769c4"} Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.840928 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nx2gj" event={"ID":"d49cd6f6-0b90-4c8f-9e8f-30a52c232522","Type":"ContainerStarted","Data":"035a061412e2e420046870eea14856b820e307c0f1c95f07e3db6a633c797dde"} Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.841258 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-nx2gj" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.844064 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h8pp6" event={"ID":"a04ecf63-b125-4a0e-9869-403c9cca5648","Type":"ContainerDied","Data":"2060d5fa0c4006d668aec5b7d7ca234de16309d36552db5f2db0feb93fc0991b"} Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.844201 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h8pp6" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.854409 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vx8d5" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.854455 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vx8d5" event={"ID":"0deb15dc-57ff-4c83-8e81-ea7ebfda038d","Type":"ContainerDied","Data":"1bf46239af51a19271ecea1d13ef3235d767e6b4225e906ac6e7e8dd561ce1f9"} Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.856616 4795 scope.go:117] "RemoveContainer" containerID="99316dba36faecd7c432d08e4965f0fe72e2d8310640149f69f5f986bba5c5e2" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.858500 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-nx2gj" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.859427 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-l2khx" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.859643 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-l2khx" event={"ID":"5550b70a-4101-4b56-8f88-c6339baaf188","Type":"ContainerDied","Data":"8b6dc9460cb05eccb1836001427b77918c1a8e4aaa671f120f1b02ce46a0f98c"} Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.861954 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4htg" event={"ID":"097562dd-99cb-4451-ac96-c1cdfd8cc4f4","Type":"ContainerDied","Data":"40ef80d60fd17bccfefb9b5b91dcb12a71f324459f3aaa9ffd40f7de31330187"} Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.862087 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d4htg" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.877133 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-nx2gj" podStartSLOduration=1.877112409 podStartE2EDuration="1.877112409s" podCreationTimestamp="2025-11-29 07:45:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:45:36.871962613 +0000 UTC m=+382.847538403" watchObservedRunningTime="2025-11-29 07:45:36.877112409 +0000 UTC m=+382.852688199" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.918507 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h8pp6"] Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.928111 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h8pp6"] Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.936381 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d4htg"] Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.939902 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d4htg"] Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.945966 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8bzbm"] Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.950225 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8bzbm"] Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.961378 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-l2khx"] Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.964968 4795 scope.go:117] "RemoveContainer" 
containerID="60c2de715ea55e2c78e75a0bd87f028677e22ba9e78e12e450593cdfbb817d03" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.965797 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-l2khx"] Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.978381 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vx8d5"] Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.986945 4795 scope.go:117] "RemoveContainer" containerID="69537845c3b2f03fc18a33867d3b13fff0b928a264fea1005b6639a22cf266d3" Nov 29 07:45:36 crc kubenswrapper[4795]: I1129 07:45:36.989264 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vx8d5"] Nov 29 07:45:37 crc kubenswrapper[4795]: I1129 07:45:37.004328 4795 scope.go:117] "RemoveContainer" containerID="9f95cfa4cb57e999c5a8d53a190e431e0e1f491edd76b1a9d8f277ce23d86e9f" Nov 29 07:45:37 crc kubenswrapper[4795]: I1129 07:45:37.022449 4795 scope.go:117] "RemoveContainer" containerID="639d98257cba6e6ec2309b46dd4dcb61798babcd021c0856eb4ff5e297830a3c" Nov 29 07:45:37 crc kubenswrapper[4795]: I1129 07:45:37.043566 4795 scope.go:117] "RemoveContainer" containerID="fa1222e8b368f97208d5aedac2a28a6190187a6b679460a628dee8d1ad8f3192" Nov 29 07:45:37 crc kubenswrapper[4795]: I1129 07:45:37.067102 4795 scope.go:117] "RemoveContainer" containerID="65b9e5fdde52dc4c7a4b8c438621bf409f0c82cfbf1cdc06c7cb9dd28790ab49" Nov 29 07:45:37 crc kubenswrapper[4795]: I1129 07:45:37.083079 4795 scope.go:117] "RemoveContainer" containerID="26d7c6cf7ad98ba57174b6e4dfe85d7c7581dc10c57022b2683c79bc887e772c" Nov 29 07:45:37 crc kubenswrapper[4795]: I1129 07:45:37.101672 4795 scope.go:117] "RemoveContainer" containerID="779841ac56099d3f6afb28f8490df34f6b23d4f3552629dbb24b7e04a0767446" Nov 29 07:45:37 crc kubenswrapper[4795]: I1129 07:45:37.114692 4795 scope.go:117] "RemoveContainer" 
containerID="d3259e67864daffa16284888cb554ca3c689dd26ef2352dd25c311e6c1e45b9e" Nov 29 07:45:37 crc kubenswrapper[4795]: I1129 07:45:37.126744 4795 scope.go:117] "RemoveContainer" containerID="2aa848deac074a7d4bb070d5bdcb59e861aa298b836984e036e1f522ff4ef0da" Nov 29 07:45:37 crc kubenswrapper[4795]: I1129 07:45:37.140815 4795 scope.go:117] "RemoveContainer" containerID="eefb0c0e0099912b753a70b90afd3455e1ece7e91e1d7b45424b832d30c0175f" Nov 29 07:45:38 crc kubenswrapper[4795]: I1129 07:45:38.281542 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="097562dd-99cb-4451-ac96-c1cdfd8cc4f4" path="/var/lib/kubelet/pods/097562dd-99cb-4451-ac96-c1cdfd8cc4f4/volumes" Nov 29 07:45:38 crc kubenswrapper[4795]: I1129 07:45:38.283127 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0deb15dc-57ff-4c83-8e81-ea7ebfda038d" path="/var/lib/kubelet/pods/0deb15dc-57ff-4c83-8e81-ea7ebfda038d/volumes" Nov 29 07:45:38 crc kubenswrapper[4795]: I1129 07:45:38.283884 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5550b70a-4101-4b56-8f88-c6339baaf188" path="/var/lib/kubelet/pods/5550b70a-4101-4b56-8f88-c6339baaf188/volumes" Nov 29 07:45:38 crc kubenswrapper[4795]: I1129 07:45:38.285033 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96ff138b-b30f-4a36-9c9c-76cf2c9e8e89" path="/var/lib/kubelet/pods/96ff138b-b30f-4a36-9c9c-76cf2c9e8e89/volumes" Nov 29 07:45:38 crc kubenswrapper[4795]: I1129 07:45:38.285700 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a04ecf63-b125-4a0e-9869-403c9cca5648" path="/var/lib/kubelet/pods/a04ecf63-b125-4a0e-9869-403c9cca5648/volumes" Nov 29 07:45:38 crc kubenswrapper[4795]: I1129 07:45:38.425349 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8vhrd"] Nov 29 07:45:38 crc kubenswrapper[4795]: E1129 07:45:38.425550 4795 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="097562dd-99cb-4451-ac96-c1cdfd8cc4f4" containerName="extract-content" Nov 29 07:45:38 crc kubenswrapper[4795]: I1129 07:45:38.425560 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="097562dd-99cb-4451-ac96-c1cdfd8cc4f4" containerName="extract-content" Nov 29 07:45:38 crc kubenswrapper[4795]: E1129 07:45:38.425568 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0deb15dc-57ff-4c83-8e81-ea7ebfda038d" containerName="registry-server" Nov 29 07:45:38 crc kubenswrapper[4795]: I1129 07:45:38.425573 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="0deb15dc-57ff-4c83-8e81-ea7ebfda038d" containerName="registry-server" Nov 29 07:45:38 crc kubenswrapper[4795]: E1129 07:45:38.425581 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="097562dd-99cb-4451-ac96-c1cdfd8cc4f4" containerName="extract-utilities" Nov 29 07:45:38 crc kubenswrapper[4795]: I1129 07:45:38.425587 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="097562dd-99cb-4451-ac96-c1cdfd8cc4f4" containerName="extract-utilities" Nov 29 07:45:38 crc kubenswrapper[4795]: E1129 07:45:38.425617 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a04ecf63-b125-4a0e-9869-403c9cca5648" containerName="extract-content" Nov 29 07:45:38 crc kubenswrapper[4795]: I1129 07:45:38.425628 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="a04ecf63-b125-4a0e-9869-403c9cca5648" containerName="extract-content" Nov 29 07:45:38 crc kubenswrapper[4795]: E1129 07:45:38.425642 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="097562dd-99cb-4451-ac96-c1cdfd8cc4f4" containerName="registry-server" Nov 29 07:45:38 crc kubenswrapper[4795]: I1129 07:45:38.425650 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="097562dd-99cb-4451-ac96-c1cdfd8cc4f4" containerName="registry-server" Nov 29 07:45:38 crc kubenswrapper[4795]: E1129 07:45:38.425658 4795 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="96ff138b-b30f-4a36-9c9c-76cf2c9e8e89" containerName="extract-utilities" Nov 29 07:45:38 crc kubenswrapper[4795]: I1129 07:45:38.425663 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="96ff138b-b30f-4a36-9c9c-76cf2c9e8e89" containerName="extract-utilities" Nov 29 07:45:38 crc kubenswrapper[4795]: E1129 07:45:38.425675 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5550b70a-4101-4b56-8f88-c6339baaf188" containerName="marketplace-operator" Nov 29 07:45:38 crc kubenswrapper[4795]: I1129 07:45:38.425680 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="5550b70a-4101-4b56-8f88-c6339baaf188" containerName="marketplace-operator" Nov 29 07:45:38 crc kubenswrapper[4795]: E1129 07:45:38.425688 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96ff138b-b30f-4a36-9c9c-76cf2c9e8e89" containerName="registry-server" Nov 29 07:45:38 crc kubenswrapper[4795]: I1129 07:45:38.425694 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="96ff138b-b30f-4a36-9c9c-76cf2c9e8e89" containerName="registry-server" Nov 29 07:45:38 crc kubenswrapper[4795]: E1129 07:45:38.425702 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a04ecf63-b125-4a0e-9869-403c9cca5648" containerName="extract-utilities" Nov 29 07:45:38 crc kubenswrapper[4795]: I1129 07:45:38.425707 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="a04ecf63-b125-4a0e-9869-403c9cca5648" containerName="extract-utilities" Nov 29 07:45:38 crc kubenswrapper[4795]: E1129 07:45:38.425717 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0deb15dc-57ff-4c83-8e81-ea7ebfda038d" containerName="extract-utilities" Nov 29 07:45:38 crc kubenswrapper[4795]: I1129 07:45:38.425722 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="0deb15dc-57ff-4c83-8e81-ea7ebfda038d" containerName="extract-utilities" Nov 29 07:45:38 crc kubenswrapper[4795]: E1129 07:45:38.425729 4795 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="0deb15dc-57ff-4c83-8e81-ea7ebfda038d" containerName="extract-content" Nov 29 07:45:38 crc kubenswrapper[4795]: I1129 07:45:38.425737 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="0deb15dc-57ff-4c83-8e81-ea7ebfda038d" containerName="extract-content" Nov 29 07:45:38 crc kubenswrapper[4795]: E1129 07:45:38.425745 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96ff138b-b30f-4a36-9c9c-76cf2c9e8e89" containerName="extract-content" Nov 29 07:45:38 crc kubenswrapper[4795]: I1129 07:45:38.425750 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="96ff138b-b30f-4a36-9c9c-76cf2c9e8e89" containerName="extract-content" Nov 29 07:45:38 crc kubenswrapper[4795]: E1129 07:45:38.425759 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a04ecf63-b125-4a0e-9869-403c9cca5648" containerName="registry-server" Nov 29 07:45:38 crc kubenswrapper[4795]: I1129 07:45:38.425764 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="a04ecf63-b125-4a0e-9869-403c9cca5648" containerName="registry-server" Nov 29 07:45:38 crc kubenswrapper[4795]: I1129 07:45:38.425851 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="96ff138b-b30f-4a36-9c9c-76cf2c9e8e89" containerName="registry-server" Nov 29 07:45:38 crc kubenswrapper[4795]: I1129 07:45:38.425859 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="a04ecf63-b125-4a0e-9869-403c9cca5648" containerName="registry-server" Nov 29 07:45:38 crc kubenswrapper[4795]: I1129 07:45:38.425874 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="097562dd-99cb-4451-ac96-c1cdfd8cc4f4" containerName="registry-server" Nov 29 07:45:38 crc kubenswrapper[4795]: I1129 07:45:38.425879 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="0deb15dc-57ff-4c83-8e81-ea7ebfda038d" containerName="registry-server" Nov 29 07:45:38 crc kubenswrapper[4795]: I1129 07:45:38.425887 4795 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="5550b70a-4101-4b56-8f88-c6339baaf188" containerName="marketplace-operator" Nov 29 07:45:38 crc kubenswrapper[4795]: I1129 07:45:38.426558 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8vhrd" Nov 29 07:45:38 crc kubenswrapper[4795]: I1129 07:45:38.428979 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 29 07:45:38 crc kubenswrapper[4795]: I1129 07:45:38.438716 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8vhrd"] Nov 29 07:45:38 crc kubenswrapper[4795]: I1129 07:45:38.621126 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edee21c8-0c84-4772-826f-0fa6de3076ba-utilities\") pod \"certified-operators-8vhrd\" (UID: \"edee21c8-0c84-4772-826f-0fa6de3076ba\") " pod="openshift-marketplace/certified-operators-8vhrd" Nov 29 07:45:38 crc kubenswrapper[4795]: I1129 07:45:38.621242 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pbr7\" (UniqueName: \"kubernetes.io/projected/edee21c8-0c84-4772-826f-0fa6de3076ba-kube-api-access-6pbr7\") pod \"certified-operators-8vhrd\" (UID: \"edee21c8-0c84-4772-826f-0fa6de3076ba\") " pod="openshift-marketplace/certified-operators-8vhrd" Nov 29 07:45:38 crc kubenswrapper[4795]: I1129 07:45:38.621285 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edee21c8-0c84-4772-826f-0fa6de3076ba-catalog-content\") pod \"certified-operators-8vhrd\" (UID: \"edee21c8-0c84-4772-826f-0fa6de3076ba\") " pod="openshift-marketplace/certified-operators-8vhrd" Nov 29 07:45:38 crc kubenswrapper[4795]: I1129 07:45:38.721941 4795 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edee21c8-0c84-4772-826f-0fa6de3076ba-utilities\") pod \"certified-operators-8vhrd\" (UID: \"edee21c8-0c84-4772-826f-0fa6de3076ba\") " pod="openshift-marketplace/certified-operators-8vhrd" Nov 29 07:45:38 crc kubenswrapper[4795]: I1129 07:45:38.722332 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pbr7\" (UniqueName: \"kubernetes.io/projected/edee21c8-0c84-4772-826f-0fa6de3076ba-kube-api-access-6pbr7\") pod \"certified-operators-8vhrd\" (UID: \"edee21c8-0c84-4772-826f-0fa6de3076ba\") " pod="openshift-marketplace/certified-operators-8vhrd" Nov 29 07:45:38 crc kubenswrapper[4795]: I1129 07:45:38.722359 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edee21c8-0c84-4772-826f-0fa6de3076ba-catalog-content\") pod \"certified-operators-8vhrd\" (UID: \"edee21c8-0c84-4772-826f-0fa6de3076ba\") " pod="openshift-marketplace/certified-operators-8vhrd" Nov 29 07:45:38 crc kubenswrapper[4795]: I1129 07:45:38.722577 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edee21c8-0c84-4772-826f-0fa6de3076ba-utilities\") pod \"certified-operators-8vhrd\" (UID: \"edee21c8-0c84-4772-826f-0fa6de3076ba\") " pod="openshift-marketplace/certified-operators-8vhrd" Nov 29 07:45:38 crc kubenswrapper[4795]: I1129 07:45:38.722988 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edee21c8-0c84-4772-826f-0fa6de3076ba-catalog-content\") pod \"certified-operators-8vhrd\" (UID: \"edee21c8-0c84-4772-826f-0fa6de3076ba\") " pod="openshift-marketplace/certified-operators-8vhrd" Nov 29 07:45:38 crc kubenswrapper[4795]: I1129 07:45:38.744676 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6pbr7\" (UniqueName: \"kubernetes.io/projected/edee21c8-0c84-4772-826f-0fa6de3076ba-kube-api-access-6pbr7\") pod \"certified-operators-8vhrd\" (UID: \"edee21c8-0c84-4772-826f-0fa6de3076ba\") " pod="openshift-marketplace/certified-operators-8vhrd"
Nov 29 07:45:38 crc kubenswrapper[4795]: I1129 07:45:38.746366 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8vhrd"
Nov 29 07:45:39 crc kubenswrapper[4795]: I1129 07:45:39.023580 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tr957"]
Nov 29 07:45:39 crc kubenswrapper[4795]: I1129 07:45:39.025162 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tr957"
Nov 29 07:45:39 crc kubenswrapper[4795]: I1129 07:45:39.027324 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Nov 29 07:45:39 crc kubenswrapper[4795]: I1129 07:45:39.033057 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tr957"]
Nov 29 07:45:39 crc kubenswrapper[4795]: I1129 07:45:39.135205 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8vhrd"]
Nov 29 07:45:39 crc kubenswrapper[4795]: I1129 07:45:39.226658 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8efac1d2-dfa3-48d7-9928-823442690b91-catalog-content\") pod \"redhat-operators-tr957\" (UID: \"8efac1d2-dfa3-48d7-9928-823442690b91\") " pod="openshift-marketplace/redhat-operators-tr957"
Nov 29 07:45:39 crc kubenswrapper[4795]: I1129 07:45:39.226789 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8efac1d2-dfa3-48d7-9928-823442690b91-utilities\") pod \"redhat-operators-tr957\" (UID: \"8efac1d2-dfa3-48d7-9928-823442690b91\") " pod="openshift-marketplace/redhat-operators-tr957"
Nov 29 07:45:39 crc kubenswrapper[4795]: I1129 07:45:39.226839 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc96d\" (UniqueName: \"kubernetes.io/projected/8efac1d2-dfa3-48d7-9928-823442690b91-kube-api-access-lc96d\") pod \"redhat-operators-tr957\" (UID: \"8efac1d2-dfa3-48d7-9928-823442690b91\") " pod="openshift-marketplace/redhat-operators-tr957"
Nov 29 07:45:39 crc kubenswrapper[4795]: I1129 07:45:39.327803 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8efac1d2-dfa3-48d7-9928-823442690b91-utilities\") pod \"redhat-operators-tr957\" (UID: \"8efac1d2-dfa3-48d7-9928-823442690b91\") " pod="openshift-marketplace/redhat-operators-tr957"
Nov 29 07:45:39 crc kubenswrapper[4795]: I1129 07:45:39.327871 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc96d\" (UniqueName: \"kubernetes.io/projected/8efac1d2-dfa3-48d7-9928-823442690b91-kube-api-access-lc96d\") pod \"redhat-operators-tr957\" (UID: \"8efac1d2-dfa3-48d7-9928-823442690b91\") " pod="openshift-marketplace/redhat-operators-tr957"
Nov 29 07:45:39 crc kubenswrapper[4795]: I1129 07:45:39.327935 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8efac1d2-dfa3-48d7-9928-823442690b91-catalog-content\") pod \"redhat-operators-tr957\" (UID: \"8efac1d2-dfa3-48d7-9928-823442690b91\") " pod="openshift-marketplace/redhat-operators-tr957"
Nov 29 07:45:39 crc kubenswrapper[4795]: I1129 07:45:39.328417 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8efac1d2-dfa3-48d7-9928-823442690b91-utilities\") pod \"redhat-operators-tr957\" (UID: \"8efac1d2-dfa3-48d7-9928-823442690b91\") " pod="openshift-marketplace/redhat-operators-tr957"
Nov 29 07:45:39 crc kubenswrapper[4795]: I1129 07:45:39.328573 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8efac1d2-dfa3-48d7-9928-823442690b91-catalog-content\") pod \"redhat-operators-tr957\" (UID: \"8efac1d2-dfa3-48d7-9928-823442690b91\") " pod="openshift-marketplace/redhat-operators-tr957"
Nov 29 07:45:39 crc kubenswrapper[4795]: I1129 07:45:39.347514 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc96d\" (UniqueName: \"kubernetes.io/projected/8efac1d2-dfa3-48d7-9928-823442690b91-kube-api-access-lc96d\") pod \"redhat-operators-tr957\" (UID: \"8efac1d2-dfa3-48d7-9928-823442690b91\") " pod="openshift-marketplace/redhat-operators-tr957"
Nov 29 07:45:39 crc kubenswrapper[4795]: I1129 07:45:39.641459 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tr957"
Nov 29 07:45:39 crc kubenswrapper[4795]: I1129 07:45:39.890989 4795 generic.go:334] "Generic (PLEG): container finished" podID="edee21c8-0c84-4772-826f-0fa6de3076ba" containerID="95150d3176fbe75776b369e65f57987955acd8280018e8f05d480d871a05a0b1" exitCode=0
Nov 29 07:45:39 crc kubenswrapper[4795]: I1129 07:45:39.891276 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vhrd" event={"ID":"edee21c8-0c84-4772-826f-0fa6de3076ba","Type":"ContainerDied","Data":"95150d3176fbe75776b369e65f57987955acd8280018e8f05d480d871a05a0b1"}
Nov 29 07:45:39 crc kubenswrapper[4795]: I1129 07:45:39.891308 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vhrd" event={"ID":"edee21c8-0c84-4772-826f-0fa6de3076ba","Type":"ContainerStarted","Data":"77511276675c3a6ba1c62f31f2a61462085698d7e55f76fd8c1486402dde5477"}
Nov 29 07:45:40 crc kubenswrapper[4795]: I1129 07:45:40.026114 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tr957"]
Nov 29 07:45:40 crc kubenswrapper[4795]: W1129 07:45:40.030718 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8efac1d2_dfa3_48d7_9928_823442690b91.slice/crio-b333c868583b643a44368a7025cf7daf72eb2215565c30e818a9f9ca3c39b4ce WatchSource:0}: Error finding container b333c868583b643a44368a7025cf7daf72eb2215565c30e818a9f9ca3c39b4ce: Status 404 returned error can't find the container with id b333c868583b643a44368a7025cf7daf72eb2215565c30e818a9f9ca3c39b4ce
Nov 29 07:45:40 crc kubenswrapper[4795]: I1129 07:45:40.822060 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2l6px"]
Nov 29 07:45:40 crc kubenswrapper[4795]: I1129 07:45:40.823316 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2l6px"
Nov 29 07:45:40 crc kubenswrapper[4795]: I1129 07:45:40.825024 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Nov 29 07:45:40 crc kubenswrapper[4795]: I1129 07:45:40.835497 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2l6px"]
Nov 29 07:45:40 crc kubenswrapper[4795]: I1129 07:45:40.847566 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl5pg\" (UniqueName: \"kubernetes.io/projected/5c01d062-6304-480c-a957-63313b52599a-kube-api-access-zl5pg\") pod \"community-operators-2l6px\" (UID: \"5c01d062-6304-480c-a957-63313b52599a\") " pod="openshift-marketplace/community-operators-2l6px"
Nov 29 07:45:40 crc kubenswrapper[4795]: I1129 07:45:40.847669 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c01d062-6304-480c-a957-63313b52599a-utilities\") pod \"community-operators-2l6px\" (UID: \"5c01d062-6304-480c-a957-63313b52599a\") " pod="openshift-marketplace/community-operators-2l6px"
Nov 29 07:45:40 crc kubenswrapper[4795]: I1129 07:45:40.847688 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c01d062-6304-480c-a957-63313b52599a-catalog-content\") pod \"community-operators-2l6px\" (UID: \"5c01d062-6304-480c-a957-63313b52599a\") " pod="openshift-marketplace/community-operators-2l6px"
Nov 29 07:45:40 crc kubenswrapper[4795]: I1129 07:45:40.909312 4795 generic.go:334] "Generic (PLEG): container finished" podID="8efac1d2-dfa3-48d7-9928-823442690b91" containerID="874188f0da4ca2159d129a0a588555617cadc11e4bc18749abca40c6683d3a8b" exitCode=0
Nov 29 07:45:40 crc kubenswrapper[4795]: I1129 07:45:40.909460 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tr957" event={"ID":"8efac1d2-dfa3-48d7-9928-823442690b91","Type":"ContainerDied","Data":"874188f0da4ca2159d129a0a588555617cadc11e4bc18749abca40c6683d3a8b"}
Nov 29 07:45:40 crc kubenswrapper[4795]: I1129 07:45:40.909506 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tr957" event={"ID":"8efac1d2-dfa3-48d7-9928-823442690b91","Type":"ContainerStarted","Data":"b333c868583b643a44368a7025cf7daf72eb2215565c30e818a9f9ca3c39b4ce"}
Nov 29 07:45:40 crc kubenswrapper[4795]: I1129 07:45:40.914771 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vhrd" event={"ID":"edee21c8-0c84-4772-826f-0fa6de3076ba","Type":"ContainerStarted","Data":"7428dda5452082c41efc44682ccadced4e1b64f3884f8b1a92e8fa6c46522c1e"}
Nov 29 07:45:40 crc kubenswrapper[4795]: I1129 07:45:40.949258 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c01d062-6304-480c-a957-63313b52599a-utilities\") pod \"community-operators-2l6px\" (UID: \"5c01d062-6304-480c-a957-63313b52599a\") " pod="openshift-marketplace/community-operators-2l6px"
Nov 29 07:45:40 crc kubenswrapper[4795]: I1129 07:45:40.949557 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c01d062-6304-480c-a957-63313b52599a-catalog-content\") pod \"community-operators-2l6px\" (UID: \"5c01d062-6304-480c-a957-63313b52599a\") " pod="openshift-marketplace/community-operators-2l6px"
Nov 29 07:45:40 crc kubenswrapper[4795]: I1129 07:45:40.949616 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl5pg\" (UniqueName: \"kubernetes.io/projected/5c01d062-6304-480c-a957-63313b52599a-kube-api-access-zl5pg\") pod \"community-operators-2l6px\" (UID: \"5c01d062-6304-480c-a957-63313b52599a\") " pod="openshift-marketplace/community-operators-2l6px"
Nov 29 07:45:40 crc kubenswrapper[4795]: I1129 07:45:40.949908 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c01d062-6304-480c-a957-63313b52599a-utilities\") pod \"community-operators-2l6px\" (UID: \"5c01d062-6304-480c-a957-63313b52599a\") " pod="openshift-marketplace/community-operators-2l6px"
Nov 29 07:45:40 crc kubenswrapper[4795]: I1129 07:45:40.950252 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c01d062-6304-480c-a957-63313b52599a-catalog-content\") pod \"community-operators-2l6px\" (UID: \"5c01d062-6304-480c-a957-63313b52599a\") " pod="openshift-marketplace/community-operators-2l6px"
Nov 29 07:45:40 crc kubenswrapper[4795]: I1129 07:45:40.968647 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl5pg\" (UniqueName: \"kubernetes.io/projected/5c01d062-6304-480c-a957-63313b52599a-kube-api-access-zl5pg\") pod \"community-operators-2l6px\" (UID: \"5c01d062-6304-480c-a957-63313b52599a\") " pod="openshift-marketplace/community-operators-2l6px"
Nov 29 07:45:41 crc kubenswrapper[4795]: I1129 07:45:41.139166 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2l6px"
Nov 29 07:45:41 crc kubenswrapper[4795]: I1129 07:45:41.432748 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kvpxx"]
Nov 29 07:45:41 crc kubenswrapper[4795]: I1129 07:45:41.434672 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kvpxx"
Nov 29 07:45:41 crc kubenswrapper[4795]: I1129 07:45:41.437568 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kvpxx"]
Nov 29 07:45:41 crc kubenswrapper[4795]: I1129 07:45:41.438364 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Nov 29 07:45:41 crc kubenswrapper[4795]: I1129 07:45:41.456660 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stgl4\" (UniqueName: \"kubernetes.io/projected/18e2d8ec-4739-4097-8772-98689f8d8626-kube-api-access-stgl4\") pod \"redhat-marketplace-kvpxx\" (UID: \"18e2d8ec-4739-4097-8772-98689f8d8626\") " pod="openshift-marketplace/redhat-marketplace-kvpxx"
Nov 29 07:45:41 crc kubenswrapper[4795]: I1129 07:45:41.456733 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18e2d8ec-4739-4097-8772-98689f8d8626-utilities\") pod \"redhat-marketplace-kvpxx\" (UID: \"18e2d8ec-4739-4097-8772-98689f8d8626\") " pod="openshift-marketplace/redhat-marketplace-kvpxx"
Nov 29 07:45:41 crc kubenswrapper[4795]: I1129 07:45:41.456770 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18e2d8ec-4739-4097-8772-98689f8d8626-catalog-content\") pod \"redhat-marketplace-kvpxx\" (UID: \"18e2d8ec-4739-4097-8772-98689f8d8626\") " pod="openshift-marketplace/redhat-marketplace-kvpxx"
Nov 29 07:45:41 crc kubenswrapper[4795]: I1129 07:45:41.552738 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2l6px"]
Nov 29 07:45:41 crc kubenswrapper[4795]: I1129 07:45:41.557619 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stgl4\" (UniqueName: \"kubernetes.io/projected/18e2d8ec-4739-4097-8772-98689f8d8626-kube-api-access-stgl4\") pod \"redhat-marketplace-kvpxx\" (UID: \"18e2d8ec-4739-4097-8772-98689f8d8626\") " pod="openshift-marketplace/redhat-marketplace-kvpxx"
Nov 29 07:45:41 crc kubenswrapper[4795]: I1129 07:45:41.557711 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18e2d8ec-4739-4097-8772-98689f8d8626-utilities\") pod \"redhat-marketplace-kvpxx\" (UID: \"18e2d8ec-4739-4097-8772-98689f8d8626\") " pod="openshift-marketplace/redhat-marketplace-kvpxx"
Nov 29 07:45:41 crc kubenswrapper[4795]: I1129 07:45:41.557754 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18e2d8ec-4739-4097-8772-98689f8d8626-catalog-content\") pod \"redhat-marketplace-kvpxx\" (UID: \"18e2d8ec-4739-4097-8772-98689f8d8626\") " pod="openshift-marketplace/redhat-marketplace-kvpxx"
Nov 29 07:45:41 crc kubenswrapper[4795]: I1129 07:45:41.558526 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18e2d8ec-4739-4097-8772-98689f8d8626-catalog-content\") pod \"redhat-marketplace-kvpxx\" (UID: \"18e2d8ec-4739-4097-8772-98689f8d8626\") " pod="openshift-marketplace/redhat-marketplace-kvpxx"
Nov 29 07:45:41 crc kubenswrapper[4795]: I1129 07:45:41.559115 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18e2d8ec-4739-4097-8772-98689f8d8626-utilities\") pod \"redhat-marketplace-kvpxx\" (UID: \"18e2d8ec-4739-4097-8772-98689f8d8626\") " pod="openshift-marketplace/redhat-marketplace-kvpxx"
Nov 29 07:45:41 crc kubenswrapper[4795]: I1129 07:45:41.580104 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stgl4\" (UniqueName: \"kubernetes.io/projected/18e2d8ec-4739-4097-8772-98689f8d8626-kube-api-access-stgl4\") pod \"redhat-marketplace-kvpxx\" (UID: \"18e2d8ec-4739-4097-8772-98689f8d8626\") " pod="openshift-marketplace/redhat-marketplace-kvpxx"
Nov 29 07:45:41 crc kubenswrapper[4795]: I1129 07:45:41.763474 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kvpxx"
Nov 29 07:45:41 crc kubenswrapper[4795]: I1129 07:45:41.927061 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tr957" event={"ID":"8efac1d2-dfa3-48d7-9928-823442690b91","Type":"ContainerStarted","Data":"eb8022e719820f42e672d976aa2760d001ff974d128b333d80db4c2ca1a40a51"}
Nov 29 07:45:41 crc kubenswrapper[4795]: I1129 07:45:41.937984 4795 generic.go:334] "Generic (PLEG): container finished" podID="5c01d062-6304-480c-a957-63313b52599a" containerID="e05a803c11a9096d91d6e0214e99af4e6957f46baa8d70a1a22b067fc1271411" exitCode=0
Nov 29 07:45:41 crc kubenswrapper[4795]: I1129 07:45:41.938133 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2l6px" event={"ID":"5c01d062-6304-480c-a957-63313b52599a","Type":"ContainerDied","Data":"e05a803c11a9096d91d6e0214e99af4e6957f46baa8d70a1a22b067fc1271411"}
Nov 29 07:45:41 crc kubenswrapper[4795]: I1129 07:45:41.938200 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2l6px" event={"ID":"5c01d062-6304-480c-a957-63313b52599a","Type":"ContainerStarted","Data":"2ebe426dd08beb190c6789d6df66068261fdf22b5a79c3a849ad12966571064b"}
Nov 29 07:45:41 crc kubenswrapper[4795]: I1129 07:45:41.941042 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 29 07:45:41 crc kubenswrapper[4795]: I1129 07:45:41.941078 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 29 07:45:41 crc kubenswrapper[4795]: I1129 07:45:41.942744 4795 generic.go:334] "Generic (PLEG): container finished" podID="edee21c8-0c84-4772-826f-0fa6de3076ba" containerID="7428dda5452082c41efc44682ccadced4e1b64f3884f8b1a92e8fa6c46522c1e" exitCode=0
Nov 29 07:45:41 crc kubenswrapper[4795]: I1129 07:45:41.942776 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vhrd" event={"ID":"edee21c8-0c84-4772-826f-0fa6de3076ba","Type":"ContainerDied","Data":"7428dda5452082c41efc44682ccadced4e1b64f3884f8b1a92e8fa6c46522c1e"}
Nov 29 07:45:41 crc kubenswrapper[4795]: I1129 07:45:41.942805 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vhrd" event={"ID":"edee21c8-0c84-4772-826f-0fa6de3076ba","Type":"ContainerStarted","Data":"06f8454aa985bd8e3a3e0b6144fa5c8e3b759317eaaa89a1d47dc59824f53045"}
Nov 29 07:45:41 crc kubenswrapper[4795]: I1129 07:45:41.988581 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8vhrd" podStartSLOduration=2.342483085 podStartE2EDuration="3.988561849s" podCreationTimestamp="2025-11-29 07:45:38 +0000 UTC" firstStartedPulling="2025-11-29 07:45:39.892665227 +0000 UTC m=+385.868241017" lastFinishedPulling="2025-11-29 07:45:41.538743991 +0000 UTC m=+387.514319781" observedRunningTime="2025-11-29 07:45:41.970002971 +0000 UTC m=+387.945578761" watchObservedRunningTime="2025-11-29 07:45:41.988561849 +0000 UTC m=+387.964137639"
Nov 29 07:45:42 crc kubenswrapper[4795]: I1129 07:45:42.177158 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kvpxx"]
Nov 29 07:45:42 crc kubenswrapper[4795]: W1129 07:45:42.182585 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18e2d8ec_4739_4097_8772_98689f8d8626.slice/crio-b269112910d63f34492651d89978f2dd363778df9c9b44f301c076edb3a086e8 WatchSource:0}: Error finding container b269112910d63f34492651d89978f2dd363778df9c9b44f301c076edb3a086e8: Status 404 returned error can't find the container with id b269112910d63f34492651d89978f2dd363778df9c9b44f301c076edb3a086e8
Nov 29 07:45:42 crc kubenswrapper[4795]: I1129 07:45:42.949957 4795 generic.go:334] "Generic (PLEG): container finished" podID="18e2d8ec-4739-4097-8772-98689f8d8626" containerID="ade0704ec1639e75ba7bb57556147979626cae39b48dc7dbffa915aa6d379349" exitCode=0
Nov 29 07:45:42 crc kubenswrapper[4795]: I1129 07:45:42.950077 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kvpxx" event={"ID":"18e2d8ec-4739-4097-8772-98689f8d8626","Type":"ContainerDied","Data":"ade0704ec1639e75ba7bb57556147979626cae39b48dc7dbffa915aa6d379349"}
Nov 29 07:45:42 crc kubenswrapper[4795]: I1129 07:45:42.950377 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kvpxx" event={"ID":"18e2d8ec-4739-4097-8772-98689f8d8626","Type":"ContainerStarted","Data":"b269112910d63f34492651d89978f2dd363778df9c9b44f301c076edb3a086e8"}
Nov 29 07:45:42 crc kubenswrapper[4795]: I1129 07:45:42.953106 4795 generic.go:334] "Generic (PLEG): container finished" podID="8efac1d2-dfa3-48d7-9928-823442690b91" containerID="eb8022e719820f42e672d976aa2760d001ff974d128b333d80db4c2ca1a40a51" exitCode=0
Nov 29 07:45:42 crc kubenswrapper[4795]: I1129 07:45:42.953190 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tr957" event={"ID":"8efac1d2-dfa3-48d7-9928-823442690b91","Type":"ContainerDied","Data":"eb8022e719820f42e672d976aa2760d001ff974d128b333d80db4c2ca1a40a51"}
Nov 29 07:45:42 crc kubenswrapper[4795]: I1129 07:45:42.960151 4795 generic.go:334] "Generic (PLEG): container finished" podID="5c01d062-6304-480c-a957-63313b52599a" containerID="bad2cd182310519d230ab4499ac6ba9287b35c07a986b124bc4b235cd69b9cb0" exitCode=0
Nov 29 07:45:42 crc kubenswrapper[4795]: I1129 07:45:42.960996 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2l6px" event={"ID":"5c01d062-6304-480c-a957-63313b52599a","Type":"ContainerDied","Data":"bad2cd182310519d230ab4499ac6ba9287b35c07a986b124bc4b235cd69b9cb0"}
Nov 29 07:45:43 crc kubenswrapper[4795]: I1129 07:45:43.969480 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2l6px" event={"ID":"5c01d062-6304-480c-a957-63313b52599a","Type":"ContainerStarted","Data":"83a9e75280f18ed2e7789110f4d5dd0bb23d67327ee66c3de906ffbc2a9d7492"}
Nov 29 07:45:43 crc kubenswrapper[4795]: I1129 07:45:43.974888 4795 generic.go:334] "Generic (PLEG): container finished" podID="18e2d8ec-4739-4097-8772-98689f8d8626" containerID="a2fe34ec61b41a5a7976bede23a3e1417996f4b8fc8db13fa35d8b8f9ee3573e" exitCode=0
Nov 29 07:45:43 crc kubenswrapper[4795]: I1129 07:45:43.974983 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kvpxx" event={"ID":"18e2d8ec-4739-4097-8772-98689f8d8626","Type":"ContainerDied","Data":"a2fe34ec61b41a5a7976bede23a3e1417996f4b8fc8db13fa35d8b8f9ee3573e"}
Nov 29 07:45:43 crc kubenswrapper[4795]: I1129 07:45:43.977822 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tr957" event={"ID":"8efac1d2-dfa3-48d7-9928-823442690b91","Type":"ContainerStarted","Data":"4ae400af6d2aee3d7388365f2afb2f24336f90bdb585d51c4363e603ce47a793"}
Nov 29 07:45:43 crc kubenswrapper[4795]: I1129 07:45:43.987506 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2l6px" podStartSLOduration=2.539402291 podStartE2EDuration="3.987492451s" podCreationTimestamp="2025-11-29 07:45:40 +0000 UTC" firstStartedPulling="2025-11-29 07:45:41.94010377 +0000 UTC m=+387.915679560" lastFinishedPulling="2025-11-29 07:45:43.38819393 +0000 UTC m=+389.363769720" observedRunningTime="2025-11-29 07:45:43.985059082 +0000 UTC m=+389.960634872" watchObservedRunningTime="2025-11-29 07:45:43.987492451 +0000 UTC m=+389.963068241"
Nov 29 07:45:44 crc kubenswrapper[4795]: I1129 07:45:44.024338 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tr957" podStartSLOduration=2.580681754 podStartE2EDuration="5.024320939s" podCreationTimestamp="2025-11-29 07:45:39 +0000 UTC" firstStartedPulling="2025-11-29 07:45:40.912540344 +0000 UTC m=+386.888116134" lastFinishedPulling="2025-11-29 07:45:43.356179529 +0000 UTC m=+389.331755319" observedRunningTime="2025-11-29 07:45:44.021037626 +0000 UTC m=+389.996613416" watchObservedRunningTime="2025-11-29 07:45:44.024320939 +0000 UTC m=+389.999896729"
Nov 29 07:45:44 crc kubenswrapper[4795]: I1129 07:45:44.984678 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kvpxx" event={"ID":"18e2d8ec-4739-4097-8772-98689f8d8626","Type":"ContainerStarted","Data":"38de575f48f22a14b2dc39f00c1411139d341d609d8e2324e9d8affae81c7f66"}
Nov 29 07:45:45 crc kubenswrapper[4795]: I1129 07:45:45.009296 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kvpxx" podStartSLOduration=2.441108347 podStartE2EDuration="4.009279213s" podCreationTimestamp="2025-11-29 07:45:41 +0000 UTC" firstStartedPulling="2025-11-29 07:45:42.951831626 +0000 UTC m=+388.927407416" lastFinishedPulling="2025-11-29 07:45:44.520002492 +0000 UTC m=+390.495578282" observedRunningTime="2025-11-29 07:45:45.003990613 +0000 UTC m=+390.979566423" watchObservedRunningTime="2025-11-29 07:45:45.009279213 +0000 UTC m=+390.984855003"
Nov 29 07:45:48 crc kubenswrapper[4795]: I1129 07:45:48.747405 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8vhrd"
Nov 29 07:45:48 crc kubenswrapper[4795]: I1129 07:45:48.748052 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8vhrd"
Nov 29 07:45:48 crc kubenswrapper[4795]: I1129 07:45:48.789182 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8vhrd"
Nov 29 07:45:49 crc kubenswrapper[4795]: I1129 07:45:49.044459 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8vhrd"
Nov 29 07:45:49 crc kubenswrapper[4795]: I1129 07:45:49.642245 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tr957"
Nov 29 07:45:49 crc kubenswrapper[4795]: I1129 07:45:49.642898 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tr957"
Nov 29 07:45:49 crc kubenswrapper[4795]: I1129 07:45:49.685888 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tr957"
Nov 29 07:45:50 crc kubenswrapper[4795]: I1129 07:45:50.044012 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tr957"
Nov 29 07:45:51 crc kubenswrapper[4795]: I1129 07:45:51.141754 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2l6px"
Nov 29 07:45:51 crc kubenswrapper[4795]: I1129 07:45:51.141822 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2l6px"
Nov 29 07:45:51 crc kubenswrapper[4795]: I1129 07:45:51.183512 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2l6px"
Nov 29 07:45:51 crc kubenswrapper[4795]: I1129 07:45:51.764625 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kvpxx"
Nov 29 07:45:51 crc kubenswrapper[4795]: I1129 07:45:51.764978 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kvpxx"
Nov 29 07:45:51 crc kubenswrapper[4795]: I1129 07:45:51.804082 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kvpxx"
Nov 29 07:45:52 crc kubenswrapper[4795]: I1129 07:45:52.067396 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kvpxx"
Nov 29 07:45:52 crc kubenswrapper[4795]: I1129 07:45:52.069218 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2l6px"
Nov 29 07:45:59 crc kubenswrapper[4795]: I1129 07:45:59.767447 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d5b76bdb6-57zxn"]
Nov 29 07:45:59 crc kubenswrapper[4795]: I1129 07:45:59.768297 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-57zxn" podUID="3e0e2228-e69b-4e4a-8dcd-675baddab6b8" containerName="route-controller-manager" containerID="cri-o://06f79af32b7417fd6634c18556fb2ef09fc6422f7957dbdda89b1aa3cdb8bbea" gracePeriod=30
Nov 29 07:46:00 crc kubenswrapper[4795]: I1129 07:46:00.058047 4795 generic.go:334] "Generic (PLEG): container finished" podID="3e0e2228-e69b-4e4a-8dcd-675baddab6b8" containerID="06f79af32b7417fd6634c18556fb2ef09fc6422f7957dbdda89b1aa3cdb8bbea" exitCode=0
Nov 29 07:46:00 crc kubenswrapper[4795]: I1129 07:46:00.058129 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-57zxn" event={"ID":"3e0e2228-e69b-4e4a-8dcd-675baddab6b8","Type":"ContainerDied","Data":"06f79af32b7417fd6634c18556fb2ef09fc6422f7957dbdda89b1aa3cdb8bbea"}
Nov 29 07:46:00 crc kubenswrapper[4795]: I1129 07:46:00.158076 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-57zxn"
Nov 29 07:46:00 crc kubenswrapper[4795]: I1129 07:46:00.210907 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e0e2228-e69b-4e4a-8dcd-675baddab6b8-client-ca\") pod \"3e0e2228-e69b-4e4a-8dcd-675baddab6b8\" (UID: \"3e0e2228-e69b-4e4a-8dcd-675baddab6b8\") "
Nov 29 07:46:00 crc kubenswrapper[4795]: I1129 07:46:00.210962 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e0e2228-e69b-4e4a-8dcd-675baddab6b8-config\") pod \"3e0e2228-e69b-4e4a-8dcd-675baddab6b8\" (UID: \"3e0e2228-e69b-4e4a-8dcd-675baddab6b8\") "
Nov 29 07:46:00 crc kubenswrapper[4795]: I1129 07:46:00.211060 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w766r\" (UniqueName: \"kubernetes.io/projected/3e0e2228-e69b-4e4a-8dcd-675baddab6b8-kube-api-access-w766r\") pod \"3e0e2228-e69b-4e4a-8dcd-675baddab6b8\" (UID: \"3e0e2228-e69b-4e4a-8dcd-675baddab6b8\") "
Nov 29 07:46:00 crc kubenswrapper[4795]: I1129 07:46:00.211095 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e0e2228-e69b-4e4a-8dcd-675baddab6b8-serving-cert\") pod \"3e0e2228-e69b-4e4a-8dcd-675baddab6b8\" (UID: \"3e0e2228-e69b-4e4a-8dcd-675baddab6b8\") "
Nov 29 07:46:00 crc kubenswrapper[4795]: I1129 07:46:00.212252 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e0e2228-e69b-4e4a-8dcd-675baddab6b8-client-ca" (OuterVolumeSpecName: "client-ca") pod "3e0e2228-e69b-4e4a-8dcd-675baddab6b8" (UID: "3e0e2228-e69b-4e4a-8dcd-675baddab6b8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:46:00 crc kubenswrapper[4795]: I1129 07:46:00.212663 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e0e2228-e69b-4e4a-8dcd-675baddab6b8-config" (OuterVolumeSpecName: "config") pod "3e0e2228-e69b-4e4a-8dcd-675baddab6b8" (UID: "3e0e2228-e69b-4e4a-8dcd-675baddab6b8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:46:00 crc kubenswrapper[4795]: I1129 07:46:00.217567 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e0e2228-e69b-4e4a-8dcd-675baddab6b8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3e0e2228-e69b-4e4a-8dcd-675baddab6b8" (UID: "3e0e2228-e69b-4e4a-8dcd-675baddab6b8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:46:00 crc kubenswrapper[4795]: I1129 07:46:00.217620 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e0e2228-e69b-4e4a-8dcd-675baddab6b8-kube-api-access-w766r" (OuterVolumeSpecName: "kube-api-access-w766r") pod "3e0e2228-e69b-4e4a-8dcd-675baddab6b8" (UID: "3e0e2228-e69b-4e4a-8dcd-675baddab6b8"). InnerVolumeSpecName "kube-api-access-w766r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:46:00 crc kubenswrapper[4795]: I1129 07:46:00.312201 4795 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e0e2228-e69b-4e4a-8dcd-675baddab6b8-client-ca\") on node \"crc\" DevicePath \"\""
Nov 29 07:46:00 crc kubenswrapper[4795]: I1129 07:46:00.312246 4795 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e0e2228-e69b-4e4a-8dcd-675baddab6b8-config\") on node \"crc\" DevicePath \"\""
Nov 29 07:46:00 crc kubenswrapper[4795]: I1129 07:46:00.312259 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w766r\" (UniqueName: \"kubernetes.io/projected/3e0e2228-e69b-4e4a-8dcd-675baddab6b8-kube-api-access-w766r\") on node \"crc\" DevicePath \"\""
Nov 29 07:46:00 crc kubenswrapper[4795]: I1129 07:46:00.312272 4795 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e0e2228-e69b-4e4a-8dcd-675baddab6b8-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 29 07:46:01 crc kubenswrapper[4795]: I1129 07:46:01.063421 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-57zxn" event={"ID":"3e0e2228-e69b-4e4a-8dcd-675baddab6b8","Type":"ContainerDied","Data":"e956174ec36b85fbb855f5f666787f7d60193de37e2909a5dba5f8158919722e"}
Nov 29 07:46:01 crc kubenswrapper[4795]: I1129 07:46:01.063487 4795 scope.go:117] "RemoveContainer" containerID="06f79af32b7417fd6634c18556fb2ef09fc6422f7957dbdda89b1aa3cdb8bbea"
Nov 29 07:46:01 crc kubenswrapper[4795]: I1129 07:46:01.063493 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-57zxn"
Nov 29 07:46:01 crc kubenswrapper[4795]: I1129 07:46:01.080562 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d5b76bdb6-57zxn"]
Nov 29 07:46:01 crc kubenswrapper[4795]: I1129 07:46:01.085085 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d5b76bdb6-57zxn"]
Nov 29 07:46:01 crc kubenswrapper[4795]: I1129 07:46:01.450102 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64cf5dcff4-52hkq"]
Nov 29 07:46:01 crc kubenswrapper[4795]: E1129 07:46:01.450661 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e0e2228-e69b-4e4a-8dcd-675baddab6b8" containerName="route-controller-manager"
Nov 29 07:46:01 crc kubenswrapper[4795]: I1129 07:46:01.450678 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e0e2228-e69b-4e4a-8dcd-675baddab6b8" containerName="route-controller-manager"
Nov 29 07:46:01 crc kubenswrapper[4795]: I1129 07:46:01.450795 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e0e2228-e69b-4e4a-8dcd-675baddab6b8" containerName="route-controller-manager"
Nov 29 07:46:01 crc kubenswrapper[4795]: I1129 07:46:01.451332 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64cf5dcff4-52hkq"
Nov 29 07:46:01 crc kubenswrapper[4795]: I1129 07:46:01.453289 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Nov 29 07:46:01 crc kubenswrapper[4795]: I1129 07:46:01.454320 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Nov 29 07:46:01 crc kubenswrapper[4795]: I1129 07:46:01.454334 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Nov 29 07:46:01 crc kubenswrapper[4795]: I1129 07:46:01.454347 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Nov 29 07:46:01 crc kubenswrapper[4795]: I1129 07:46:01.454463 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Nov 29 07:46:01 crc kubenswrapper[4795]: I1129 07:46:01.454560 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Nov 29 07:46:01 crc kubenswrapper[4795]: I1129 07:46:01.467112 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64cf5dcff4-52hkq"]
Nov 29 07:46:01 crc kubenswrapper[4795]: I1129 07:46:01.525406 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5102a82b-978d-49e1-9196-afa5f15dc5b6-config\") pod \"route-controller-manager-64cf5dcff4-52hkq\" (UID: \"5102a82b-978d-49e1-9196-afa5f15dc5b6\") " pod="openshift-route-controller-manager/route-controller-manager-64cf5dcff4-52hkq"
Nov 29 07:46:01 crc kubenswrapper[4795]: I1129 07:46:01.525463 4795 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5102a82b-978d-49e1-9196-afa5f15dc5b6-serving-cert\") pod \"route-controller-manager-64cf5dcff4-52hkq\" (UID: \"5102a82b-978d-49e1-9196-afa5f15dc5b6\") " pod="openshift-route-controller-manager/route-controller-manager-64cf5dcff4-52hkq" Nov 29 07:46:01 crc kubenswrapper[4795]: I1129 07:46:01.525507 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbvkk\" (UniqueName: \"kubernetes.io/projected/5102a82b-978d-49e1-9196-afa5f15dc5b6-kube-api-access-nbvkk\") pod \"route-controller-manager-64cf5dcff4-52hkq\" (UID: \"5102a82b-978d-49e1-9196-afa5f15dc5b6\") " pod="openshift-route-controller-manager/route-controller-manager-64cf5dcff4-52hkq" Nov 29 07:46:01 crc kubenswrapper[4795]: I1129 07:46:01.525553 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5102a82b-978d-49e1-9196-afa5f15dc5b6-client-ca\") pod \"route-controller-manager-64cf5dcff4-52hkq\" (UID: \"5102a82b-978d-49e1-9196-afa5f15dc5b6\") " pod="openshift-route-controller-manager/route-controller-manager-64cf5dcff4-52hkq" Nov 29 07:46:01 crc kubenswrapper[4795]: I1129 07:46:01.628106 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5102a82b-978d-49e1-9196-afa5f15dc5b6-config\") pod \"route-controller-manager-64cf5dcff4-52hkq\" (UID: \"5102a82b-978d-49e1-9196-afa5f15dc5b6\") " pod="openshift-route-controller-manager/route-controller-manager-64cf5dcff4-52hkq" Nov 29 07:46:01 crc kubenswrapper[4795]: I1129 07:46:01.628222 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5102a82b-978d-49e1-9196-afa5f15dc5b6-serving-cert\") pod 
\"route-controller-manager-64cf5dcff4-52hkq\" (UID: \"5102a82b-978d-49e1-9196-afa5f15dc5b6\") " pod="openshift-route-controller-manager/route-controller-manager-64cf5dcff4-52hkq" Nov 29 07:46:01 crc kubenswrapper[4795]: I1129 07:46:01.628347 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbvkk\" (UniqueName: \"kubernetes.io/projected/5102a82b-978d-49e1-9196-afa5f15dc5b6-kube-api-access-nbvkk\") pod \"route-controller-manager-64cf5dcff4-52hkq\" (UID: \"5102a82b-978d-49e1-9196-afa5f15dc5b6\") " pod="openshift-route-controller-manager/route-controller-manager-64cf5dcff4-52hkq" Nov 29 07:46:01 crc kubenswrapper[4795]: I1129 07:46:01.628502 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5102a82b-978d-49e1-9196-afa5f15dc5b6-client-ca\") pod \"route-controller-manager-64cf5dcff4-52hkq\" (UID: \"5102a82b-978d-49e1-9196-afa5f15dc5b6\") " pod="openshift-route-controller-manager/route-controller-manager-64cf5dcff4-52hkq" Nov 29 07:46:01 crc kubenswrapper[4795]: I1129 07:46:01.629363 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5102a82b-978d-49e1-9196-afa5f15dc5b6-client-ca\") pod \"route-controller-manager-64cf5dcff4-52hkq\" (UID: \"5102a82b-978d-49e1-9196-afa5f15dc5b6\") " pod="openshift-route-controller-manager/route-controller-manager-64cf5dcff4-52hkq" Nov 29 07:46:01 crc kubenswrapper[4795]: I1129 07:46:01.631168 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5102a82b-978d-49e1-9196-afa5f15dc5b6-config\") pod \"route-controller-manager-64cf5dcff4-52hkq\" (UID: \"5102a82b-978d-49e1-9196-afa5f15dc5b6\") " pod="openshift-route-controller-manager/route-controller-manager-64cf5dcff4-52hkq" Nov 29 07:46:01 crc kubenswrapper[4795]: I1129 07:46:01.637307 4795 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5102a82b-978d-49e1-9196-afa5f15dc5b6-serving-cert\") pod \"route-controller-manager-64cf5dcff4-52hkq\" (UID: \"5102a82b-978d-49e1-9196-afa5f15dc5b6\") " pod="openshift-route-controller-manager/route-controller-manager-64cf5dcff4-52hkq" Nov 29 07:46:01 crc kubenswrapper[4795]: I1129 07:46:01.646311 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbvkk\" (UniqueName: \"kubernetes.io/projected/5102a82b-978d-49e1-9196-afa5f15dc5b6-kube-api-access-nbvkk\") pod \"route-controller-manager-64cf5dcff4-52hkq\" (UID: \"5102a82b-978d-49e1-9196-afa5f15dc5b6\") " pod="openshift-route-controller-manager/route-controller-manager-64cf5dcff4-52hkq" Nov 29 07:46:01 crc kubenswrapper[4795]: I1129 07:46:01.771971 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64cf5dcff4-52hkq" Nov 29 07:46:02 crc kubenswrapper[4795]: I1129 07:46:02.178253 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64cf5dcff4-52hkq"] Nov 29 07:46:02 crc kubenswrapper[4795]: I1129 07:46:02.282311 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e0e2228-e69b-4e4a-8dcd-675baddab6b8" path="/var/lib/kubelet/pods/3e0e2228-e69b-4e4a-8dcd-675baddab6b8/volumes" Nov 29 07:46:03 crc kubenswrapper[4795]: I1129 07:46:03.075434 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64cf5dcff4-52hkq" event={"ID":"5102a82b-978d-49e1-9196-afa5f15dc5b6","Type":"ContainerStarted","Data":"fcd8cb69cc1448b93ee61e3379c08e3647eb494d5fe8e9747d79e25f4f697281"} Nov 29 07:46:03 crc kubenswrapper[4795]: I1129 07:46:03.075491 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-64cf5dcff4-52hkq" event={"ID":"5102a82b-978d-49e1-9196-afa5f15dc5b6","Type":"ContainerStarted","Data":"254a787019523f909dc47109de563a1383c8900182c742e0d8d233f7aa0afe25"} Nov 29 07:46:03 crc kubenswrapper[4795]: I1129 07:46:03.075713 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-64cf5dcff4-52hkq" Nov 29 07:46:03 crc kubenswrapper[4795]: I1129 07:46:03.082096 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-64cf5dcff4-52hkq" Nov 29 07:46:03 crc kubenswrapper[4795]: I1129 07:46:03.094319 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-64cf5dcff4-52hkq" podStartSLOduration=4.094282152 podStartE2EDuration="4.094282152s" podCreationTimestamp="2025-11-29 07:45:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:46:03.09035266 +0000 UTC m=+409.065928470" watchObservedRunningTime="2025-11-29 07:46:03.094282152 +0000 UTC m=+409.069857942" Nov 29 07:46:04 crc kubenswrapper[4795]: I1129 07:46:04.680547 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nz57z"] Nov 29 07:46:04 crc kubenswrapper[4795]: I1129 07:46:04.681832 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-nz57z" Nov 29 07:46:04 crc kubenswrapper[4795]: I1129 07:46:04.712714 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nz57z"] Nov 29 07:46:04 crc kubenswrapper[4795]: I1129 07:46:04.768618 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/de3a58b3-0247-4835-9536-033b8bf30457-registry-tls\") pod \"image-registry-66df7c8f76-nz57z\" (UID: \"de3a58b3-0247-4835-9536-033b8bf30457\") " pod="openshift-image-registry/image-registry-66df7c8f76-nz57z" Nov 29 07:46:04 crc kubenswrapper[4795]: I1129 07:46:04.768676 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de3a58b3-0247-4835-9536-033b8bf30457-trusted-ca\") pod \"image-registry-66df7c8f76-nz57z\" (UID: \"de3a58b3-0247-4835-9536-033b8bf30457\") " pod="openshift-image-registry/image-registry-66df7c8f76-nz57z" Nov 29 07:46:04 crc kubenswrapper[4795]: I1129 07:46:04.768696 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/de3a58b3-0247-4835-9536-033b8bf30457-registry-certificates\") pod \"image-registry-66df7c8f76-nz57z\" (UID: \"de3a58b3-0247-4835-9536-033b8bf30457\") " pod="openshift-image-registry/image-registry-66df7c8f76-nz57z" Nov 29 07:46:04 crc kubenswrapper[4795]: I1129 07:46:04.768720 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/de3a58b3-0247-4835-9536-033b8bf30457-bound-sa-token\") pod \"image-registry-66df7c8f76-nz57z\" (UID: \"de3a58b3-0247-4835-9536-033b8bf30457\") " pod="openshift-image-registry/image-registry-66df7c8f76-nz57z" Nov 29 
07:46:04 crc kubenswrapper[4795]: I1129 07:46:04.768744 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgfj4\" (UniqueName: \"kubernetes.io/projected/de3a58b3-0247-4835-9536-033b8bf30457-kube-api-access-tgfj4\") pod \"image-registry-66df7c8f76-nz57z\" (UID: \"de3a58b3-0247-4835-9536-033b8bf30457\") " pod="openshift-image-registry/image-registry-66df7c8f76-nz57z" Nov 29 07:46:04 crc kubenswrapper[4795]: I1129 07:46:04.768760 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/de3a58b3-0247-4835-9536-033b8bf30457-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nz57z\" (UID: \"de3a58b3-0247-4835-9536-033b8bf30457\") " pod="openshift-image-registry/image-registry-66df7c8f76-nz57z" Nov 29 07:46:04 crc kubenswrapper[4795]: I1129 07:46:04.768825 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/de3a58b3-0247-4835-9536-033b8bf30457-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nz57z\" (UID: \"de3a58b3-0247-4835-9536-033b8bf30457\") " pod="openshift-image-registry/image-registry-66df7c8f76-nz57z" Nov 29 07:46:04 crc kubenswrapper[4795]: I1129 07:46:04.768864 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-nz57z\" (UID: \"de3a58b3-0247-4835-9536-033b8bf30457\") " pod="openshift-image-registry/image-registry-66df7c8f76-nz57z" Nov 29 07:46:04 crc kubenswrapper[4795]: I1129 07:46:04.804467 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-nz57z\" (UID: \"de3a58b3-0247-4835-9536-033b8bf30457\") " pod="openshift-image-registry/image-registry-66df7c8f76-nz57z" Nov 29 07:46:04 crc kubenswrapper[4795]: I1129 07:46:04.870195 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/de3a58b3-0247-4835-9536-033b8bf30457-registry-tls\") pod \"image-registry-66df7c8f76-nz57z\" (UID: \"de3a58b3-0247-4835-9536-033b8bf30457\") " pod="openshift-image-registry/image-registry-66df7c8f76-nz57z" Nov 29 07:46:04 crc kubenswrapper[4795]: I1129 07:46:04.870260 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de3a58b3-0247-4835-9536-033b8bf30457-trusted-ca\") pod \"image-registry-66df7c8f76-nz57z\" (UID: \"de3a58b3-0247-4835-9536-033b8bf30457\") " pod="openshift-image-registry/image-registry-66df7c8f76-nz57z" Nov 29 07:46:04 crc kubenswrapper[4795]: I1129 07:46:04.870284 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/de3a58b3-0247-4835-9536-033b8bf30457-registry-certificates\") pod \"image-registry-66df7c8f76-nz57z\" (UID: \"de3a58b3-0247-4835-9536-033b8bf30457\") " pod="openshift-image-registry/image-registry-66df7c8f76-nz57z" Nov 29 07:46:04 crc kubenswrapper[4795]: I1129 07:46:04.870312 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/de3a58b3-0247-4835-9536-033b8bf30457-bound-sa-token\") pod \"image-registry-66df7c8f76-nz57z\" (UID: \"de3a58b3-0247-4835-9536-033b8bf30457\") " pod="openshift-image-registry/image-registry-66df7c8f76-nz57z" Nov 29 07:46:04 crc kubenswrapper[4795]: I1129 07:46:04.870331 4795 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-tgfj4\" (UniqueName: \"kubernetes.io/projected/de3a58b3-0247-4835-9536-033b8bf30457-kube-api-access-tgfj4\") pod \"image-registry-66df7c8f76-nz57z\" (UID: \"de3a58b3-0247-4835-9536-033b8bf30457\") " pod="openshift-image-registry/image-registry-66df7c8f76-nz57z" Nov 29 07:46:04 crc kubenswrapper[4795]: I1129 07:46:04.870347 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/de3a58b3-0247-4835-9536-033b8bf30457-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nz57z\" (UID: \"de3a58b3-0247-4835-9536-033b8bf30457\") " pod="openshift-image-registry/image-registry-66df7c8f76-nz57z" Nov 29 07:46:04 crc kubenswrapper[4795]: I1129 07:46:04.870381 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/de3a58b3-0247-4835-9536-033b8bf30457-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nz57z\" (UID: \"de3a58b3-0247-4835-9536-033b8bf30457\") " pod="openshift-image-registry/image-registry-66df7c8f76-nz57z" Nov 29 07:46:04 crc kubenswrapper[4795]: I1129 07:46:04.871470 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/de3a58b3-0247-4835-9536-033b8bf30457-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nz57z\" (UID: \"de3a58b3-0247-4835-9536-033b8bf30457\") " pod="openshift-image-registry/image-registry-66df7c8f76-nz57z" Nov 29 07:46:04 crc kubenswrapper[4795]: I1129 07:46:04.871834 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de3a58b3-0247-4835-9536-033b8bf30457-trusted-ca\") pod \"image-registry-66df7c8f76-nz57z\" (UID: \"de3a58b3-0247-4835-9536-033b8bf30457\") " pod="openshift-image-registry/image-registry-66df7c8f76-nz57z" Nov 29 07:46:04 
crc kubenswrapper[4795]: I1129 07:46:04.871880 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/de3a58b3-0247-4835-9536-033b8bf30457-registry-certificates\") pod \"image-registry-66df7c8f76-nz57z\" (UID: \"de3a58b3-0247-4835-9536-033b8bf30457\") " pod="openshift-image-registry/image-registry-66df7c8f76-nz57z" Nov 29 07:46:04 crc kubenswrapper[4795]: I1129 07:46:04.876310 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/de3a58b3-0247-4835-9536-033b8bf30457-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nz57z\" (UID: \"de3a58b3-0247-4835-9536-033b8bf30457\") " pod="openshift-image-registry/image-registry-66df7c8f76-nz57z" Nov 29 07:46:04 crc kubenswrapper[4795]: I1129 07:46:04.876560 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/de3a58b3-0247-4835-9536-033b8bf30457-registry-tls\") pod \"image-registry-66df7c8f76-nz57z\" (UID: \"de3a58b3-0247-4835-9536-033b8bf30457\") " pod="openshift-image-registry/image-registry-66df7c8f76-nz57z" Nov 29 07:46:04 crc kubenswrapper[4795]: I1129 07:46:04.886309 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/de3a58b3-0247-4835-9536-033b8bf30457-bound-sa-token\") pod \"image-registry-66df7c8f76-nz57z\" (UID: \"de3a58b3-0247-4835-9536-033b8bf30457\") " pod="openshift-image-registry/image-registry-66df7c8f76-nz57z" Nov 29 07:46:04 crc kubenswrapper[4795]: I1129 07:46:04.886531 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgfj4\" (UniqueName: \"kubernetes.io/projected/de3a58b3-0247-4835-9536-033b8bf30457-kube-api-access-tgfj4\") pod \"image-registry-66df7c8f76-nz57z\" (UID: \"de3a58b3-0247-4835-9536-033b8bf30457\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-nz57z" Nov 29 07:46:04 crc kubenswrapper[4795]: I1129 07:46:04.997189 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-nz57z" Nov 29 07:46:05 crc kubenswrapper[4795]: I1129 07:46:05.391871 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nz57z"] Nov 29 07:46:05 crc kubenswrapper[4795]: W1129 07:46:05.400696 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde3a58b3_0247_4835_9536_033b8bf30457.slice/crio-01c90b712d24b38f797d1e971c2ffeaf5119c99ee8d398b1da3035505213e01e WatchSource:0}: Error finding container 01c90b712d24b38f797d1e971c2ffeaf5119c99ee8d398b1da3035505213e01e: Status 404 returned error can't find the container with id 01c90b712d24b38f797d1e971c2ffeaf5119c99ee8d398b1da3035505213e01e Nov 29 07:46:06 crc kubenswrapper[4795]: I1129 07:46:06.093839 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-nz57z" event={"ID":"de3a58b3-0247-4835-9536-033b8bf30457","Type":"ContainerStarted","Data":"46da080e52c7b4cb14384459f8d6c8eb2eaa47e6c200280bc01c5cae5f05a780"} Nov 29 07:46:06 crc kubenswrapper[4795]: I1129 07:46:06.093934 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-nz57z" event={"ID":"de3a58b3-0247-4835-9536-033b8bf30457","Type":"ContainerStarted","Data":"01c90b712d24b38f797d1e971c2ffeaf5119c99ee8d398b1da3035505213e01e"} Nov 29 07:46:06 crc kubenswrapper[4795]: I1129 07:46:06.093966 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-nz57z" Nov 29 07:46:06 crc kubenswrapper[4795]: I1129 07:46:06.117369 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/image-registry-66df7c8f76-nz57z" podStartSLOduration=2.117347208 podStartE2EDuration="2.117347208s" podCreationTimestamp="2025-11-29 07:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:46:06.115075484 +0000 UTC m=+412.090651274" watchObservedRunningTime="2025-11-29 07:46:06.117347208 +0000 UTC m=+412.092922998" Nov 29 07:46:06 crc kubenswrapper[4795]: I1129 07:46:06.289180 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-7fzvx"] Nov 29 07:46:06 crc kubenswrapper[4795]: I1129 07:46:06.290383 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7fzvx" Nov 29 07:46:06 crc kubenswrapper[4795]: I1129 07:46:06.294507 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Nov 29 07:46:06 crc kubenswrapper[4795]: I1129 07:46:06.294883 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Nov 29 07:46:06 crc kubenswrapper[4795]: I1129 07:46:06.295145 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Nov 29 07:46:06 crc kubenswrapper[4795]: I1129 07:46:06.296434 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Nov 29 07:46:06 crc kubenswrapper[4795]: I1129 07:46:06.296735 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Nov 29 07:46:06 crc kubenswrapper[4795]: I1129 07:46:06.301448 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-7fzvx"] Nov 29 07:46:06 crc kubenswrapper[4795]: 
I1129 07:46:06.391039 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/15a1e849-7ac7-4fbd-9019-9eb0c98b4bc5-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-7fzvx\" (UID: \"15a1e849-7ac7-4fbd-9019-9eb0c98b4bc5\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7fzvx" Nov 29 07:46:06 crc kubenswrapper[4795]: I1129 07:46:06.391121 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/15a1e849-7ac7-4fbd-9019-9eb0c98b4bc5-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-7fzvx\" (UID: \"15a1e849-7ac7-4fbd-9019-9eb0c98b4bc5\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7fzvx" Nov 29 07:46:06 crc kubenswrapper[4795]: I1129 07:46:06.391167 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmbst\" (UniqueName: \"kubernetes.io/projected/15a1e849-7ac7-4fbd-9019-9eb0c98b4bc5-kube-api-access-nmbst\") pod \"cluster-monitoring-operator-6d5b84845-7fzvx\" (UID: \"15a1e849-7ac7-4fbd-9019-9eb0c98b4bc5\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7fzvx" Nov 29 07:46:06 crc kubenswrapper[4795]: I1129 07:46:06.492449 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/15a1e849-7ac7-4fbd-9019-9eb0c98b4bc5-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-7fzvx\" (UID: \"15a1e849-7ac7-4fbd-9019-9eb0c98b4bc5\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7fzvx" Nov 29 07:46:06 crc kubenswrapper[4795]: I1129 07:46:06.492617 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: 
\"kubernetes.io/configmap/15a1e849-7ac7-4fbd-9019-9eb0c98b4bc5-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-7fzvx\" (UID: \"15a1e849-7ac7-4fbd-9019-9eb0c98b4bc5\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7fzvx" Nov 29 07:46:06 crc kubenswrapper[4795]: I1129 07:46:06.492665 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmbst\" (UniqueName: \"kubernetes.io/projected/15a1e849-7ac7-4fbd-9019-9eb0c98b4bc5-kube-api-access-nmbst\") pod \"cluster-monitoring-operator-6d5b84845-7fzvx\" (UID: \"15a1e849-7ac7-4fbd-9019-9eb0c98b4bc5\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7fzvx" Nov 29 07:46:06 crc kubenswrapper[4795]: I1129 07:46:06.494043 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/15a1e849-7ac7-4fbd-9019-9eb0c98b4bc5-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-7fzvx\" (UID: \"15a1e849-7ac7-4fbd-9019-9eb0c98b4bc5\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7fzvx" Nov 29 07:46:06 crc kubenswrapper[4795]: I1129 07:46:06.499669 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/15a1e849-7ac7-4fbd-9019-9eb0c98b4bc5-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-7fzvx\" (UID: \"15a1e849-7ac7-4fbd-9019-9eb0c98b4bc5\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7fzvx" Nov 29 07:46:06 crc kubenswrapper[4795]: I1129 07:46:06.512640 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmbst\" (UniqueName: \"kubernetes.io/projected/15a1e849-7ac7-4fbd-9019-9eb0c98b4bc5-kube-api-access-nmbst\") pod \"cluster-monitoring-operator-6d5b84845-7fzvx\" (UID: \"15a1e849-7ac7-4fbd-9019-9eb0c98b4bc5\") " 
pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7fzvx" Nov 29 07:46:06 crc kubenswrapper[4795]: I1129 07:46:06.617575 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7fzvx" Nov 29 07:46:07 crc kubenswrapper[4795]: I1129 07:46:07.054710 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-7fzvx"] Nov 29 07:46:07 crc kubenswrapper[4795]: W1129 07:46:07.059565 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15a1e849_7ac7_4fbd_9019_9eb0c98b4bc5.slice/crio-f948f43c52224c0e845fe730b1315a5d0192a7dd65d86f695516b538d5cc7578 WatchSource:0}: Error finding container f948f43c52224c0e845fe730b1315a5d0192a7dd65d86f695516b538d5cc7578: Status 404 returned error can't find the container with id f948f43c52224c0e845fe730b1315a5d0192a7dd65d86f695516b538d5cc7578 Nov 29 07:46:07 crc kubenswrapper[4795]: I1129 07:46:07.103864 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7fzvx" event={"ID":"15a1e849-7ac7-4fbd-9019-9eb0c98b4bc5","Type":"ContainerStarted","Data":"f948f43c52224c0e845fe730b1315a5d0192a7dd65d86f695516b538d5cc7578"} Nov 29 07:46:09 crc kubenswrapper[4795]: I1129 07:46:09.949023 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-wwb76"] Nov 29 07:46:09 crc kubenswrapper[4795]: I1129 07:46:09.950114 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-wwb76" Nov 29 07:46:09 crc kubenswrapper[4795]: I1129 07:46:09.952080 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Nov 29 07:46:09 crc kubenswrapper[4795]: I1129 07:46:09.953631 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-5kvgw" Nov 29 07:46:09 crc kubenswrapper[4795]: I1129 07:46:09.957230 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-wwb76"] Nov 29 07:46:10 crc kubenswrapper[4795]: I1129 07:46:10.132957 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7fzvx" event={"ID":"15a1e849-7ac7-4fbd-9019-9eb0c98b4bc5","Type":"ContainerStarted","Data":"189a05a25cad068b363fe62d7dd89a7e00ccc04fc9101aee993623b3983eee5d"} Nov 29 07:46:10 crc kubenswrapper[4795]: I1129 07:46:10.148476 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/3d14ee7f-15a2-4335-88e7-f0dd1ac1ac55-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-wwb76\" (UID: \"3d14ee7f-15a2-4335-88e7-f0dd1ac1ac55\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-wwb76" Nov 29 07:46:10 crc kubenswrapper[4795]: I1129 07:46:10.152151 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7fzvx" podStartSLOduration=1.9714377490000001 podStartE2EDuration="4.152134396s" podCreationTimestamp="2025-11-29 07:46:06 +0000 UTC" firstStartedPulling="2025-11-29 07:46:07.061676383 +0000 UTC m=+413.037252163" lastFinishedPulling="2025-11-29 07:46:09.24237302 +0000 UTC 
m=+415.217948810" observedRunningTime="2025-11-29 07:46:10.149277494 +0000 UTC m=+416.124853284" watchObservedRunningTime="2025-11-29 07:46:10.152134396 +0000 UTC m=+416.127710186" Nov 29 07:46:10 crc kubenswrapper[4795]: I1129 07:46:10.249331 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/3d14ee7f-15a2-4335-88e7-f0dd1ac1ac55-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-wwb76\" (UID: \"3d14ee7f-15a2-4335-88e7-f0dd1ac1ac55\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-wwb76" Nov 29 07:46:10 crc kubenswrapper[4795]: I1129 07:46:10.257384 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/3d14ee7f-15a2-4335-88e7-f0dd1ac1ac55-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-wwb76\" (UID: \"3d14ee7f-15a2-4335-88e7-f0dd1ac1ac55\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-wwb76" Nov 29 07:46:10 crc kubenswrapper[4795]: I1129 07:46:10.266833 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-wwb76" Nov 29 07:46:10 crc kubenswrapper[4795]: I1129 07:46:10.738356 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-wwb76"] Nov 29 07:46:11 crc kubenswrapper[4795]: I1129 07:46:11.143950 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-wwb76" event={"ID":"3d14ee7f-15a2-4335-88e7-f0dd1ac1ac55","Type":"ContainerStarted","Data":"25eb85a4d6e51877fae69e6300e2a002e6dd787b658dfbe96c2dd9310bf68e71"} Nov 29 07:46:11 crc kubenswrapper[4795]: I1129 07:46:11.941671 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:46:11 crc kubenswrapper[4795]: I1129 07:46:11.941780 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:46:13 crc kubenswrapper[4795]: I1129 07:46:13.158273 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-wwb76" event={"ID":"3d14ee7f-15a2-4335-88e7-f0dd1ac1ac55","Type":"ContainerStarted","Data":"be291ad48bf55ea9724e3afdc6af50d5bc211733c39e131e9277db32ffd29d85"} Nov 29 07:46:13 crc kubenswrapper[4795]: I1129 07:46:13.158751 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-wwb76" Nov 29 07:46:13 crc 
kubenswrapper[4795]: I1129 07:46:13.165123 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-wwb76" Nov 29 07:46:13 crc kubenswrapper[4795]: I1129 07:46:13.176828 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-wwb76" podStartSLOduration=2.739633931 podStartE2EDuration="4.176809568s" podCreationTimestamp="2025-11-29 07:46:09 +0000 UTC" firstStartedPulling="2025-11-29 07:46:10.748946322 +0000 UTC m=+416.724522112" lastFinishedPulling="2025-11-29 07:46:12.186121949 +0000 UTC m=+418.161697749" observedRunningTime="2025-11-29 07:46:13.175658096 +0000 UTC m=+419.151233896" watchObservedRunningTime="2025-11-29 07:46:13.176809568 +0000 UTC m=+419.152385378" Nov 29 07:46:14 crc kubenswrapper[4795]: I1129 07:46:14.056733 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-bg4h8"] Nov 29 07:46:14 crc kubenswrapper[4795]: I1129 07:46:14.058473 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-bg4h8" Nov 29 07:46:14 crc kubenswrapper[4795]: I1129 07:46:14.059711 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-bg4h8"] Nov 29 07:46:14 crc kubenswrapper[4795]: I1129 07:46:14.063971 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-twzlg" Nov 29 07:46:14 crc kubenswrapper[4795]: I1129 07:46:14.064145 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Nov 29 07:46:14 crc kubenswrapper[4795]: I1129 07:46:14.064193 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Nov 29 07:46:14 crc kubenswrapper[4795]: I1129 07:46:14.064511 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Nov 29 07:46:14 crc kubenswrapper[4795]: I1129 07:46:14.125738 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5ld8\" (UniqueName: \"kubernetes.io/projected/ca10b605-0efc-4b28-afdd-f2a5b1b617fd-kube-api-access-b5ld8\") pod \"prometheus-operator-db54df47d-bg4h8\" (UID: \"ca10b605-0efc-4b28-afdd-f2a5b1b617fd\") " pod="openshift-monitoring/prometheus-operator-db54df47d-bg4h8" Nov 29 07:46:14 crc kubenswrapper[4795]: I1129 07:46:14.125795 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ca10b605-0efc-4b28-afdd-f2a5b1b617fd-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-bg4h8\" (UID: \"ca10b605-0efc-4b28-afdd-f2a5b1b617fd\") " pod="openshift-monitoring/prometheus-operator-db54df47d-bg4h8" Nov 29 07:46:14 crc kubenswrapper[4795]: I1129 
07:46:14.125820 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ca10b605-0efc-4b28-afdd-f2a5b1b617fd-metrics-client-ca\") pod \"prometheus-operator-db54df47d-bg4h8\" (UID: \"ca10b605-0efc-4b28-afdd-f2a5b1b617fd\") " pod="openshift-monitoring/prometheus-operator-db54df47d-bg4h8" Nov 29 07:46:14 crc kubenswrapper[4795]: I1129 07:46:14.125837 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ca10b605-0efc-4b28-afdd-f2a5b1b617fd-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-bg4h8\" (UID: \"ca10b605-0efc-4b28-afdd-f2a5b1b617fd\") " pod="openshift-monitoring/prometheus-operator-db54df47d-bg4h8" Nov 29 07:46:14 crc kubenswrapper[4795]: I1129 07:46:14.227629 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5ld8\" (UniqueName: \"kubernetes.io/projected/ca10b605-0efc-4b28-afdd-f2a5b1b617fd-kube-api-access-b5ld8\") pod \"prometheus-operator-db54df47d-bg4h8\" (UID: \"ca10b605-0efc-4b28-afdd-f2a5b1b617fd\") " pod="openshift-monitoring/prometheus-operator-db54df47d-bg4h8" Nov 29 07:46:14 crc kubenswrapper[4795]: I1129 07:46:14.227712 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ca10b605-0efc-4b28-afdd-f2a5b1b617fd-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-bg4h8\" (UID: \"ca10b605-0efc-4b28-afdd-f2a5b1b617fd\") " pod="openshift-monitoring/prometheus-operator-db54df47d-bg4h8" Nov 29 07:46:14 crc kubenswrapper[4795]: I1129 07:46:14.227738 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ca10b605-0efc-4b28-afdd-f2a5b1b617fd-metrics-client-ca\") 
pod \"prometheus-operator-db54df47d-bg4h8\" (UID: \"ca10b605-0efc-4b28-afdd-f2a5b1b617fd\") " pod="openshift-monitoring/prometheus-operator-db54df47d-bg4h8" Nov 29 07:46:14 crc kubenswrapper[4795]: I1129 07:46:14.227760 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ca10b605-0efc-4b28-afdd-f2a5b1b617fd-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-bg4h8\" (UID: \"ca10b605-0efc-4b28-afdd-f2a5b1b617fd\") " pod="openshift-monitoring/prometheus-operator-db54df47d-bg4h8" Nov 29 07:46:14 crc kubenswrapper[4795]: I1129 07:46:14.230071 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Nov 29 07:46:14 crc kubenswrapper[4795]: I1129 07:46:14.230143 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Nov 29 07:46:14 crc kubenswrapper[4795]: I1129 07:46:14.232002 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Nov 29 07:46:14 crc kubenswrapper[4795]: I1129 07:46:14.239468 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ca10b605-0efc-4b28-afdd-f2a5b1b617fd-metrics-client-ca\") pod \"prometheus-operator-db54df47d-bg4h8\" (UID: \"ca10b605-0efc-4b28-afdd-f2a5b1b617fd\") " pod="openshift-monitoring/prometheus-operator-db54df47d-bg4h8" Nov 29 07:46:14 crc kubenswrapper[4795]: I1129 07:46:14.244174 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ca10b605-0efc-4b28-afdd-f2a5b1b617fd-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-bg4h8\" (UID: \"ca10b605-0efc-4b28-afdd-f2a5b1b617fd\") " pod="openshift-monitoring/prometheus-operator-db54df47d-bg4h8" Nov 29 07:46:14 crc 
kubenswrapper[4795]: I1129 07:46:14.246151 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5ld8\" (UniqueName: \"kubernetes.io/projected/ca10b605-0efc-4b28-afdd-f2a5b1b617fd-kube-api-access-b5ld8\") pod \"prometheus-operator-db54df47d-bg4h8\" (UID: \"ca10b605-0efc-4b28-afdd-f2a5b1b617fd\") " pod="openshift-monitoring/prometheus-operator-db54df47d-bg4h8" Nov 29 07:46:14 crc kubenswrapper[4795]: I1129 07:46:14.252790 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ca10b605-0efc-4b28-afdd-f2a5b1b617fd-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-bg4h8\" (UID: \"ca10b605-0efc-4b28-afdd-f2a5b1b617fd\") " pod="openshift-monitoring/prometheus-operator-db54df47d-bg4h8" Nov 29 07:46:14 crc kubenswrapper[4795]: I1129 07:46:14.389242 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-twzlg" Nov 29 07:46:14 crc kubenswrapper[4795]: I1129 07:46:14.396799 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-bg4h8" Nov 29 07:46:14 crc kubenswrapper[4795]: I1129 07:46:14.804887 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-bg4h8"] Nov 29 07:46:15 crc kubenswrapper[4795]: I1129 07:46:15.171740 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-bg4h8" event={"ID":"ca10b605-0efc-4b28-afdd-f2a5b1b617fd","Type":"ContainerStarted","Data":"80e6d59b0fdd6b6332fc4c32641e9ae488ede3da8a06c2ba21415744c5d8baa6"} Nov 29 07:46:19 crc kubenswrapper[4795]: I1129 07:46:19.194110 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-bg4h8" event={"ID":"ca10b605-0efc-4b28-afdd-f2a5b1b617fd","Type":"ContainerStarted","Data":"1b88381ecb50c8166a8bef4c708a5b36f3665273cc576147251e8ada8e9226b7"} Nov 29 07:46:19 crc kubenswrapper[4795]: I1129 07:46:19.194689 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-bg4h8" event={"ID":"ca10b605-0efc-4b28-afdd-f2a5b1b617fd","Type":"ContainerStarted","Data":"ae2c9c66e5e75bf625f620439e9caf0ffde59f52b883889c23477289eb65a1e8"} Nov 29 07:46:19 crc kubenswrapper[4795]: I1129 07:46:19.231249 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-bg4h8" podStartSLOduration=1.7151449840000002 podStartE2EDuration="5.231205388s" podCreationTimestamp="2025-11-29 07:46:14 +0000 UTC" firstStartedPulling="2025-11-29 07:46:14.805979376 +0000 UTC m=+420.781555166" lastFinishedPulling="2025-11-29 07:46:18.32203978 +0000 UTC m=+424.297615570" observedRunningTime="2025-11-29 07:46:19.225025721 +0000 UTC m=+425.200601521" watchObservedRunningTime="2025-11-29 07:46:19.231205388 +0000 UTC m=+425.206781218" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.355378 4795 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-gwbvq"] Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.357514 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gwbvq" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.359664 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.359753 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-r4lwx"] Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.359982 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.361254 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-r4lwx" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.361873 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.362100 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-j55mp" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.364245 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-wc4mg" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.364806 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.365005 4795 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.377997 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-r4lwx"] Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.378052 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-gwbvq"] Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.380806 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-gpgm5"] Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.381860 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-gpgm5" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.384860 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.387161 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.393550 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-t6wdf" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.526442 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/da700315-23ec-4eba-8df0-32845329d616-sys\") pod \"node-exporter-gpgm5\" (UID: \"da700315-23ec-4eba-8df0-32845329d616\") " pod="openshift-monitoring/node-exporter-gpgm5" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.526486 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsmc9\" (UniqueName: 
\"kubernetes.io/projected/6fd992f0-9162-4936-bec5-764dab2c9090-kube-api-access-hsmc9\") pod \"kube-state-metrics-777cb5bd5d-gwbvq\" (UID: \"6fd992f0-9162-4936-bec5-764dab2c9090\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gwbvq" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.526518 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/60007f1a-bd5c-46d4-901c-0c33d468cbd9-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-r4lwx\" (UID: \"60007f1a-bd5c-46d4-901c-0c33d468cbd9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-r4lwx" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.526565 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/60007f1a-bd5c-46d4-901c-0c33d468cbd9-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-r4lwx\" (UID: \"60007f1a-bd5c-46d4-901c-0c33d468cbd9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-r4lwx" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.526621 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/60007f1a-bd5c-46d4-901c-0c33d468cbd9-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-r4lwx\" (UID: \"60007f1a-bd5c-46d4-901c-0c33d468cbd9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-r4lwx" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.526646 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p7gp\" (UniqueName: \"kubernetes.io/projected/da700315-23ec-4eba-8df0-32845329d616-kube-api-access-8p7gp\") pod \"node-exporter-gpgm5\" (UID: 
\"da700315-23ec-4eba-8df0-32845329d616\") " pod="openshift-monitoring/node-exporter-gpgm5" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.526694 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/da700315-23ec-4eba-8df0-32845329d616-metrics-client-ca\") pod \"node-exporter-gpgm5\" (UID: \"da700315-23ec-4eba-8df0-32845329d616\") " pod="openshift-monitoring/node-exporter-gpgm5" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.526711 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/6fd992f0-9162-4936-bec5-764dab2c9090-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-gwbvq\" (UID: \"6fd992f0-9162-4936-bec5-764dab2c9090\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gwbvq" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.526732 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/da700315-23ec-4eba-8df0-32845329d616-node-exporter-wtmp\") pod \"node-exporter-gpgm5\" (UID: \"da700315-23ec-4eba-8df0-32845329d616\") " pod="openshift-monitoring/node-exporter-gpgm5" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.526750 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6fd992f0-9162-4936-bec5-764dab2c9090-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-gwbvq\" (UID: \"6fd992f0-9162-4936-bec5-764dab2c9090\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gwbvq" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.526776 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: 
\"kubernetes.io/host-path/da700315-23ec-4eba-8df0-32845329d616-root\") pod \"node-exporter-gpgm5\" (UID: \"da700315-23ec-4eba-8df0-32845329d616\") " pod="openshift-monitoring/node-exporter-gpgm5" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.526797 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/da700315-23ec-4eba-8df0-32845329d616-node-exporter-textfile\") pod \"node-exporter-gpgm5\" (UID: \"da700315-23ec-4eba-8df0-32845329d616\") " pod="openshift-monitoring/node-exporter-gpgm5" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.526865 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6fd992f0-9162-4936-bec5-764dab2c9090-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-gwbvq\" (UID: \"6fd992f0-9162-4936-bec5-764dab2c9090\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gwbvq" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.526971 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6fd992f0-9162-4936-bec5-764dab2c9090-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-gwbvq\" (UID: \"6fd992f0-9162-4936-bec5-764dab2c9090\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gwbvq" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.526997 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/6fd992f0-9162-4936-bec5-764dab2c9090-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-gwbvq\" (UID: \"6fd992f0-9162-4936-bec5-764dab2c9090\") " 
pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gwbvq" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.527080 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9tp4\" (UniqueName: \"kubernetes.io/projected/60007f1a-bd5c-46d4-901c-0c33d468cbd9-kube-api-access-j9tp4\") pod \"openshift-state-metrics-566fddb674-r4lwx\" (UID: \"60007f1a-bd5c-46d4-901c-0c33d468cbd9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-r4lwx" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.527124 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/da700315-23ec-4eba-8df0-32845329d616-node-exporter-tls\") pod \"node-exporter-gpgm5\" (UID: \"da700315-23ec-4eba-8df0-32845329d616\") " pod="openshift-monitoring/node-exporter-gpgm5" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.527153 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/da700315-23ec-4eba-8df0-32845329d616-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-gpgm5\" (UID: \"da700315-23ec-4eba-8df0-32845329d616\") " pod="openshift-monitoring/node-exporter-gpgm5" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.628088 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9tp4\" (UniqueName: \"kubernetes.io/projected/60007f1a-bd5c-46d4-901c-0c33d468cbd9-kube-api-access-j9tp4\") pod \"openshift-state-metrics-566fddb674-r4lwx\" (UID: \"60007f1a-bd5c-46d4-901c-0c33d468cbd9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-r4lwx" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.628146 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: 
\"kubernetes.io/secret/da700315-23ec-4eba-8df0-32845329d616-node-exporter-tls\") pod \"node-exporter-gpgm5\" (UID: \"da700315-23ec-4eba-8df0-32845329d616\") " pod="openshift-monitoring/node-exporter-gpgm5" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.628167 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/da700315-23ec-4eba-8df0-32845329d616-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-gpgm5\" (UID: \"da700315-23ec-4eba-8df0-32845329d616\") " pod="openshift-monitoring/node-exporter-gpgm5" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.628189 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/da700315-23ec-4eba-8df0-32845329d616-sys\") pod \"node-exporter-gpgm5\" (UID: \"da700315-23ec-4eba-8df0-32845329d616\") " pod="openshift-monitoring/node-exporter-gpgm5" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.628210 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsmc9\" (UniqueName: \"kubernetes.io/projected/6fd992f0-9162-4936-bec5-764dab2c9090-kube-api-access-hsmc9\") pod \"kube-state-metrics-777cb5bd5d-gwbvq\" (UID: \"6fd992f0-9162-4936-bec5-764dab2c9090\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gwbvq" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.628265 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/60007f1a-bd5c-46d4-901c-0c33d468cbd9-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-r4lwx\" (UID: \"60007f1a-bd5c-46d4-901c-0c33d468cbd9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-r4lwx" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.628297 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/60007f1a-bd5c-46d4-901c-0c33d468cbd9-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-r4lwx\" (UID: \"60007f1a-bd5c-46d4-901c-0c33d468cbd9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-r4lwx" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.628353 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/60007f1a-bd5c-46d4-901c-0c33d468cbd9-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-r4lwx\" (UID: \"60007f1a-bd5c-46d4-901c-0c33d468cbd9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-r4lwx" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.628385 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p7gp\" (UniqueName: \"kubernetes.io/projected/da700315-23ec-4eba-8df0-32845329d616-kube-api-access-8p7gp\") pod \"node-exporter-gpgm5\" (UID: \"da700315-23ec-4eba-8df0-32845329d616\") " pod="openshift-monitoring/node-exporter-gpgm5" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.628416 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/da700315-23ec-4eba-8df0-32845329d616-metrics-client-ca\") pod \"node-exporter-gpgm5\" (UID: \"da700315-23ec-4eba-8df0-32845329d616\") " pod="openshift-monitoring/node-exporter-gpgm5" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.628440 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/6fd992f0-9162-4936-bec5-764dab2c9090-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-gwbvq\" (UID: \"6fd992f0-9162-4936-bec5-764dab2c9090\") " 
pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gwbvq" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.628466 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/da700315-23ec-4eba-8df0-32845329d616-node-exporter-wtmp\") pod \"node-exporter-gpgm5\" (UID: \"da700315-23ec-4eba-8df0-32845329d616\") " pod="openshift-monitoring/node-exporter-gpgm5" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.628498 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6fd992f0-9162-4936-bec5-764dab2c9090-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-gwbvq\" (UID: \"6fd992f0-9162-4936-bec5-764dab2c9090\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gwbvq" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.628536 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/da700315-23ec-4eba-8df0-32845329d616-root\") pod \"node-exporter-gpgm5\" (UID: \"da700315-23ec-4eba-8df0-32845329d616\") " pod="openshift-monitoring/node-exporter-gpgm5" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.628559 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/da700315-23ec-4eba-8df0-32845329d616-node-exporter-textfile\") pod \"node-exporter-gpgm5\" (UID: \"da700315-23ec-4eba-8df0-32845329d616\") " pod="openshift-monitoring/node-exporter-gpgm5" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.628604 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6fd992f0-9162-4936-bec5-764dab2c9090-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-gwbvq\" (UID: 
\"6fd992f0-9162-4936-bec5-764dab2c9090\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gwbvq" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.628628 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6fd992f0-9162-4936-bec5-764dab2c9090-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-gwbvq\" (UID: \"6fd992f0-9162-4936-bec5-764dab2c9090\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gwbvq" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.628649 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/6fd992f0-9162-4936-bec5-764dab2c9090-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-gwbvq\" (UID: \"6fd992f0-9162-4936-bec5-764dab2c9090\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gwbvq" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.628727 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/da700315-23ec-4eba-8df0-32845329d616-node-exporter-wtmp\") pod \"node-exporter-gpgm5\" (UID: \"da700315-23ec-4eba-8df0-32845329d616\") " pod="openshift-monitoring/node-exporter-gpgm5" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.647202 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/6fd992f0-9162-4936-bec5-764dab2c9090-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-gwbvq\" (UID: \"6fd992f0-9162-4936-bec5-764dab2c9090\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gwbvq" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.647732 4795 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/60007f1a-bd5c-46d4-901c-0c33d468cbd9-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-r4lwx\" (UID: \"60007f1a-bd5c-46d4-901c-0c33d468cbd9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-r4lwx" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.652151 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/da700315-23ec-4eba-8df0-32845329d616-node-exporter-tls\") pod \"node-exporter-gpgm5\" (UID: \"da700315-23ec-4eba-8df0-32845329d616\") " pod="openshift-monitoring/node-exporter-gpgm5" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.652782 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/60007f1a-bd5c-46d4-901c-0c33d468cbd9-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-r4lwx\" (UID: \"60007f1a-bd5c-46d4-901c-0c33d468cbd9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-r4lwx" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.652712 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsmc9\" (UniqueName: \"kubernetes.io/projected/6fd992f0-9162-4936-bec5-764dab2c9090-kube-api-access-hsmc9\") pod \"kube-state-metrics-777cb5bd5d-gwbvq\" (UID: \"6fd992f0-9162-4936-bec5-764dab2c9090\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gwbvq" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.652967 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/da700315-23ec-4eba-8df0-32845329d616-node-exporter-textfile\") pod \"node-exporter-gpgm5\" (UID: \"da700315-23ec-4eba-8df0-32845329d616\") " pod="openshift-monitoring/node-exporter-gpgm5" Nov 29 07:46:21 crc 
kubenswrapper[4795]: I1129 07:46:21.653179 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/da700315-23ec-4eba-8df0-32845329d616-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-gpgm5\" (UID: \"da700315-23ec-4eba-8df0-32845329d616\") " pod="openshift-monitoring/node-exporter-gpgm5" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.657526 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/6fd992f0-9162-4936-bec5-764dab2c9090-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-gwbvq\" (UID: \"6fd992f0-9162-4936-bec5-764dab2c9090\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gwbvq" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.657744 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9tp4\" (UniqueName: \"kubernetes.io/projected/60007f1a-bd5c-46d4-901c-0c33d468cbd9-kube-api-access-j9tp4\") pod \"openshift-state-metrics-566fddb674-r4lwx\" (UID: \"60007f1a-bd5c-46d4-901c-0c33d468cbd9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-r4lwx" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.657753 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/da700315-23ec-4eba-8df0-32845329d616-metrics-client-ca\") pod \"node-exporter-gpgm5\" (UID: \"da700315-23ec-4eba-8df0-32845329d616\") " pod="openshift-monitoring/node-exporter-gpgm5" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.657901 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/da700315-23ec-4eba-8df0-32845329d616-root\") pod \"node-exporter-gpgm5\" (UID: \"da700315-23ec-4eba-8df0-32845329d616\") " pod="openshift-monitoring/node-exporter-gpgm5" Nov 29 07:46:21 
crc kubenswrapper[4795]: I1129 07:46:21.657949 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/da700315-23ec-4eba-8df0-32845329d616-sys\") pod \"node-exporter-gpgm5\" (UID: \"da700315-23ec-4eba-8df0-32845329d616\") " pod="openshift-monitoring/node-exporter-gpgm5" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.658153 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/60007f1a-bd5c-46d4-901c-0c33d468cbd9-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-r4lwx\" (UID: \"60007f1a-bd5c-46d4-901c-0c33d468cbd9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-r4lwx" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.667158 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6fd992f0-9162-4936-bec5-764dab2c9090-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-gwbvq\" (UID: \"6fd992f0-9162-4936-bec5-764dab2c9090\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gwbvq" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.669926 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6fd992f0-9162-4936-bec5-764dab2c9090-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-gwbvq\" (UID: \"6fd992f0-9162-4936-bec5-764dab2c9090\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gwbvq" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.669987 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p7gp\" (UniqueName: \"kubernetes.io/projected/da700315-23ec-4eba-8df0-32845329d616-kube-api-access-8p7gp\") pod \"node-exporter-gpgm5\" (UID: \"da700315-23ec-4eba-8df0-32845329d616\") " 
pod="openshift-monitoring/node-exporter-gpgm5" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.673509 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6fd992f0-9162-4936-bec5-764dab2c9090-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-gwbvq\" (UID: \"6fd992f0-9162-4936-bec5-764dab2c9090\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gwbvq" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.686367 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gwbvq" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.697183 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-r4lwx" Nov 29 07:46:21 crc kubenswrapper[4795]: I1129 07:46:21.710147 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-gpgm5" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.212785 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-gpgm5" event={"ID":"da700315-23ec-4eba-8df0-32845329d616","Type":"ContainerStarted","Data":"b28b90e7522679407caef1c187791f2f3705c7964fe79d9565fc59fe5568441b"} Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.470265 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.472468 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.475617 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.475933 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-4bllg" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.476269 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.476394 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.476432 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.476728 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.476868 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.492181 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.492347 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.500312 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.545928 4795 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd07424d-8d26-40f5-a714-f951196f0109-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"cd07424d-8d26-40f5-a714-f951196f0109\") " pod="openshift-monitoring/alertmanager-main-0" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.545969 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cd07424d-8d26-40f5-a714-f951196f0109-config-out\") pod \"alertmanager-main-0\" (UID: \"cd07424d-8d26-40f5-a714-f951196f0109\") " pod="openshift-monitoring/alertmanager-main-0" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.545995 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/cd07424d-8d26-40f5-a714-f951196f0109-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"cd07424d-8d26-40f5-a714-f951196f0109\") " pod="openshift-monitoring/alertmanager-main-0" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.546017 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cd07424d-8d26-40f5-a714-f951196f0109-tls-assets\") pod \"alertmanager-main-0\" (UID: \"cd07424d-8d26-40f5-a714-f951196f0109\") " pod="openshift-monitoring/alertmanager-main-0" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.546042 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/cd07424d-8d26-40f5-a714-f951196f0109-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"cd07424d-8d26-40f5-a714-f951196f0109\") " 
pod="openshift-monitoring/alertmanager-main-0" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.546066 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5fq4\" (UniqueName: \"kubernetes.io/projected/cd07424d-8d26-40f5-a714-f951196f0109-kube-api-access-d5fq4\") pod \"alertmanager-main-0\" (UID: \"cd07424d-8d26-40f5-a714-f951196f0109\") " pod="openshift-monitoring/alertmanager-main-0" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.546090 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/cd07424d-8d26-40f5-a714-f951196f0109-config-volume\") pod \"alertmanager-main-0\" (UID: \"cd07424d-8d26-40f5-a714-f951196f0109\") " pod="openshift-monitoring/alertmanager-main-0" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.546114 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cd07424d-8d26-40f5-a714-f951196f0109-web-config\") pod \"alertmanager-main-0\" (UID: \"cd07424d-8d26-40f5-a714-f951196f0109\") " pod="openshift-monitoring/alertmanager-main-0" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.546131 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/cd07424d-8d26-40f5-a714-f951196f0109-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"cd07424d-8d26-40f5-a714-f951196f0109\") " pod="openshift-monitoring/alertmanager-main-0" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.546149 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: 
\"kubernetes.io/secret/cd07424d-8d26-40f5-a714-f951196f0109-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"cd07424d-8d26-40f5-a714-f951196f0109\") " pod="openshift-monitoring/alertmanager-main-0" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.546164 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cd07424d-8d26-40f5-a714-f951196f0109-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"cd07424d-8d26-40f5-a714-f951196f0109\") " pod="openshift-monitoring/alertmanager-main-0" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.546203 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/cd07424d-8d26-40f5-a714-f951196f0109-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"cd07424d-8d26-40f5-a714-f951196f0109\") " pod="openshift-monitoring/alertmanager-main-0" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.643748 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-r4lwx"] Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.647348 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd07424d-8d26-40f5-a714-f951196f0109-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"cd07424d-8d26-40f5-a714-f951196f0109\") " pod="openshift-monitoring/alertmanager-main-0" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.647423 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cd07424d-8d26-40f5-a714-f951196f0109-config-out\") pod \"alertmanager-main-0\" (UID: \"cd07424d-8d26-40f5-a714-f951196f0109\") " 
pod="openshift-monitoring/alertmanager-main-0" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.647478 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/cd07424d-8d26-40f5-a714-f951196f0109-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"cd07424d-8d26-40f5-a714-f951196f0109\") " pod="openshift-monitoring/alertmanager-main-0" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.647524 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cd07424d-8d26-40f5-a714-f951196f0109-tls-assets\") pod \"alertmanager-main-0\" (UID: \"cd07424d-8d26-40f5-a714-f951196f0109\") " pod="openshift-monitoring/alertmanager-main-0" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.647555 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/cd07424d-8d26-40f5-a714-f951196f0109-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"cd07424d-8d26-40f5-a714-f951196f0109\") " pod="openshift-monitoring/alertmanager-main-0" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.647605 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5fq4\" (UniqueName: \"kubernetes.io/projected/cd07424d-8d26-40f5-a714-f951196f0109-kube-api-access-d5fq4\") pod \"alertmanager-main-0\" (UID: \"cd07424d-8d26-40f5-a714-f951196f0109\") " pod="openshift-monitoring/alertmanager-main-0" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.647646 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/cd07424d-8d26-40f5-a714-f951196f0109-config-volume\") pod \"alertmanager-main-0\" (UID: \"cd07424d-8d26-40f5-a714-f951196f0109\") " 
pod="openshift-monitoring/alertmanager-main-0" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.647680 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cd07424d-8d26-40f5-a714-f951196f0109-web-config\") pod \"alertmanager-main-0\" (UID: \"cd07424d-8d26-40f5-a714-f951196f0109\") " pod="openshift-monitoring/alertmanager-main-0" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.647709 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/cd07424d-8d26-40f5-a714-f951196f0109-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"cd07424d-8d26-40f5-a714-f951196f0109\") " pod="openshift-monitoring/alertmanager-main-0" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.647738 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/cd07424d-8d26-40f5-a714-f951196f0109-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"cd07424d-8d26-40f5-a714-f951196f0109\") " pod="openshift-monitoring/alertmanager-main-0" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.647764 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cd07424d-8d26-40f5-a714-f951196f0109-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"cd07424d-8d26-40f5-a714-f951196f0109\") " pod="openshift-monitoring/alertmanager-main-0" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.647800 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/cd07424d-8d26-40f5-a714-f951196f0109-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: 
\"cd07424d-8d26-40f5-a714-f951196f0109\") " pod="openshift-monitoring/alertmanager-main-0" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.648457 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/cd07424d-8d26-40f5-a714-f951196f0109-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"cd07424d-8d26-40f5-a714-f951196f0109\") " pod="openshift-monitoring/alertmanager-main-0" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.650070 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cd07424d-8d26-40f5-a714-f951196f0109-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"cd07424d-8d26-40f5-a714-f951196f0109\") " pod="openshift-monitoring/alertmanager-main-0" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.650424 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd07424d-8d26-40f5-a714-f951196f0109-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"cd07424d-8d26-40f5-a714-f951196f0109\") " pod="openshift-monitoring/alertmanager-main-0" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.653798 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cd07424d-8d26-40f5-a714-f951196f0109-config-out\") pod \"alertmanager-main-0\" (UID: \"cd07424d-8d26-40f5-a714-f951196f0109\") " pod="openshift-monitoring/alertmanager-main-0" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.655426 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cd07424d-8d26-40f5-a714-f951196f0109-web-config\") pod \"alertmanager-main-0\" (UID: \"cd07424d-8d26-40f5-a714-f951196f0109\") " pod="openshift-monitoring/alertmanager-main-0" Nov 29 
07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.657778 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cd07424d-8d26-40f5-a714-f951196f0109-tls-assets\") pod \"alertmanager-main-0\" (UID: \"cd07424d-8d26-40f5-a714-f951196f0109\") " pod="openshift-monitoring/alertmanager-main-0" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.658455 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/cd07424d-8d26-40f5-a714-f951196f0109-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"cd07424d-8d26-40f5-a714-f951196f0109\") " pod="openshift-monitoring/alertmanager-main-0" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.662442 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/cd07424d-8d26-40f5-a714-f951196f0109-config-volume\") pod \"alertmanager-main-0\" (UID: \"cd07424d-8d26-40f5-a714-f951196f0109\") " pod="openshift-monitoring/alertmanager-main-0" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.665409 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/cd07424d-8d26-40f5-a714-f951196f0109-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"cd07424d-8d26-40f5-a714-f951196f0109\") " pod="openshift-monitoring/alertmanager-main-0" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.665953 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/cd07424d-8d26-40f5-a714-f951196f0109-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"cd07424d-8d26-40f5-a714-f951196f0109\") " pod="openshift-monitoring/alertmanager-main-0" Nov 29 07:46:22 crc 
kubenswrapper[4795]: I1129 07:46:22.668640 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/cd07424d-8d26-40f5-a714-f951196f0109-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"cd07424d-8d26-40f5-a714-f951196f0109\") " pod="openshift-monitoring/alertmanager-main-0" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.673754 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-gwbvq"] Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.688691 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5fq4\" (UniqueName: \"kubernetes.io/projected/cd07424d-8d26-40f5-a714-f951196f0109-kube-api-access-d5fq4\") pod \"alertmanager-main-0\" (UID: \"cd07424d-8d26-40f5-a714-f951196f0109\") " pod="openshift-monitoring/alertmanager-main-0" Nov 29 07:46:22 crc kubenswrapper[4795]: I1129 07:46:22.804846 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Nov 29 07:46:23 crc kubenswrapper[4795]: I1129 07:46:23.185518 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Nov 29 07:46:23 crc kubenswrapper[4795]: I1129 07:46:23.218615 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-r4lwx" event={"ID":"60007f1a-bd5c-46d4-901c-0c33d468cbd9","Type":"ContainerStarted","Data":"96b5bcc316b53fc142aca7718c9aa0d03239842ea62cbb7a6fbd01cb4235a704"} Nov 29 07:46:23 crc kubenswrapper[4795]: I1129 07:46:23.218665 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-r4lwx" event={"ID":"60007f1a-bd5c-46d4-901c-0c33d468cbd9","Type":"ContainerStarted","Data":"2f3f4d1033ec0e9e78abb3f68cb57f9ab35b69f378a2a0764727654de88119f0"} Nov 29 07:46:23 crc kubenswrapper[4795]: I1129 07:46:23.219684 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gwbvq" event={"ID":"6fd992f0-9162-4936-bec5-764dab2c9090","Type":"ContainerStarted","Data":"c95e62ea4d12f107fda5610a1e8c4c866b536c74e3494ad2df9299d3c44d381c"} Nov 29 07:46:23 crc kubenswrapper[4795]: I1129 07:46:23.330884 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-6b77f7dd4f-cv6b5"] Nov 29 07:46:23 crc kubenswrapper[4795]: I1129 07:46:23.332528 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-6b77f7dd4f-cv6b5" Nov 29 07:46:23 crc kubenswrapper[4795]: I1129 07:46:23.335527 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Nov 29 07:46:23 crc kubenswrapper[4795]: I1129 07:46:23.335557 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Nov 29 07:46:23 crc kubenswrapper[4795]: I1129 07:46:23.335613 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Nov 29 07:46:23 crc kubenswrapper[4795]: I1129 07:46:23.335635 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Nov 29 07:46:23 crc kubenswrapper[4795]: I1129 07:46:23.335645 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-a497rhbgk6407" Nov 29 07:46:23 crc kubenswrapper[4795]: I1129 07:46:23.336121 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-s66b5" Nov 29 07:46:23 crc kubenswrapper[4795]: I1129 07:46:23.343967 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-6b77f7dd4f-cv6b5"] Nov 29 07:46:23 crc kubenswrapper[4795]: I1129 07:46:23.344239 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Nov 29 07:46:23 crc kubenswrapper[4795]: I1129 07:46:23.357340 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/537c19bc-51da-4b65-9baa-176ac3225a2d-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-6b77f7dd4f-cv6b5\" (UID: \"537c19bc-51da-4b65-9baa-176ac3225a2d\") " 
pod="openshift-monitoring/thanos-querier-6b77f7dd4f-cv6b5" Nov 29 07:46:23 crc kubenswrapper[4795]: I1129 07:46:23.357421 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/537c19bc-51da-4b65-9baa-176ac3225a2d-secret-thanos-querier-tls\") pod \"thanos-querier-6b77f7dd4f-cv6b5\" (UID: \"537c19bc-51da-4b65-9baa-176ac3225a2d\") " pod="openshift-monitoring/thanos-querier-6b77f7dd4f-cv6b5" Nov 29 07:46:23 crc kubenswrapper[4795]: I1129 07:46:23.357468 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/537c19bc-51da-4b65-9baa-176ac3225a2d-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6b77f7dd4f-cv6b5\" (UID: \"537c19bc-51da-4b65-9baa-176ac3225a2d\") " pod="openshift-monitoring/thanos-querier-6b77f7dd4f-cv6b5" Nov 29 07:46:23 crc kubenswrapper[4795]: I1129 07:46:23.357507 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6kc9\" (UniqueName: \"kubernetes.io/projected/537c19bc-51da-4b65-9baa-176ac3225a2d-kube-api-access-h6kc9\") pod \"thanos-querier-6b77f7dd4f-cv6b5\" (UID: \"537c19bc-51da-4b65-9baa-176ac3225a2d\") " pod="openshift-monitoring/thanos-querier-6b77f7dd4f-cv6b5" Nov 29 07:46:23 crc kubenswrapper[4795]: I1129 07:46:23.357533 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/537c19bc-51da-4b65-9baa-176ac3225a2d-metrics-client-ca\") pod \"thanos-querier-6b77f7dd4f-cv6b5\" (UID: \"537c19bc-51da-4b65-9baa-176ac3225a2d\") " pod="openshift-monitoring/thanos-querier-6b77f7dd4f-cv6b5" Nov 29 07:46:23 crc kubenswrapper[4795]: I1129 07:46:23.357556 4795 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/537c19bc-51da-4b65-9baa-176ac3225a2d-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-6b77f7dd4f-cv6b5\" (UID: \"537c19bc-51da-4b65-9baa-176ac3225a2d\") " pod="openshift-monitoring/thanos-querier-6b77f7dd4f-cv6b5" Nov 29 07:46:23 crc kubenswrapper[4795]: I1129 07:46:23.357766 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/537c19bc-51da-4b65-9baa-176ac3225a2d-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6b77f7dd4f-cv6b5\" (UID: \"537c19bc-51da-4b65-9baa-176ac3225a2d\") " pod="openshift-monitoring/thanos-querier-6b77f7dd4f-cv6b5" Nov 29 07:46:23 crc kubenswrapper[4795]: I1129 07:46:23.357794 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/537c19bc-51da-4b65-9baa-176ac3225a2d-secret-grpc-tls\") pod \"thanos-querier-6b77f7dd4f-cv6b5\" (UID: \"537c19bc-51da-4b65-9baa-176ac3225a2d\") " pod="openshift-monitoring/thanos-querier-6b77f7dd4f-cv6b5" Nov 29 07:46:23 crc kubenswrapper[4795]: I1129 07:46:23.459404 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/537c19bc-51da-4b65-9baa-176ac3225a2d-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-6b77f7dd4f-cv6b5\" (UID: \"537c19bc-51da-4b65-9baa-176ac3225a2d\") " pod="openshift-monitoring/thanos-querier-6b77f7dd4f-cv6b5" Nov 29 07:46:23 crc kubenswrapper[4795]: I1129 07:46:23.459458 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: 
\"kubernetes.io/secret/537c19bc-51da-4b65-9baa-176ac3225a2d-secret-thanos-querier-tls\") pod \"thanos-querier-6b77f7dd4f-cv6b5\" (UID: \"537c19bc-51da-4b65-9baa-176ac3225a2d\") " pod="openshift-monitoring/thanos-querier-6b77f7dd4f-cv6b5"
Nov 29 07:46:23 crc kubenswrapper[4795]: I1129 07:46:23.459491 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/537c19bc-51da-4b65-9baa-176ac3225a2d-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6b77f7dd4f-cv6b5\" (UID: \"537c19bc-51da-4b65-9baa-176ac3225a2d\") " pod="openshift-monitoring/thanos-querier-6b77f7dd4f-cv6b5"
Nov 29 07:46:23 crc kubenswrapper[4795]: I1129 07:46:23.459524 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6kc9\" (UniqueName: \"kubernetes.io/projected/537c19bc-51da-4b65-9baa-176ac3225a2d-kube-api-access-h6kc9\") pod \"thanos-querier-6b77f7dd4f-cv6b5\" (UID: \"537c19bc-51da-4b65-9baa-176ac3225a2d\") " pod="openshift-monitoring/thanos-querier-6b77f7dd4f-cv6b5"
Nov 29 07:46:23 crc kubenswrapper[4795]: I1129 07:46:23.459550 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/537c19bc-51da-4b65-9baa-176ac3225a2d-metrics-client-ca\") pod \"thanos-querier-6b77f7dd4f-cv6b5\" (UID: \"537c19bc-51da-4b65-9baa-176ac3225a2d\") " pod="openshift-monitoring/thanos-querier-6b77f7dd4f-cv6b5"
Nov 29 07:46:23 crc kubenswrapper[4795]: I1129 07:46:23.459567 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/537c19bc-51da-4b65-9baa-176ac3225a2d-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-6b77f7dd4f-cv6b5\" (UID: \"537c19bc-51da-4b65-9baa-176ac3225a2d\") " pod="openshift-monitoring/thanos-querier-6b77f7dd4f-cv6b5"
Nov 29 07:46:23 crc kubenswrapper[4795]: I1129 07:46:23.459613 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/537c19bc-51da-4b65-9baa-176ac3225a2d-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6b77f7dd4f-cv6b5\" (UID: \"537c19bc-51da-4b65-9baa-176ac3225a2d\") " pod="openshift-monitoring/thanos-querier-6b77f7dd4f-cv6b5"
Nov 29 07:46:23 crc kubenswrapper[4795]: I1129 07:46:23.459630 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/537c19bc-51da-4b65-9baa-176ac3225a2d-secret-grpc-tls\") pod \"thanos-querier-6b77f7dd4f-cv6b5\" (UID: \"537c19bc-51da-4b65-9baa-176ac3225a2d\") " pod="openshift-monitoring/thanos-querier-6b77f7dd4f-cv6b5"
Nov 29 07:46:23 crc kubenswrapper[4795]: I1129 07:46:23.460692 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/537c19bc-51da-4b65-9baa-176ac3225a2d-metrics-client-ca\") pod \"thanos-querier-6b77f7dd4f-cv6b5\" (UID: \"537c19bc-51da-4b65-9baa-176ac3225a2d\") " pod="openshift-monitoring/thanos-querier-6b77f7dd4f-cv6b5"
Nov 29 07:46:23 crc kubenswrapper[4795]: I1129 07:46:23.463612 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/537c19bc-51da-4b65-9baa-176ac3225a2d-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-6b77f7dd4f-cv6b5\" (UID: \"537c19bc-51da-4b65-9baa-176ac3225a2d\") " pod="openshift-monitoring/thanos-querier-6b77f7dd4f-cv6b5"
Nov 29 07:46:23 crc kubenswrapper[4795]: I1129 07:46:23.463970 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/537c19bc-51da-4b65-9baa-176ac3225a2d-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6b77f7dd4f-cv6b5\" (UID: \"537c19bc-51da-4b65-9baa-176ac3225a2d\") " pod="openshift-monitoring/thanos-querier-6b77f7dd4f-cv6b5"
Nov 29 07:46:23 crc kubenswrapper[4795]: I1129 07:46:23.463972 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/537c19bc-51da-4b65-9baa-176ac3225a2d-secret-thanos-querier-tls\") pod \"thanos-querier-6b77f7dd4f-cv6b5\" (UID: \"537c19bc-51da-4b65-9baa-176ac3225a2d\") " pod="openshift-monitoring/thanos-querier-6b77f7dd4f-cv6b5"
Nov 29 07:46:23 crc kubenswrapper[4795]: I1129 07:46:23.464069 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/537c19bc-51da-4b65-9baa-176ac3225a2d-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6b77f7dd4f-cv6b5\" (UID: \"537c19bc-51da-4b65-9baa-176ac3225a2d\") " pod="openshift-monitoring/thanos-querier-6b77f7dd4f-cv6b5"
Nov 29 07:46:23 crc kubenswrapper[4795]: I1129 07:46:23.464636 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/537c19bc-51da-4b65-9baa-176ac3225a2d-secret-grpc-tls\") pod \"thanos-querier-6b77f7dd4f-cv6b5\" (UID: \"537c19bc-51da-4b65-9baa-176ac3225a2d\") " pod="openshift-monitoring/thanos-querier-6b77f7dd4f-cv6b5"
Nov 29 07:46:23 crc kubenswrapper[4795]: I1129 07:46:23.465506 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/537c19bc-51da-4b65-9baa-176ac3225a2d-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-6b77f7dd4f-cv6b5\" (UID: \"537c19bc-51da-4b65-9baa-176ac3225a2d\") " pod="openshift-monitoring/thanos-querier-6b77f7dd4f-cv6b5"
Nov 29 07:46:23 crc kubenswrapper[4795]: I1129 07:46:23.476653 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6kc9\" (UniqueName: \"kubernetes.io/projected/537c19bc-51da-4b65-9baa-176ac3225a2d-kube-api-access-h6kc9\") pod \"thanos-querier-6b77f7dd4f-cv6b5\" (UID: \"537c19bc-51da-4b65-9baa-176ac3225a2d\") " pod="openshift-monitoring/thanos-querier-6b77f7dd4f-cv6b5"
Nov 29 07:46:23 crc kubenswrapper[4795]: I1129 07:46:23.655348 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-6b77f7dd4f-cv6b5"
Nov 29 07:46:24 crc kubenswrapper[4795]: W1129 07:46:24.202504 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd07424d_8d26_40f5_a714_f951196f0109.slice/crio-95f3939a9349c1b624b12ee31a088d2ef8010323e5046b9c9061b8b112a4334f WatchSource:0}: Error finding container 95f3939a9349c1b624b12ee31a088d2ef8010323e5046b9c9061b8b112a4334f: Status 404 returned error can't find the container with id 95f3939a9349c1b624b12ee31a088d2ef8010323e5046b9c9061b8b112a4334f
Nov 29 07:46:24 crc kubenswrapper[4795]: I1129 07:46:24.228130 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"cd07424d-8d26-40f5-a714-f951196f0109","Type":"ContainerStarted","Data":"95f3939a9349c1b624b12ee31a088d2ef8010323e5046b9c9061b8b112a4334f"}
Nov 29 07:46:24 crc kubenswrapper[4795]: I1129 07:46:24.655389 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-6b77f7dd4f-cv6b5"]
Nov 29 07:46:24 crc kubenswrapper[4795]: W1129 07:46:24.660473 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod537c19bc_51da_4b65_9baa_176ac3225a2d.slice/crio-9fb2f3587821e56c9df10772d45fd36c879f2b41a75d0c70b50089df8d222a08 WatchSource:0}: Error finding container 9fb2f3587821e56c9df10772d45fd36c879f2b41a75d0c70b50089df8d222a08: Status 404 returned error can't find the container with id 9fb2f3587821e56c9df10772d45fd36c879f2b41a75d0c70b50089df8d222a08
Nov 29 07:46:25 crc kubenswrapper[4795]: I1129 07:46:25.002553 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-nz57z"
Nov 29 07:46:25 crc kubenswrapper[4795]: I1129 07:46:25.063339 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lm9wt"]
Nov 29 07:46:25 crc kubenswrapper[4795]: I1129 07:46:25.234303 4795 generic.go:334] "Generic (PLEG): container finished" podID="da700315-23ec-4eba-8df0-32845329d616" containerID="ef0c06755f9e9329b08f1d35230c4076397d6023a7569e85fe4e9c3319d4c55b" exitCode=0
Nov 29 07:46:25 crc kubenswrapper[4795]: I1129 07:46:25.234473 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-gpgm5" event={"ID":"da700315-23ec-4eba-8df0-32845329d616","Type":"ContainerDied","Data":"ef0c06755f9e9329b08f1d35230c4076397d6023a7569e85fe4e9c3319d4c55b"}
Nov 29 07:46:25 crc kubenswrapper[4795]: I1129 07:46:25.237634 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-r4lwx" event={"ID":"60007f1a-bd5c-46d4-901c-0c33d468cbd9","Type":"ContainerStarted","Data":"f44fdfa94874fea73ce293df2880902e8bcc3a4fe75cadde62dd415893a652db"}
Nov 29 07:46:25 crc kubenswrapper[4795]: I1129 07:46:25.242020 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6b77f7dd4f-cv6b5" event={"ID":"537c19bc-51da-4b65-9baa-176ac3225a2d","Type":"ContainerStarted","Data":"9fb2f3587821e56c9df10772d45fd36c879f2b41a75d0c70b50089df8d222a08"}
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.249157 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-gpgm5" event={"ID":"da700315-23ec-4eba-8df0-32845329d616","Type":"ContainerStarted","Data":"f84b4588c6c3afa7c4db75bf6bdfdde82a3349148b34546d804ca25c592a4658"}
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.328767 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-559df6b64-rl42v"]
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.329725 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-559df6b64-rl42v"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.334345 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-559df6b64-rl42v"]
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.433340 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d07c7092-6e69-4ca2-9c12-02991c2cda50-console-config\") pod \"console-559df6b64-rl42v\" (UID: \"d07c7092-6e69-4ca2-9c12-02991c2cda50\") " pod="openshift-console/console-559df6b64-rl42v"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.433451 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlnxw\" (UniqueName: \"kubernetes.io/projected/d07c7092-6e69-4ca2-9c12-02991c2cda50-kube-api-access-qlnxw\") pod \"console-559df6b64-rl42v\" (UID: \"d07c7092-6e69-4ca2-9c12-02991c2cda50\") " pod="openshift-console/console-559df6b64-rl42v"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.433485 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d07c7092-6e69-4ca2-9c12-02991c2cda50-console-oauth-config\") pod \"console-559df6b64-rl42v\" (UID: \"d07c7092-6e69-4ca2-9c12-02991c2cda50\") " pod="openshift-console/console-559df6b64-rl42v"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.433539 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d07c7092-6e69-4ca2-9c12-02991c2cda50-trusted-ca-bundle\") pod \"console-559df6b64-rl42v\" (UID: \"d07c7092-6e69-4ca2-9c12-02991c2cda50\") " pod="openshift-console/console-559df6b64-rl42v"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.433572 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d07c7092-6e69-4ca2-9c12-02991c2cda50-console-serving-cert\") pod \"console-559df6b64-rl42v\" (UID: \"d07c7092-6e69-4ca2-9c12-02991c2cda50\") " pod="openshift-console/console-559df6b64-rl42v"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.433621 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d07c7092-6e69-4ca2-9c12-02991c2cda50-oauth-serving-cert\") pod \"console-559df6b64-rl42v\" (UID: \"d07c7092-6e69-4ca2-9c12-02991c2cda50\") " pod="openshift-console/console-559df6b64-rl42v"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.433652 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d07c7092-6e69-4ca2-9c12-02991c2cda50-service-ca\") pod \"console-559df6b64-rl42v\" (UID: \"d07c7092-6e69-4ca2-9c12-02991c2cda50\") " pod="openshift-console/console-559df6b64-rl42v"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.534532 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d07c7092-6e69-4ca2-9c12-02991c2cda50-console-serving-cert\") pod \"console-559df6b64-rl42v\" (UID: \"d07c7092-6e69-4ca2-9c12-02991c2cda50\") " pod="openshift-console/console-559df6b64-rl42v"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.534602 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d07c7092-6e69-4ca2-9c12-02991c2cda50-oauth-serving-cert\") pod \"console-559df6b64-rl42v\" (UID: \"d07c7092-6e69-4ca2-9c12-02991c2cda50\") " pod="openshift-console/console-559df6b64-rl42v"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.534625 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d07c7092-6e69-4ca2-9c12-02991c2cda50-service-ca\") pod \"console-559df6b64-rl42v\" (UID: \"d07c7092-6e69-4ca2-9c12-02991c2cda50\") " pod="openshift-console/console-559df6b64-rl42v"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.534652 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d07c7092-6e69-4ca2-9c12-02991c2cda50-console-config\") pod \"console-559df6b64-rl42v\" (UID: \"d07c7092-6e69-4ca2-9c12-02991c2cda50\") " pod="openshift-console/console-559df6b64-rl42v"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.534737 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlnxw\" (UniqueName: \"kubernetes.io/projected/d07c7092-6e69-4ca2-9c12-02991c2cda50-kube-api-access-qlnxw\") pod \"console-559df6b64-rl42v\" (UID: \"d07c7092-6e69-4ca2-9c12-02991c2cda50\") " pod="openshift-console/console-559df6b64-rl42v"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.534784 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d07c7092-6e69-4ca2-9c12-02991c2cda50-console-oauth-config\") pod \"console-559df6b64-rl42v\" (UID: \"d07c7092-6e69-4ca2-9c12-02991c2cda50\") " pod="openshift-console/console-559df6b64-rl42v"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.534827 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d07c7092-6e69-4ca2-9c12-02991c2cda50-trusted-ca-bundle\") pod \"console-559df6b64-rl42v\" (UID: \"d07c7092-6e69-4ca2-9c12-02991c2cda50\") " pod="openshift-console/console-559df6b64-rl42v"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.535527 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d07c7092-6e69-4ca2-9c12-02991c2cda50-console-config\") pod \"console-559df6b64-rl42v\" (UID: \"d07c7092-6e69-4ca2-9c12-02991c2cda50\") " pod="openshift-console/console-559df6b64-rl42v"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.535613 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d07c7092-6e69-4ca2-9c12-02991c2cda50-service-ca\") pod \"console-559df6b64-rl42v\" (UID: \"d07c7092-6e69-4ca2-9c12-02991c2cda50\") " pod="openshift-console/console-559df6b64-rl42v"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.535916 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d07c7092-6e69-4ca2-9c12-02991c2cda50-oauth-serving-cert\") pod \"console-559df6b64-rl42v\" (UID: \"d07c7092-6e69-4ca2-9c12-02991c2cda50\") " pod="openshift-console/console-559df6b64-rl42v"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.536152 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d07c7092-6e69-4ca2-9c12-02991c2cda50-trusted-ca-bundle\") pod \"console-559df6b64-rl42v\" (UID: \"d07c7092-6e69-4ca2-9c12-02991c2cda50\") " pod="openshift-console/console-559df6b64-rl42v"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.539010 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d07c7092-6e69-4ca2-9c12-02991c2cda50-console-serving-cert\") pod \"console-559df6b64-rl42v\" (UID: \"d07c7092-6e69-4ca2-9c12-02991c2cda50\") " pod="openshift-console/console-559df6b64-rl42v"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.541121 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d07c7092-6e69-4ca2-9c12-02991c2cda50-console-oauth-config\") pod \"console-559df6b64-rl42v\" (UID: \"d07c7092-6e69-4ca2-9c12-02991c2cda50\") " pod="openshift-console/console-559df6b64-rl42v"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.559963 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlnxw\" (UniqueName: \"kubernetes.io/projected/d07c7092-6e69-4ca2-9c12-02991c2cda50-kube-api-access-qlnxw\") pod \"console-559df6b64-rl42v\" (UID: \"d07c7092-6e69-4ca2-9c12-02991c2cda50\") " pod="openshift-console/console-559df6b64-rl42v"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.653739 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-559df6b64-rl42v"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.763410 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-6df7d947f5-r6rgs"]
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.764454 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-6df7d947f5-r6rgs"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.766225 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.766323 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.766413 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-bl9bjkmtj9tep"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.766558 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.766583 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.766781 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-8fqsc"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.768633 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-6df7d947f5-r6rgs"]
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.839473 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42jxm\" (UniqueName: \"kubernetes.io/projected/5e9d05f6-bafa-49e3-8223-6a28c46bbca9-kube-api-access-42jxm\") pod \"metrics-server-6df7d947f5-r6rgs\" (UID: \"5e9d05f6-bafa-49e3-8223-6a28c46bbca9\") " pod="openshift-monitoring/metrics-server-6df7d947f5-r6rgs"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.839645 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e9d05f6-bafa-49e3-8223-6a28c46bbca9-client-ca-bundle\") pod \"metrics-server-6df7d947f5-r6rgs\" (UID: \"5e9d05f6-bafa-49e3-8223-6a28c46bbca9\") " pod="openshift-monitoring/metrics-server-6df7d947f5-r6rgs"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.839703 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5e9d05f6-bafa-49e3-8223-6a28c46bbca9-secret-metrics-server-tls\") pod \"metrics-server-6df7d947f5-r6rgs\" (UID: \"5e9d05f6-bafa-49e3-8223-6a28c46bbca9\") " pod="openshift-monitoring/metrics-server-6df7d947f5-r6rgs"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.839808 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5e9d05f6-bafa-49e3-8223-6a28c46bbca9-secret-metrics-client-certs\") pod \"metrics-server-6df7d947f5-r6rgs\" (UID: \"5e9d05f6-bafa-49e3-8223-6a28c46bbca9\") " pod="openshift-monitoring/metrics-server-6df7d947f5-r6rgs"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.839842 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/5e9d05f6-bafa-49e3-8223-6a28c46bbca9-metrics-server-audit-profiles\") pod \"metrics-server-6df7d947f5-r6rgs\" (UID: \"5e9d05f6-bafa-49e3-8223-6a28c46bbca9\") " pod="openshift-monitoring/metrics-server-6df7d947f5-r6rgs"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.840012 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e9d05f6-bafa-49e3-8223-6a28c46bbca9-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-6df7d947f5-r6rgs\" (UID: \"5e9d05f6-bafa-49e3-8223-6a28c46bbca9\") " pod="openshift-monitoring/metrics-server-6df7d947f5-r6rgs"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.840084 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5e9d05f6-bafa-49e3-8223-6a28c46bbca9-audit-log\") pod \"metrics-server-6df7d947f5-r6rgs\" (UID: \"5e9d05f6-bafa-49e3-8223-6a28c46bbca9\") " pod="openshift-monitoring/metrics-server-6df7d947f5-r6rgs"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.941393 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5e9d05f6-bafa-49e3-8223-6a28c46bbca9-secret-metrics-client-certs\") pod \"metrics-server-6df7d947f5-r6rgs\" (UID: \"5e9d05f6-bafa-49e3-8223-6a28c46bbca9\") " pod="openshift-monitoring/metrics-server-6df7d947f5-r6rgs"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.942217 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/5e9d05f6-bafa-49e3-8223-6a28c46bbca9-metrics-server-audit-profiles\") pod \"metrics-server-6df7d947f5-r6rgs\" (UID: \"5e9d05f6-bafa-49e3-8223-6a28c46bbca9\") " pod="openshift-monitoring/metrics-server-6df7d947f5-r6rgs"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.942287 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e9d05f6-bafa-49e3-8223-6a28c46bbca9-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-6df7d947f5-r6rgs\" (UID: \"5e9d05f6-bafa-49e3-8223-6a28c46bbca9\") " pod="openshift-monitoring/metrics-server-6df7d947f5-r6rgs"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.942317 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5e9d05f6-bafa-49e3-8223-6a28c46bbca9-audit-log\") pod \"metrics-server-6df7d947f5-r6rgs\" (UID: \"5e9d05f6-bafa-49e3-8223-6a28c46bbca9\") " pod="openshift-monitoring/metrics-server-6df7d947f5-r6rgs"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.942342 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42jxm\" (UniqueName: \"kubernetes.io/projected/5e9d05f6-bafa-49e3-8223-6a28c46bbca9-kube-api-access-42jxm\") pod \"metrics-server-6df7d947f5-r6rgs\" (UID: \"5e9d05f6-bafa-49e3-8223-6a28c46bbca9\") " pod="openshift-monitoring/metrics-server-6df7d947f5-r6rgs"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.942372 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e9d05f6-bafa-49e3-8223-6a28c46bbca9-client-ca-bundle\") pod \"metrics-server-6df7d947f5-r6rgs\" (UID: \"5e9d05f6-bafa-49e3-8223-6a28c46bbca9\") " pod="openshift-monitoring/metrics-server-6df7d947f5-r6rgs"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.942393 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5e9d05f6-bafa-49e3-8223-6a28c46bbca9-secret-metrics-server-tls\") pod \"metrics-server-6df7d947f5-r6rgs\" (UID: \"5e9d05f6-bafa-49e3-8223-6a28c46bbca9\") " pod="openshift-monitoring/metrics-server-6df7d947f5-r6rgs"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.943046 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5e9d05f6-bafa-49e3-8223-6a28c46bbca9-audit-log\") pod \"metrics-server-6df7d947f5-r6rgs\" (UID: \"5e9d05f6-bafa-49e3-8223-6a28c46bbca9\") " pod="openshift-monitoring/metrics-server-6df7d947f5-r6rgs"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.943496 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e9d05f6-bafa-49e3-8223-6a28c46bbca9-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-6df7d947f5-r6rgs\" (UID: \"5e9d05f6-bafa-49e3-8223-6a28c46bbca9\") " pod="openshift-monitoring/metrics-server-6df7d947f5-r6rgs"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.943912 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/5e9d05f6-bafa-49e3-8223-6a28c46bbca9-metrics-server-audit-profiles\") pod \"metrics-server-6df7d947f5-r6rgs\" (UID: \"5e9d05f6-bafa-49e3-8223-6a28c46bbca9\") " pod="openshift-monitoring/metrics-server-6df7d947f5-r6rgs"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.946337 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5e9d05f6-bafa-49e3-8223-6a28c46bbca9-secret-metrics-server-tls\") pod \"metrics-server-6df7d947f5-r6rgs\" (UID: \"5e9d05f6-bafa-49e3-8223-6a28c46bbca9\") " pod="openshift-monitoring/metrics-server-6df7d947f5-r6rgs"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.946975 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e9d05f6-bafa-49e3-8223-6a28c46bbca9-client-ca-bundle\") pod \"metrics-server-6df7d947f5-r6rgs\" (UID: \"5e9d05f6-bafa-49e3-8223-6a28c46bbca9\") " pod="openshift-monitoring/metrics-server-6df7d947f5-r6rgs"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.947497 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5e9d05f6-bafa-49e3-8223-6a28c46bbca9-secret-metrics-client-certs\") pod \"metrics-server-6df7d947f5-r6rgs\" (UID: \"5e9d05f6-bafa-49e3-8223-6a28c46bbca9\") " pod="openshift-monitoring/metrics-server-6df7d947f5-r6rgs"
Nov 29 07:46:26 crc kubenswrapper[4795]: I1129 07:46:26.960652 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42jxm\" (UniqueName: \"kubernetes.io/projected/5e9d05f6-bafa-49e3-8223-6a28c46bbca9-kube-api-access-42jxm\") pod \"metrics-server-6df7d947f5-r6rgs\" (UID: \"5e9d05f6-bafa-49e3-8223-6a28c46bbca9\") " pod="openshift-monitoring/metrics-server-6df7d947f5-r6rgs"
Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.080223 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-6df7d947f5-r6rgs"
Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.144089 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-7bcdf45565-22plx"]
Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.145186 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-7bcdf45565-22plx"
Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.149516 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert"
Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.149804 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp"
Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.158201 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-7bcdf45565-22plx"]
Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.246454 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/187b4736-08c5-4d32-94b1-7129f8247476-monitoring-plugin-cert\") pod \"monitoring-plugin-7bcdf45565-22plx\" (UID: \"187b4736-08c5-4d32-94b1-7129f8247476\") " pod="openshift-monitoring/monitoring-plugin-7bcdf45565-22plx"
Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.258724 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-r4lwx" event={"ID":"60007f1a-bd5c-46d4-901c-0c33d468cbd9","Type":"ContainerStarted","Data":"2b96995c1839d74fea6c089ee6b1b5e4cb1b7e554210e71831658be735dd3788"}
Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.269725 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gwbvq" event={"ID":"6fd992f0-9162-4936-bec5-764dab2c9090","Type":"ContainerStarted","Data":"d344b7f5d5ff63eefaca969d528625b8cfd1933464939472a353cd97519106bf"}
Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.279626 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-r4lwx" podStartSLOduration=4.819349369 podStartE2EDuration="6.279605306s" podCreationTimestamp="2025-11-29 07:46:21 +0000 UTC" firstStartedPulling="2025-11-29 07:46:24.425998897 +0000 UTC m=+430.401574687" lastFinishedPulling="2025-11-29 07:46:25.886254834 +0000 UTC m=+431.861830624" observedRunningTime="2025-11-29 07:46:27.279005579 +0000 UTC m=+433.254581369" watchObservedRunningTime="2025-11-29 07:46:27.279605306 +0000 UTC m=+433.255181086"
Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.280632 4795 generic.go:334] "Generic (PLEG): container finished" podID="cd07424d-8d26-40f5-a714-f951196f0109" containerID="94df8156bfaad5502099d426c723e3f00b7136551f004b22736826a94b1ae40d" exitCode=0
Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.280677 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"cd07424d-8d26-40f5-a714-f951196f0109","Type":"ContainerDied","Data":"94df8156bfaad5502099d426c723e3f00b7136551f004b22736826a94b1ae40d"}
Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.348772 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/187b4736-08c5-4d32-94b1-7129f8247476-monitoring-plugin-cert\") pod \"monitoring-plugin-7bcdf45565-22plx\" (UID: \"187b4736-08c5-4d32-94b1-7129f8247476\") " pod="openshift-monitoring/monitoring-plugin-7bcdf45565-22plx"
Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.357385 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/187b4736-08c5-4d32-94b1-7129f8247476-monitoring-plugin-cert\") pod \"monitoring-plugin-7bcdf45565-22plx\" (UID: \"187b4736-08c5-4d32-94b1-7129f8247476\") " pod="openshift-monitoring/monitoring-plugin-7bcdf45565-22plx"
Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.532415 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-7bcdf45565-22plx"
Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.576285 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-559df6b64-rl42v"]
Nov 29 07:46:27 crc kubenswrapper[4795]: W1129 07:46:27.593898 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd07c7092_6e69_4ca2_9c12_02991c2cda50.slice/crio-4c1d0c1795e41d921e5f5fb65b43d295b4fdeacf4382ffbcb846f2efdc0f11ae WatchSource:0}: Error finding container 4c1d0c1795e41d921e5f5fb65b43d295b4fdeacf4382ffbcb846f2efdc0f11ae: Status 404 returned error can't find the container with id 4c1d0c1795e41d921e5f5fb65b43d295b4fdeacf4382ffbcb846f2efdc0f11ae
Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.656096 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-6df7d947f5-r6rgs"]
Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.742966 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.761941 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.767104 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.770874 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s"
Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.771196 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls"
Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.771649 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web"
Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.771891 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config"
Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.772119 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-wtqzm"
Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.772282 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0"
Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.772432 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file"
Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.773273 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy"
Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.773421 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle"
Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.773429 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-8fdob5bjpopgo"
Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.773718 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls"
Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.784174 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle"
Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.794828 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0"
Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.868933 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/586406a6-094e-4d66-bfe8-7c1f91755e26-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.868992 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/586406a6-094e-4d66-bfe8-7c1f91755e26-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.869031 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/586406a6-094e-4d66-bfe8-7c1f91755e26-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 29
07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.869076 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/586406a6-094e-4d66-bfe8-7c1f91755e26-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.869114 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/586406a6-094e-4d66-bfe8-7c1f91755e26-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.869158 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/586406a6-094e-4d66-bfe8-7c1f91755e26-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.869281 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/586406a6-094e-4d66-bfe8-7c1f91755e26-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.869362 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/586406a6-094e-4d66-bfe8-7c1f91755e26-secret-prometheus-k8s-thanos-sidecar-tls\") pod 
\"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.869410 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/586406a6-094e-4d66-bfe8-7c1f91755e26-config-out\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.869431 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/586406a6-094e-4d66-bfe8-7c1f91755e26-config\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.869488 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/586406a6-094e-4d66-bfe8-7c1f91755e26-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.869548 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/586406a6-094e-4d66-bfe8-7c1f91755e26-web-config\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.869572 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/586406a6-094e-4d66-bfe8-7c1f91755e26-secret-kube-rbac-proxy\") 
pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.869648 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/586406a6-094e-4d66-bfe8-7c1f91755e26-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.869704 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnzcr\" (UniqueName: \"kubernetes.io/projected/586406a6-094e-4d66-bfe8-7c1f91755e26-kube-api-access-bnzcr\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.869729 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/586406a6-094e-4d66-bfe8-7c1f91755e26-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.869750 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/586406a6-094e-4d66-bfe8-7c1f91755e26-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.869767 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: 
\"kubernetes.io/empty-dir/586406a6-094e-4d66-bfe8-7c1f91755e26-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.970651 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/586406a6-094e-4d66-bfe8-7c1f91755e26-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.970719 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/586406a6-094e-4d66-bfe8-7c1f91755e26-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.970746 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/586406a6-094e-4d66-bfe8-7c1f91755e26-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.970783 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/586406a6-094e-4d66-bfe8-7c1f91755e26-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.970810 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/586406a6-094e-4d66-bfe8-7c1f91755e26-config-out\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.970833 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/586406a6-094e-4d66-bfe8-7c1f91755e26-config\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.970865 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/586406a6-094e-4d66-bfe8-7c1f91755e26-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.970900 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/586406a6-094e-4d66-bfe8-7c1f91755e26-web-config\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.970919 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/586406a6-094e-4d66-bfe8-7c1f91755e26-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.970943 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/586406a6-094e-4d66-bfe8-7c1f91755e26-tls-assets\") pod 
\"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.970973 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnzcr\" (UniqueName: \"kubernetes.io/projected/586406a6-094e-4d66-bfe8-7c1f91755e26-kube-api-access-bnzcr\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.970998 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/586406a6-094e-4d66-bfe8-7c1f91755e26-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.971018 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/586406a6-094e-4d66-bfe8-7c1f91755e26-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.971040 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/586406a6-094e-4d66-bfe8-7c1f91755e26-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.971068 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/586406a6-094e-4d66-bfe8-7c1f91755e26-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: 
\"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.971099 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/586406a6-094e-4d66-bfe8-7c1f91755e26-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.971127 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/586406a6-094e-4d66-bfe8-7c1f91755e26-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.971158 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/586406a6-094e-4d66-bfe8-7c1f91755e26-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.972312 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/586406a6-094e-4d66-bfe8-7c1f91755e26-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.972957 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/586406a6-094e-4d66-bfe8-7c1f91755e26-configmap-metrics-client-ca\") pod 
\"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.973524 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/586406a6-094e-4d66-bfe8-7c1f91755e26-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.973871 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/586406a6-094e-4d66-bfe8-7c1f91755e26-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.974560 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/586406a6-094e-4d66-bfe8-7c1f91755e26-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.976097 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/586406a6-094e-4d66-bfe8-7c1f91755e26-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.981794 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/586406a6-094e-4d66-bfe8-7c1f91755e26-web-config\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " 
pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.981897 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/586406a6-094e-4d66-bfe8-7c1f91755e26-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.981897 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/586406a6-094e-4d66-bfe8-7c1f91755e26-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.981948 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/586406a6-094e-4d66-bfe8-7c1f91755e26-config-out\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.982347 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/586406a6-094e-4d66-bfe8-7c1f91755e26-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.982411 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/586406a6-094e-4d66-bfe8-7c1f91755e26-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 
crc kubenswrapper[4795]: I1129 07:46:27.983122 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/586406a6-094e-4d66-bfe8-7c1f91755e26-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.983446 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/586406a6-094e-4d66-bfe8-7c1f91755e26-config\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.986728 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/586406a6-094e-4d66-bfe8-7c1f91755e26-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.991040 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/586406a6-094e-4d66-bfe8-7c1f91755e26-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.991058 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnzcr\" (UniqueName: \"kubernetes.io/projected/586406a6-094e-4d66-bfe8-7c1f91755e26-kube-api-access-bnzcr\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:27 crc kubenswrapper[4795]: I1129 07:46:27.998522 4795 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/586406a6-094e-4d66-bfe8-7c1f91755e26-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"586406a6-094e-4d66-bfe8-7c1f91755e26\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:28 crc kubenswrapper[4795]: I1129 07:46:28.011607 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-7bcdf45565-22plx"] Nov 29 07:46:28 crc kubenswrapper[4795]: W1129 07:46:28.017153 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod187b4736_08c5_4d32_94b1_7129f8247476.slice/crio-eda2ed1057066de494b2c92b14af8cfcaa09cdadc5ce694ee6f9ca5668d9d731 WatchSource:0}: Error finding container eda2ed1057066de494b2c92b14af8cfcaa09cdadc5ce694ee6f9ca5668d9d731: Status 404 returned error can't find the container with id eda2ed1057066de494b2c92b14af8cfcaa09cdadc5ce694ee6f9ca5668d9d731 Nov 29 07:46:28 crc kubenswrapper[4795]: I1129 07:46:28.095759 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:28 crc kubenswrapper[4795]: I1129 07:46:28.288217 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-7bcdf45565-22plx" event={"ID":"187b4736-08c5-4d32-94b1-7129f8247476","Type":"ContainerStarted","Data":"eda2ed1057066de494b2c92b14af8cfcaa09cdadc5ce694ee6f9ca5668d9d731"} Nov 29 07:46:28 crc kubenswrapper[4795]: I1129 07:46:28.289859 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gwbvq" event={"ID":"6fd992f0-9162-4936-bec5-764dab2c9090","Type":"ContainerStarted","Data":"651fc9b0613c31df84fa785d32107a9c5e0fff14b72788826defb0f7b2e46b4e"} Nov 29 07:46:28 crc kubenswrapper[4795]: I1129 07:46:28.289902 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gwbvq" event={"ID":"6fd992f0-9162-4936-bec5-764dab2c9090","Type":"ContainerStarted","Data":"c7e2e44c1cf0279bce93e3d30059c37b0172d614dbb52a817c708072d9038d75"} Nov 29 07:46:28 crc kubenswrapper[4795]: I1129 07:46:28.291797 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6b77f7dd4f-cv6b5" event={"ID":"537c19bc-51da-4b65-9baa-176ac3225a2d","Type":"ContainerStarted","Data":"ad782853867e5c0b1cef0fa175d0f94c81b61991cc6fe7d49f3a5d992dd3738a"} Nov 29 07:46:28 crc kubenswrapper[4795]: I1129 07:46:28.292625 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-6df7d947f5-r6rgs" event={"ID":"5e9d05f6-bafa-49e3-8223-6a28c46bbca9","Type":"ContainerStarted","Data":"3de5b0626968eea73ea161cea63ed5f327b558cebc10b5af0772f201a7b1e113"} Nov 29 07:46:28 crc kubenswrapper[4795]: I1129 07:46:28.294107 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-gpgm5" 
event={"ID":"da700315-23ec-4eba-8df0-32845329d616","Type":"ContainerStarted","Data":"0537d56e3294bc303eb62d8c391b055f5d1c37e00482548cb55be43fea26e459"} Nov 29 07:46:28 crc kubenswrapper[4795]: I1129 07:46:28.295192 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-559df6b64-rl42v" event={"ID":"d07c7092-6e69-4ca2-9c12-02991c2cda50","Type":"ContainerStarted","Data":"cd47aafb3c4ba01b183e8adf20e80ae24359c987ac92c4fc1caa7648ce256055"} Nov 29 07:46:28 crc kubenswrapper[4795]: I1129 07:46:28.295219 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-559df6b64-rl42v" event={"ID":"d07c7092-6e69-4ca2-9c12-02991c2cda50","Type":"ContainerStarted","Data":"4c1d0c1795e41d921e5f5fb65b43d295b4fdeacf4382ffbcb846f2efdc0f11ae"} Nov 29 07:46:28 crc kubenswrapper[4795]: I1129 07:46:28.315429 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gwbvq" podStartSLOduration=4.470148905 podStartE2EDuration="7.315409987s" podCreationTimestamp="2025-11-29 07:46:21 +0000 UTC" firstStartedPulling="2025-11-29 07:46:22.727634041 +0000 UTC m=+428.703209831" lastFinishedPulling="2025-11-29 07:46:25.572895123 +0000 UTC m=+431.548470913" observedRunningTime="2025-11-29 07:46:28.307870262 +0000 UTC m=+434.283446052" watchObservedRunningTime="2025-11-29 07:46:28.315409987 +0000 UTC m=+434.290985777" Nov 29 07:46:28 crc kubenswrapper[4795]: I1129 07:46:28.329519 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-gpgm5" podStartSLOduration=4.813056671 podStartE2EDuration="7.329468049s" podCreationTimestamp="2025-11-29 07:46:21 +0000 UTC" firstStartedPulling="2025-11-29 07:46:21.747144094 +0000 UTC m=+427.722719884" lastFinishedPulling="2025-11-29 07:46:24.263555472 +0000 UTC m=+430.239131262" observedRunningTime="2025-11-29 07:46:28.324184418 +0000 UTC m=+434.299760208" 
watchObservedRunningTime="2025-11-29 07:46:28.329468049 +0000 UTC m=+434.305043849" Nov 29 07:46:28 crc kubenswrapper[4795]: I1129 07:46:28.344806 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-559df6b64-rl42v" podStartSLOduration=2.344782537 podStartE2EDuration="2.344782537s" podCreationTimestamp="2025-11-29 07:46:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:46:28.341451722 +0000 UTC m=+434.317027512" watchObservedRunningTime="2025-11-29 07:46:28.344782537 +0000 UTC m=+434.320358327" Nov 29 07:46:28 crc kubenswrapper[4795]: I1129 07:46:28.501250 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Nov 29 07:46:28 crc kubenswrapper[4795]: W1129 07:46:28.502970 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod586406a6_094e_4d66_bfe8_7c1f91755e26.slice/crio-bdd5510ec7fc997e78055c67cbb924770511db215a3df10dc2c2ea58bd18b179 WatchSource:0}: Error finding container bdd5510ec7fc997e78055c67cbb924770511db215a3df10dc2c2ea58bd18b179: Status 404 returned error can't find the container with id bdd5510ec7fc997e78055c67cbb924770511db215a3df10dc2c2ea58bd18b179 Nov 29 07:46:29 crc kubenswrapper[4795]: I1129 07:46:29.302561 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6b77f7dd4f-cv6b5" event={"ID":"537c19bc-51da-4b65-9baa-176ac3225a2d","Type":"ContainerStarted","Data":"73637eb9cde7e8306a88f9889a8e931c768f978497b1f688c8124779dc419379"} Nov 29 07:46:29 crc kubenswrapper[4795]: I1129 07:46:29.303821 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"586406a6-094e-4d66-bfe8-7c1f91755e26","Type":"ContainerStarted","Data":"bdd5510ec7fc997e78055c67cbb924770511db215a3df10dc2c2ea58bd18b179"} 
Nov 29 07:46:30 crc kubenswrapper[4795]: I1129 07:46:30.319013 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6b77f7dd4f-cv6b5" event={"ID":"537c19bc-51da-4b65-9baa-176ac3225a2d","Type":"ContainerStarted","Data":"1d03986f995304a3d2713c02a1f22541e50cd64c6dc0751e57e45562e18854c0"} Nov 29 07:46:30 crc kubenswrapper[4795]: I1129 07:46:30.323735 4795 generic.go:334] "Generic (PLEG): container finished" podID="586406a6-094e-4d66-bfe8-7c1f91755e26" containerID="51d8444799d558578746e36319c406077f2d2db16fc3cda45a4e942d22440d5e" exitCode=0 Nov 29 07:46:30 crc kubenswrapper[4795]: I1129 07:46:30.323816 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"586406a6-094e-4d66-bfe8-7c1f91755e26","Type":"ContainerDied","Data":"51d8444799d558578746e36319c406077f2d2db16fc3cda45a4e942d22440d5e"} Nov 29 07:46:35 crc kubenswrapper[4795]: I1129 07:46:35.366187 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"cd07424d-8d26-40f5-a714-f951196f0109","Type":"ContainerStarted","Data":"81fa53515b7975269bff7bf5f67b5d0937f4c0429eacebaf79e2be58f6040848"} Nov 29 07:46:35 crc kubenswrapper[4795]: I1129 07:46:35.366807 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"cd07424d-8d26-40f5-a714-f951196f0109","Type":"ContainerStarted","Data":"0b7149b1c9af304f36dce5f54a26df9e1b3c662956cb17660b379295b721973b"} Nov 29 07:46:35 crc kubenswrapper[4795]: I1129 07:46:35.366834 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"cd07424d-8d26-40f5-a714-f951196f0109","Type":"ContainerStarted","Data":"dc6d0f61c9557f5dd3030dc8e87efcc55b0a984a736251395a1a1c4cd2d22cae"} Nov 29 07:46:35 crc kubenswrapper[4795]: I1129 07:46:35.366846 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/alertmanager-main-0" event={"ID":"cd07424d-8d26-40f5-a714-f951196f0109","Type":"ContainerStarted","Data":"3e27f599cdf5bafb57ad01223fbd829712732d6fd9c6f9ececb2acc3e941de7d"} Nov 29 07:46:35 crc kubenswrapper[4795]: I1129 07:46:35.366872 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"cd07424d-8d26-40f5-a714-f951196f0109","Type":"ContainerStarted","Data":"4bd627c826edeb8929754e6706c5da208245750cfca80d4bf532dfe45a451390"} Nov 29 07:46:35 crc kubenswrapper[4795]: I1129 07:46:35.369083 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-6df7d947f5-r6rgs" event={"ID":"5e9d05f6-bafa-49e3-8223-6a28c46bbca9","Type":"ContainerStarted","Data":"e389dba706c8999d71d47c74541cf0dc8f98b3276d94f043cece8e6a8d73d558"} Nov 29 07:46:35 crc kubenswrapper[4795]: I1129 07:46:35.380508 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"586406a6-094e-4d66-bfe8-7c1f91755e26","Type":"ContainerStarted","Data":"5d7dd837213648605692a2a70a49c8ec0ced1de18e9ad764163bce2e4084ba9c"} Nov 29 07:46:35 crc kubenswrapper[4795]: I1129 07:46:35.380564 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"586406a6-094e-4d66-bfe8-7c1f91755e26","Type":"ContainerStarted","Data":"5d1b3070fe1ea4d0e6279a03c4a00b7966f5f8f87b3b72418fed8f0db35c5de0"} Nov 29 07:46:35 crc kubenswrapper[4795]: I1129 07:46:35.380581 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"586406a6-094e-4d66-bfe8-7c1f91755e26","Type":"ContainerStarted","Data":"11725ba968b59b16fbd1836de061b1f96cb66ad7049cc3f6b4fa5d3c81f233a4"} Nov 29 07:46:35 crc kubenswrapper[4795]: I1129 07:46:35.380610 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"586406a6-094e-4d66-bfe8-7c1f91755e26","Type":"ContainerStarted","Data":"57331b2759d58af14e9f8527f8f033de85c60c9ee4feeb2728deb9aa3abd7c48"} Nov 29 07:46:35 crc kubenswrapper[4795]: I1129 07:46:35.380621 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"586406a6-094e-4d66-bfe8-7c1f91755e26","Type":"ContainerStarted","Data":"fcce5b0bbfcec35baa66eb226bedb135af8d0dd2fd70cd6ef55c30ab0b658adc"} Nov 29 07:46:35 crc kubenswrapper[4795]: I1129 07:46:35.382162 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-7bcdf45565-22plx" event={"ID":"187b4736-08c5-4d32-94b1-7129f8247476","Type":"ContainerStarted","Data":"b77099e7d540dd19a57909972a5a3a0bb1dc409649d4f23c746eb966cdfe92df"} Nov 29 07:46:35 crc kubenswrapper[4795]: I1129 07:46:35.382974 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-7bcdf45565-22plx" Nov 29 07:46:35 crc kubenswrapper[4795]: I1129 07:46:35.385702 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6b77f7dd4f-cv6b5" event={"ID":"537c19bc-51da-4b65-9baa-176ac3225a2d","Type":"ContainerStarted","Data":"c7a3538af5da113ef51a68f26f8e3eee1db5e1dac5fe5a24fec2c1fd42aad35e"} Nov 29 07:46:35 crc kubenswrapper[4795]: I1129 07:46:35.385734 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6b77f7dd4f-cv6b5" event={"ID":"537c19bc-51da-4b65-9baa-176ac3225a2d","Type":"ContainerStarted","Data":"8d37bc4543e227a22bcfb33e9bab55cde792869a457056ffc6913526463031eb"} Nov 29 07:46:35 crc kubenswrapper[4795]: I1129 07:46:35.385752 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6b77f7dd4f-cv6b5" event={"ID":"537c19bc-51da-4b65-9baa-176ac3225a2d","Type":"ContainerStarted","Data":"cacf17b2290f29de07067b5c65bd8d999dad951036fa1a1f0216ea33e08eb426"} Nov 29 07:46:35 crc 
kubenswrapper[4795]: I1129 07:46:35.385961 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-6b77f7dd4f-cv6b5" Nov 29 07:46:35 crc kubenswrapper[4795]: I1129 07:46:35.394585 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-6b77f7dd4f-cv6b5" Nov 29 07:46:35 crc kubenswrapper[4795]: I1129 07:46:35.395934 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-7bcdf45565-22plx" Nov 29 07:46:35 crc kubenswrapper[4795]: I1129 07:46:35.400003 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-6df7d947f5-r6rgs" podStartSLOduration=2.679277261 podStartE2EDuration="9.399987965s" podCreationTimestamp="2025-11-29 07:46:26 +0000 UTC" firstStartedPulling="2025-11-29 07:46:27.68991568 +0000 UTC m=+433.665491480" lastFinishedPulling="2025-11-29 07:46:34.410626394 +0000 UTC m=+440.386202184" observedRunningTime="2025-11-29 07:46:35.398692448 +0000 UTC m=+441.374268248" watchObservedRunningTime="2025-11-29 07:46:35.399987965 +0000 UTC m=+441.375563745" Nov 29 07:46:35 crc kubenswrapper[4795]: I1129 07:46:35.417061 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-7bcdf45565-22plx" podStartSLOduration=2.02644409 podStartE2EDuration="8.417041033s" podCreationTimestamp="2025-11-29 07:46:27 +0000 UTC" firstStartedPulling="2025-11-29 07:46:28.019875356 +0000 UTC m=+433.995451146" lastFinishedPulling="2025-11-29 07:46:34.410472299 +0000 UTC m=+440.386048089" observedRunningTime="2025-11-29 07:46:35.412576655 +0000 UTC m=+441.388152455" watchObservedRunningTime="2025-11-29 07:46:35.417041033 +0000 UTC m=+441.392616823" Nov 29 07:46:35 crc kubenswrapper[4795]: I1129 07:46:35.435577 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-monitoring/thanos-querier-6b77f7dd4f-cv6b5" podStartSLOduration=2.509006146 podStartE2EDuration="12.435554662s" podCreationTimestamp="2025-11-29 07:46:23 +0000 UTC" firstStartedPulling="2025-11-29 07:46:24.664537228 +0000 UTC m=+430.640113018" lastFinishedPulling="2025-11-29 07:46:34.591085744 +0000 UTC m=+440.566661534" observedRunningTime="2025-11-29 07:46:35.434202024 +0000 UTC m=+441.409777814" watchObservedRunningTime="2025-11-29 07:46:35.435554662 +0000 UTC m=+441.411130452" Nov 29 07:46:36 crc kubenswrapper[4795]: I1129 07:46:36.407951 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"586406a6-094e-4d66-bfe8-7c1f91755e26","Type":"ContainerStarted","Data":"4c4f4741c764beb7a191f62a267454146471d8ea0bb942109156cba7e81b5d13"} Nov 29 07:46:36 crc kubenswrapper[4795]: I1129 07:46:36.412495 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"cd07424d-8d26-40f5-a714-f951196f0109","Type":"ContainerStarted","Data":"379de1ddbd0f212d4a6235538eefb8385c7939af384bb8a9c91ee59a00f815ce"} Nov 29 07:46:36 crc kubenswrapper[4795]: I1129 07:46:36.439993 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=5.178584128 podStartE2EDuration="9.439973175s" podCreationTimestamp="2025-11-29 07:46:27 +0000 UTC" firstStartedPulling="2025-11-29 07:46:30.326764093 +0000 UTC m=+436.302339883" lastFinishedPulling="2025-11-29 07:46:34.58815314 +0000 UTC m=+440.563728930" observedRunningTime="2025-11-29 07:46:36.439315816 +0000 UTC m=+442.414891606" watchObservedRunningTime="2025-11-29 07:46:36.439973175 +0000 UTC m=+442.415548985" Nov 29 07:46:36 crc kubenswrapper[4795]: I1129 07:46:36.477633 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=4.120108601 
podStartE2EDuration="14.477608101s" podCreationTimestamp="2025-11-29 07:46:22 +0000 UTC" firstStartedPulling="2025-11-29 07:46:24.205206833 +0000 UTC m=+430.180782623" lastFinishedPulling="2025-11-29 07:46:34.562706333 +0000 UTC m=+440.538282123" observedRunningTime="2025-11-29 07:46:36.471607249 +0000 UTC m=+442.447183059" watchObservedRunningTime="2025-11-29 07:46:36.477608101 +0000 UTC m=+442.453183891" Nov 29 07:46:36 crc kubenswrapper[4795]: I1129 07:46:36.654102 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-559df6b64-rl42v" Nov 29 07:46:36 crc kubenswrapper[4795]: I1129 07:46:36.654156 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-559df6b64-rl42v" Nov 29 07:46:36 crc kubenswrapper[4795]: I1129 07:46:36.662715 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-559df6b64-rl42v" Nov 29 07:46:37 crc kubenswrapper[4795]: I1129 07:46:37.421683 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-559df6b64-rl42v" Nov 29 07:46:37 crc kubenswrapper[4795]: I1129 07:46:37.467228 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-w4g7w"] Nov 29 07:46:38 crc kubenswrapper[4795]: I1129 07:46:38.096729 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:46:41 crc kubenswrapper[4795]: I1129 07:46:41.941289 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:46:41 crc kubenswrapper[4795]: I1129 07:46:41.942222 4795 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:46:41 crc kubenswrapper[4795]: I1129 07:46:41.942288 4795 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" Nov 29 07:46:41 crc kubenswrapper[4795]: I1129 07:46:41.943289 4795 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2126419e14a4a2775ea5be60a712196b4855482260536f930abe91252856634d"} pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:46:41 crc kubenswrapper[4795]: I1129 07:46:41.943360 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" containerID="cri-o://2126419e14a4a2775ea5be60a712196b4855482260536f930abe91252856634d" gracePeriod=600 Nov 29 07:46:42 crc kubenswrapper[4795]: I1129 07:46:42.457097 4795 generic.go:334] "Generic (PLEG): container finished" podID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerID="2126419e14a4a2775ea5be60a712196b4855482260536f930abe91252856634d" exitCode=0 Nov 29 07:46:42 crc kubenswrapper[4795]: I1129 07:46:42.457186 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerDied","Data":"2126419e14a4a2775ea5be60a712196b4855482260536f930abe91252856634d"} Nov 29 07:46:42 crc kubenswrapper[4795]: I1129 07:46:42.457276 4795 scope.go:117] "RemoveContainer" 
containerID="63f3aac368767d0534a658bc0223f5cda0a951940f2d6ff3f1c89a6ac401a7d5" Nov 29 07:46:43 crc kubenswrapper[4795]: I1129 07:46:43.465392 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerStarted","Data":"dc32e1210618295b7a09888e63f3d2d9f6eb19cbef74c764b86be7c539c45ed2"} Nov 29 07:46:47 crc kubenswrapper[4795]: I1129 07:46:47.081343 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-6df7d947f5-r6rgs" Nov 29 07:46:47 crc kubenswrapper[4795]: I1129 07:46:47.082680 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-6df7d947f5-r6rgs" Nov 29 07:46:50 crc kubenswrapper[4795]: I1129 07:46:50.099495 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" podUID="04596222-6779-478e-96cd-3aa99a923aa4" containerName="registry" containerID="cri-o://6749a4f3e8655d1adec49be5b226c66b6661df08001a84fb4b426da7f21249e9" gracePeriod=30 Nov 29 07:46:50 crc kubenswrapper[4795]: I1129 07:46:50.987365 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:46:51 crc kubenswrapper[4795]: I1129 07:46:51.136539 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/04596222-6779-478e-96cd-3aa99a923aa4-registry-certificates\") pod \"04596222-6779-478e-96cd-3aa99a923aa4\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " Nov 29 07:46:51 crc kubenswrapper[4795]: I1129 07:46:51.136634 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhph7\" (UniqueName: \"kubernetes.io/projected/04596222-6779-478e-96cd-3aa99a923aa4-kube-api-access-mhph7\") pod \"04596222-6779-478e-96cd-3aa99a923aa4\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " Nov 29 07:46:51 crc kubenswrapper[4795]: I1129 07:46:51.136696 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/04596222-6779-478e-96cd-3aa99a923aa4-installation-pull-secrets\") pod \"04596222-6779-478e-96cd-3aa99a923aa4\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " Nov 29 07:46:51 crc kubenswrapper[4795]: I1129 07:46:51.136724 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/04596222-6779-478e-96cd-3aa99a923aa4-registry-tls\") pod \"04596222-6779-478e-96cd-3aa99a923aa4\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " Nov 29 07:46:51 crc kubenswrapper[4795]: I1129 07:46:51.136753 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/04596222-6779-478e-96cd-3aa99a923aa4-ca-trust-extracted\") pod \"04596222-6779-478e-96cd-3aa99a923aa4\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " Nov 29 07:46:51 crc kubenswrapper[4795]: I1129 07:46:51.136930 4795 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"04596222-6779-478e-96cd-3aa99a923aa4\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " Nov 29 07:46:51 crc kubenswrapper[4795]: I1129 07:46:51.136963 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/04596222-6779-478e-96cd-3aa99a923aa4-bound-sa-token\") pod \"04596222-6779-478e-96cd-3aa99a923aa4\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " Nov 29 07:46:51 crc kubenswrapper[4795]: I1129 07:46:51.136997 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04596222-6779-478e-96cd-3aa99a923aa4-trusted-ca\") pod \"04596222-6779-478e-96cd-3aa99a923aa4\" (UID: \"04596222-6779-478e-96cd-3aa99a923aa4\") " Nov 29 07:46:51 crc kubenswrapper[4795]: I1129 07:46:51.137712 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04596222-6779-478e-96cd-3aa99a923aa4-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "04596222-6779-478e-96cd-3aa99a923aa4" (UID: "04596222-6779-478e-96cd-3aa99a923aa4"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:46:51 crc kubenswrapper[4795]: I1129 07:46:51.137730 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04596222-6779-478e-96cd-3aa99a923aa4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "04596222-6779-478e-96cd-3aa99a923aa4" (UID: "04596222-6779-478e-96cd-3aa99a923aa4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:46:51 crc kubenswrapper[4795]: I1129 07:46:51.146052 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04596222-6779-478e-96cd-3aa99a923aa4-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "04596222-6779-478e-96cd-3aa99a923aa4" (UID: "04596222-6779-478e-96cd-3aa99a923aa4"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:46:51 crc kubenswrapper[4795]: I1129 07:46:51.148567 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04596222-6779-478e-96cd-3aa99a923aa4-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "04596222-6779-478e-96cd-3aa99a923aa4" (UID: "04596222-6779-478e-96cd-3aa99a923aa4"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:46:51 crc kubenswrapper[4795]: I1129 07:46:51.149127 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04596222-6779-478e-96cd-3aa99a923aa4-kube-api-access-mhph7" (OuterVolumeSpecName: "kube-api-access-mhph7") pod "04596222-6779-478e-96cd-3aa99a923aa4" (UID: "04596222-6779-478e-96cd-3aa99a923aa4"). InnerVolumeSpecName "kube-api-access-mhph7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:46:51 crc kubenswrapper[4795]: I1129 07:46:51.149494 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04596222-6779-478e-96cd-3aa99a923aa4-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "04596222-6779-478e-96cd-3aa99a923aa4" (UID: "04596222-6779-478e-96cd-3aa99a923aa4"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:46:51 crc kubenswrapper[4795]: I1129 07:46:51.152951 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "04596222-6779-478e-96cd-3aa99a923aa4" (UID: "04596222-6779-478e-96cd-3aa99a923aa4"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 29 07:46:51 crc kubenswrapper[4795]: I1129 07:46:51.157665 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04596222-6779-478e-96cd-3aa99a923aa4-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "04596222-6779-478e-96cd-3aa99a923aa4" (UID: "04596222-6779-478e-96cd-3aa99a923aa4"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:46:51 crc kubenswrapper[4795]: I1129 07:46:51.238754 4795 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/04596222-6779-478e-96cd-3aa99a923aa4-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 29 07:46:51 crc kubenswrapper[4795]: I1129 07:46:51.239167 4795 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/04596222-6779-478e-96cd-3aa99a923aa4-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 29 07:46:51 crc kubenswrapper[4795]: I1129 07:46:51.239177 4795 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04596222-6779-478e-96cd-3aa99a923aa4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:46:51 crc kubenswrapper[4795]: I1129 07:46:51.239186 4795 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/04596222-6779-478e-96cd-3aa99a923aa4-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 29 07:46:51 crc kubenswrapper[4795]: I1129 07:46:51.239198 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhph7\" (UniqueName: \"kubernetes.io/projected/04596222-6779-478e-96cd-3aa99a923aa4-kube-api-access-mhph7\") on node \"crc\" DevicePath \"\"" Nov 29 07:46:51 crc kubenswrapper[4795]: I1129 07:46:51.239207 4795 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/04596222-6779-478e-96cd-3aa99a923aa4-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 29 07:46:51 crc kubenswrapper[4795]: I1129 07:46:51.239214 4795 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/04596222-6779-478e-96cd-3aa99a923aa4-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:46:51 crc kubenswrapper[4795]: I1129 07:46:51.541481 4795 generic.go:334] "Generic (PLEG): container finished" podID="04596222-6779-478e-96cd-3aa99a923aa4" containerID="6749a4f3e8655d1adec49be5b226c66b6661df08001a84fb4b426da7f21249e9" exitCode=0 Nov 29 07:46:51 crc kubenswrapper[4795]: I1129 07:46:51.541567 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" Nov 29 07:46:51 crc kubenswrapper[4795]: I1129 07:46:51.541748 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" event={"ID":"04596222-6779-478e-96cd-3aa99a923aa4","Type":"ContainerDied","Data":"6749a4f3e8655d1adec49be5b226c66b6661df08001a84fb4b426da7f21249e9"} Nov 29 07:46:51 crc kubenswrapper[4795]: I1129 07:46:51.541888 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lm9wt" event={"ID":"04596222-6779-478e-96cd-3aa99a923aa4","Type":"ContainerDied","Data":"6a6c60f71e6c71d575c2c0201236875b99a1dcdcdecd104c4af5914393d78bb3"} Nov 29 07:46:51 crc kubenswrapper[4795]: I1129 07:46:51.541924 4795 scope.go:117] "RemoveContainer" containerID="6749a4f3e8655d1adec49be5b226c66b6661df08001a84fb4b426da7f21249e9" Nov 29 07:46:51 crc kubenswrapper[4795]: I1129 07:46:51.578148 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lm9wt"] Nov 29 07:46:51 crc kubenswrapper[4795]: I1129 07:46:51.580364 4795 scope.go:117] "RemoveContainer" containerID="6749a4f3e8655d1adec49be5b226c66b6661df08001a84fb4b426da7f21249e9" Nov 29 07:46:51 crc kubenswrapper[4795]: E1129 07:46:51.580752 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6749a4f3e8655d1adec49be5b226c66b6661df08001a84fb4b426da7f21249e9\": container with ID starting with 6749a4f3e8655d1adec49be5b226c66b6661df08001a84fb4b426da7f21249e9 not found: ID does not exist" containerID="6749a4f3e8655d1adec49be5b226c66b6661df08001a84fb4b426da7f21249e9" Nov 29 07:46:51 crc kubenswrapper[4795]: I1129 07:46:51.580784 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6749a4f3e8655d1adec49be5b226c66b6661df08001a84fb4b426da7f21249e9"} err="failed to 
get container status \"6749a4f3e8655d1adec49be5b226c66b6661df08001a84fb4b426da7f21249e9\": rpc error: code = NotFound desc = could not find container \"6749a4f3e8655d1adec49be5b226c66b6661df08001a84fb4b426da7f21249e9\": container with ID starting with 6749a4f3e8655d1adec49be5b226c66b6661df08001a84fb4b426da7f21249e9 not found: ID does not exist" Nov 29 07:46:51 crc kubenswrapper[4795]: I1129 07:46:51.586741 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lm9wt"] Nov 29 07:46:52 crc kubenswrapper[4795]: I1129 07:46:52.284078 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04596222-6779-478e-96cd-3aa99a923aa4" path="/var/lib/kubelet/pods/04596222-6779-478e-96cd-3aa99a923aa4/volumes" Nov 29 07:47:02 crc kubenswrapper[4795]: I1129 07:47:02.519177 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-w4g7w" podUID="9b02347e-bd6a-4a77-97d4-d8276d1b6167" containerName="console" containerID="cri-o://988ef1e2a4db717859d2cc1f5d2af7487b8891115d030bac5aa1a59b869dbf18" gracePeriod=15 Nov 29 07:47:07 crc kubenswrapper[4795]: I1129 07:47:07.092542 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-6df7d947f5-r6rgs" Nov 29 07:47:07 crc kubenswrapper[4795]: I1129 07:47:07.108110 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-6df7d947f5-r6rgs" Nov 29 07:47:07 crc kubenswrapper[4795]: I1129 07:47:07.751706 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-w4g7w_9b02347e-bd6a-4a77-97d4-d8276d1b6167/console/0.log" Nov 29 07:47:07 crc kubenswrapper[4795]: I1129 07:47:07.751774 4795 generic.go:334] "Generic (PLEG): container finished" podID="9b02347e-bd6a-4a77-97d4-d8276d1b6167" containerID="988ef1e2a4db717859d2cc1f5d2af7487b8891115d030bac5aa1a59b869dbf18" 
exitCode=2 Nov 29 07:47:07 crc kubenswrapper[4795]: I1129 07:47:07.751815 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-w4g7w" event={"ID":"9b02347e-bd6a-4a77-97d4-d8276d1b6167","Type":"ContainerDied","Data":"988ef1e2a4db717859d2cc1f5d2af7487b8891115d030bac5aa1a59b869dbf18"} Nov 29 07:47:08 crc kubenswrapper[4795]: I1129 07:47:08.122781 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-w4g7w_9b02347e-bd6a-4a77-97d4-d8276d1b6167/console/0.log" Nov 29 07:47:08 crc kubenswrapper[4795]: I1129 07:47:08.123076 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-w4g7w" Nov 29 07:47:08 crc kubenswrapper[4795]: I1129 07:47:08.308091 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-886lh\" (UniqueName: \"kubernetes.io/projected/9b02347e-bd6a-4a77-97d4-d8276d1b6167-kube-api-access-886lh\") pod \"9b02347e-bd6a-4a77-97d4-d8276d1b6167\" (UID: \"9b02347e-bd6a-4a77-97d4-d8276d1b6167\") " Nov 29 07:47:08 crc kubenswrapper[4795]: I1129 07:47:08.308202 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9b02347e-bd6a-4a77-97d4-d8276d1b6167-oauth-serving-cert\") pod \"9b02347e-bd6a-4a77-97d4-d8276d1b6167\" (UID: \"9b02347e-bd6a-4a77-97d4-d8276d1b6167\") " Nov 29 07:47:08 crc kubenswrapper[4795]: I1129 07:47:08.308226 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9b02347e-bd6a-4a77-97d4-d8276d1b6167-console-serving-cert\") pod \"9b02347e-bd6a-4a77-97d4-d8276d1b6167\" (UID: \"9b02347e-bd6a-4a77-97d4-d8276d1b6167\") " Nov 29 07:47:08 crc kubenswrapper[4795]: I1129 07:47:08.308281 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b02347e-bd6a-4a77-97d4-d8276d1b6167-trusted-ca-bundle\") pod \"9b02347e-bd6a-4a77-97d4-d8276d1b6167\" (UID: \"9b02347e-bd6a-4a77-97d4-d8276d1b6167\") " Nov 29 07:47:08 crc kubenswrapper[4795]: I1129 07:47:08.308516 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9b02347e-bd6a-4a77-97d4-d8276d1b6167-service-ca\") pod \"9b02347e-bd6a-4a77-97d4-d8276d1b6167\" (UID: \"9b02347e-bd6a-4a77-97d4-d8276d1b6167\") " Nov 29 07:47:08 crc kubenswrapper[4795]: I1129 07:47:08.309252 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b02347e-bd6a-4a77-97d4-d8276d1b6167-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "9b02347e-bd6a-4a77-97d4-d8276d1b6167" (UID: "9b02347e-bd6a-4a77-97d4-d8276d1b6167"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:47:08 crc kubenswrapper[4795]: I1129 07:47:08.309307 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b02347e-bd6a-4a77-97d4-d8276d1b6167-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "9b02347e-bd6a-4a77-97d4-d8276d1b6167" (UID: "9b02347e-bd6a-4a77-97d4-d8276d1b6167"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:47:08 crc kubenswrapper[4795]: I1129 07:47:08.309296 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b02347e-bd6a-4a77-97d4-d8276d1b6167-service-ca" (OuterVolumeSpecName: "service-ca") pod "9b02347e-bd6a-4a77-97d4-d8276d1b6167" (UID: "9b02347e-bd6a-4a77-97d4-d8276d1b6167"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:47:08 crc kubenswrapper[4795]: I1129 07:47:08.310968 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9b02347e-bd6a-4a77-97d4-d8276d1b6167-console-oauth-config\") pod \"9b02347e-bd6a-4a77-97d4-d8276d1b6167\" (UID: \"9b02347e-bd6a-4a77-97d4-d8276d1b6167\") " Nov 29 07:47:08 crc kubenswrapper[4795]: I1129 07:47:08.311037 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9b02347e-bd6a-4a77-97d4-d8276d1b6167-console-config\") pod \"9b02347e-bd6a-4a77-97d4-d8276d1b6167\" (UID: \"9b02347e-bd6a-4a77-97d4-d8276d1b6167\") " Nov 29 07:47:08 crc kubenswrapper[4795]: I1129 07:47:08.311353 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b02347e-bd6a-4a77-97d4-d8276d1b6167-console-config" (OuterVolumeSpecName: "console-config") pod "9b02347e-bd6a-4a77-97d4-d8276d1b6167" (UID: "9b02347e-bd6a-4a77-97d4-d8276d1b6167"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:47:08 crc kubenswrapper[4795]: I1129 07:47:08.312858 4795 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9b02347e-bd6a-4a77-97d4-d8276d1b6167-console-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:47:08 crc kubenswrapper[4795]: I1129 07:47:08.312898 4795 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9b02347e-bd6a-4a77-97d4-d8276d1b6167-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:47:08 crc kubenswrapper[4795]: I1129 07:47:08.312912 4795 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b02347e-bd6a-4a77-97d4-d8276d1b6167-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:47:08 crc kubenswrapper[4795]: I1129 07:47:08.312927 4795 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9b02347e-bd6a-4a77-97d4-d8276d1b6167-service-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:47:08 crc kubenswrapper[4795]: I1129 07:47:08.314177 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b02347e-bd6a-4a77-97d4-d8276d1b6167-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "9b02347e-bd6a-4a77-97d4-d8276d1b6167" (UID: "9b02347e-bd6a-4a77-97d4-d8276d1b6167"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:47:08 crc kubenswrapper[4795]: I1129 07:47:08.314533 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b02347e-bd6a-4a77-97d4-d8276d1b6167-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "9b02347e-bd6a-4a77-97d4-d8276d1b6167" (UID: "9b02347e-bd6a-4a77-97d4-d8276d1b6167"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:47:08 crc kubenswrapper[4795]: I1129 07:47:08.314758 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b02347e-bd6a-4a77-97d4-d8276d1b6167-kube-api-access-886lh" (OuterVolumeSpecName: "kube-api-access-886lh") pod "9b02347e-bd6a-4a77-97d4-d8276d1b6167" (UID: "9b02347e-bd6a-4a77-97d4-d8276d1b6167"). InnerVolumeSpecName "kube-api-access-886lh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:47:08 crc kubenswrapper[4795]: I1129 07:47:08.413315 4795 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9b02347e-bd6a-4a77-97d4-d8276d1b6167-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:47:08 crc kubenswrapper[4795]: I1129 07:47:08.413357 4795 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9b02347e-bd6a-4a77-97d4-d8276d1b6167-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:47:08 crc kubenswrapper[4795]: I1129 07:47:08.413369 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-886lh\" (UniqueName: \"kubernetes.io/projected/9b02347e-bd6a-4a77-97d4-d8276d1b6167-kube-api-access-886lh\") on node \"crc\" DevicePath \"\"" Nov 29 07:47:08 crc kubenswrapper[4795]: I1129 07:47:08.759474 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-w4g7w_9b02347e-bd6a-4a77-97d4-d8276d1b6167/console/0.log" Nov 29 07:47:08 crc kubenswrapper[4795]: I1129 07:47:08.761150 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-w4g7w" Nov 29 07:47:08 crc kubenswrapper[4795]: I1129 07:47:08.761198 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-w4g7w" event={"ID":"9b02347e-bd6a-4a77-97d4-d8276d1b6167","Type":"ContainerDied","Data":"abc0c8c629c54a627a9a9a6cc91813a8848d0844d2ec5afbb925ebc66a9b34bb"} Nov 29 07:47:08 crc kubenswrapper[4795]: I1129 07:47:08.761253 4795 scope.go:117] "RemoveContainer" containerID="988ef1e2a4db717859d2cc1f5d2af7487b8891115d030bac5aa1a59b869dbf18" Nov 29 07:47:08 crc kubenswrapper[4795]: I1129 07:47:08.805463 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-w4g7w"] Nov 29 07:47:08 crc kubenswrapper[4795]: I1129 07:47:08.808247 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-w4g7w"] Nov 29 07:47:10 crc kubenswrapper[4795]: I1129 07:47:10.283976 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b02347e-bd6a-4a77-97d4-d8276d1b6167" path="/var/lib/kubelet/pods/9b02347e-bd6a-4a77-97d4-d8276d1b6167/volumes" Nov 29 07:47:28 crc kubenswrapper[4795]: I1129 07:47:28.096404 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:47:28 crc kubenswrapper[4795]: I1129 07:47:28.128703 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:47:28 crc kubenswrapper[4795]: I1129 07:47:28.909161 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Nov 29 07:47:54 crc kubenswrapper[4795]: I1129 07:47:54.431339 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6bc6b4bc97-zksdr"] Nov 29 07:47:54 crc kubenswrapper[4795]: E1129 07:47:54.432284 4795 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9b02347e-bd6a-4a77-97d4-d8276d1b6167" containerName="console" Nov 29 07:47:54 crc kubenswrapper[4795]: I1129 07:47:54.432298 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b02347e-bd6a-4a77-97d4-d8276d1b6167" containerName="console" Nov 29 07:47:54 crc kubenswrapper[4795]: E1129 07:47:54.432308 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04596222-6779-478e-96cd-3aa99a923aa4" containerName="registry" Nov 29 07:47:54 crc kubenswrapper[4795]: I1129 07:47:54.432314 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="04596222-6779-478e-96cd-3aa99a923aa4" containerName="registry" Nov 29 07:47:54 crc kubenswrapper[4795]: I1129 07:47:54.432422 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b02347e-bd6a-4a77-97d4-d8276d1b6167" containerName="console" Nov 29 07:47:54 crc kubenswrapper[4795]: I1129 07:47:54.432436 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="04596222-6779-478e-96cd-3aa99a923aa4" containerName="registry" Nov 29 07:47:54 crc kubenswrapper[4795]: I1129 07:47:54.432829 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6bc6b4bc97-zksdr" Nov 29 07:47:54 crc kubenswrapper[4795]: I1129 07:47:54.446646 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6bc6b4bc97-zksdr"] Nov 29 07:47:54 crc kubenswrapper[4795]: I1129 07:47:54.561561 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ebfadff4-591c-4aec-af45-46b370c3a74d-console-oauth-config\") pod \"console-6bc6b4bc97-zksdr\" (UID: \"ebfadff4-591c-4aec-af45-46b370c3a74d\") " pod="openshift-console/console-6bc6b4bc97-zksdr" Nov 29 07:47:54 crc kubenswrapper[4795]: I1129 07:47:54.561632 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ebfadff4-591c-4aec-af45-46b370c3a74d-console-config\") pod \"console-6bc6b4bc97-zksdr\" (UID: \"ebfadff4-591c-4aec-af45-46b370c3a74d\") " pod="openshift-console/console-6bc6b4bc97-zksdr" Nov 29 07:47:54 crc kubenswrapper[4795]: I1129 07:47:54.561663 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebfadff4-591c-4aec-af45-46b370c3a74d-trusted-ca-bundle\") pod \"console-6bc6b4bc97-zksdr\" (UID: \"ebfadff4-591c-4aec-af45-46b370c3a74d\") " pod="openshift-console/console-6bc6b4bc97-zksdr" Nov 29 07:47:54 crc kubenswrapper[4795]: I1129 07:47:54.561689 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ebfadff4-591c-4aec-af45-46b370c3a74d-oauth-serving-cert\") pod \"console-6bc6b4bc97-zksdr\" (UID: \"ebfadff4-591c-4aec-af45-46b370c3a74d\") " pod="openshift-console/console-6bc6b4bc97-zksdr" Nov 29 07:47:54 crc kubenswrapper[4795]: I1129 07:47:54.561961 4795 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw2bm\" (UniqueName: \"kubernetes.io/projected/ebfadff4-591c-4aec-af45-46b370c3a74d-kube-api-access-zw2bm\") pod \"console-6bc6b4bc97-zksdr\" (UID: \"ebfadff4-591c-4aec-af45-46b370c3a74d\") " pod="openshift-console/console-6bc6b4bc97-zksdr" Nov 29 07:47:54 crc kubenswrapper[4795]: I1129 07:47:54.562123 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ebfadff4-591c-4aec-af45-46b370c3a74d-service-ca\") pod \"console-6bc6b4bc97-zksdr\" (UID: \"ebfadff4-591c-4aec-af45-46b370c3a74d\") " pod="openshift-console/console-6bc6b4bc97-zksdr" Nov 29 07:47:54 crc kubenswrapper[4795]: I1129 07:47:54.562248 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ebfadff4-591c-4aec-af45-46b370c3a74d-console-serving-cert\") pod \"console-6bc6b4bc97-zksdr\" (UID: \"ebfadff4-591c-4aec-af45-46b370c3a74d\") " pod="openshift-console/console-6bc6b4bc97-zksdr" Nov 29 07:47:54 crc kubenswrapper[4795]: I1129 07:47:54.663890 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ebfadff4-591c-4aec-af45-46b370c3a74d-service-ca\") pod \"console-6bc6b4bc97-zksdr\" (UID: \"ebfadff4-591c-4aec-af45-46b370c3a74d\") " pod="openshift-console/console-6bc6b4bc97-zksdr" Nov 29 07:47:54 crc kubenswrapper[4795]: I1129 07:47:54.663962 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ebfadff4-591c-4aec-af45-46b370c3a74d-console-serving-cert\") pod \"console-6bc6b4bc97-zksdr\" (UID: \"ebfadff4-591c-4aec-af45-46b370c3a74d\") " pod="openshift-console/console-6bc6b4bc97-zksdr" Nov 29 07:47:54 crc kubenswrapper[4795]: I1129 07:47:54.664026 4795 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ebfadff4-591c-4aec-af45-46b370c3a74d-console-oauth-config\") pod \"console-6bc6b4bc97-zksdr\" (UID: \"ebfadff4-591c-4aec-af45-46b370c3a74d\") " pod="openshift-console/console-6bc6b4bc97-zksdr" Nov 29 07:47:54 crc kubenswrapper[4795]: I1129 07:47:54.664052 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ebfadff4-591c-4aec-af45-46b370c3a74d-console-config\") pod \"console-6bc6b4bc97-zksdr\" (UID: \"ebfadff4-591c-4aec-af45-46b370c3a74d\") " pod="openshift-console/console-6bc6b4bc97-zksdr" Nov 29 07:47:54 crc kubenswrapper[4795]: I1129 07:47:54.665100 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebfadff4-591c-4aec-af45-46b370c3a74d-trusted-ca-bundle\") pod \"console-6bc6b4bc97-zksdr\" (UID: \"ebfadff4-591c-4aec-af45-46b370c3a74d\") " pod="openshift-console/console-6bc6b4bc97-zksdr" Nov 29 07:47:54 crc kubenswrapper[4795]: I1129 07:47:54.665178 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ebfadff4-591c-4aec-af45-46b370c3a74d-console-config\") pod \"console-6bc6b4bc97-zksdr\" (UID: \"ebfadff4-591c-4aec-af45-46b370c3a74d\") " pod="openshift-console/console-6bc6b4bc97-zksdr" Nov 29 07:47:54 crc kubenswrapper[4795]: I1129 07:47:54.665186 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ebfadff4-591c-4aec-af45-46b370c3a74d-oauth-serving-cert\") pod \"console-6bc6b4bc97-zksdr\" (UID: \"ebfadff4-591c-4aec-af45-46b370c3a74d\") " pod="openshift-console/console-6bc6b4bc97-zksdr" Nov 29 07:47:54 crc kubenswrapper[4795]: I1129 07:47:54.665276 4795 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zw2bm\" (UniqueName: \"kubernetes.io/projected/ebfadff4-591c-4aec-af45-46b370c3a74d-kube-api-access-zw2bm\") pod \"console-6bc6b4bc97-zksdr\" (UID: \"ebfadff4-591c-4aec-af45-46b370c3a74d\") " pod="openshift-console/console-6bc6b4bc97-zksdr" Nov 29 07:47:54 crc kubenswrapper[4795]: I1129 07:47:54.665697 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ebfadff4-591c-4aec-af45-46b370c3a74d-service-ca\") pod \"console-6bc6b4bc97-zksdr\" (UID: \"ebfadff4-591c-4aec-af45-46b370c3a74d\") " pod="openshift-console/console-6bc6b4bc97-zksdr" Nov 29 07:47:54 crc kubenswrapper[4795]: I1129 07:47:54.665760 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebfadff4-591c-4aec-af45-46b370c3a74d-trusted-ca-bundle\") pod \"console-6bc6b4bc97-zksdr\" (UID: \"ebfadff4-591c-4aec-af45-46b370c3a74d\") " pod="openshift-console/console-6bc6b4bc97-zksdr" Nov 29 07:47:54 crc kubenswrapper[4795]: I1129 07:47:54.666119 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ebfadff4-591c-4aec-af45-46b370c3a74d-oauth-serving-cert\") pod \"console-6bc6b4bc97-zksdr\" (UID: \"ebfadff4-591c-4aec-af45-46b370c3a74d\") " pod="openshift-console/console-6bc6b4bc97-zksdr" Nov 29 07:47:54 crc kubenswrapper[4795]: I1129 07:47:54.672851 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ebfadff4-591c-4aec-af45-46b370c3a74d-console-oauth-config\") pod \"console-6bc6b4bc97-zksdr\" (UID: \"ebfadff4-591c-4aec-af45-46b370c3a74d\") " pod="openshift-console/console-6bc6b4bc97-zksdr" Nov 29 07:47:54 crc kubenswrapper[4795]: I1129 07:47:54.677145 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ebfadff4-591c-4aec-af45-46b370c3a74d-console-serving-cert\") pod \"console-6bc6b4bc97-zksdr\" (UID: \"ebfadff4-591c-4aec-af45-46b370c3a74d\") " pod="openshift-console/console-6bc6b4bc97-zksdr" Nov 29 07:47:54 crc kubenswrapper[4795]: I1129 07:47:54.681949 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw2bm\" (UniqueName: \"kubernetes.io/projected/ebfadff4-591c-4aec-af45-46b370c3a74d-kube-api-access-zw2bm\") pod \"console-6bc6b4bc97-zksdr\" (UID: \"ebfadff4-591c-4aec-af45-46b370c3a74d\") " pod="openshift-console/console-6bc6b4bc97-zksdr" Nov 29 07:47:54 crc kubenswrapper[4795]: I1129 07:47:54.753329 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6bc6b4bc97-zksdr" Nov 29 07:47:55 crc kubenswrapper[4795]: I1129 07:47:55.160202 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6bc6b4bc97-zksdr"] Nov 29 07:47:55 crc kubenswrapper[4795]: W1129 07:47:55.169043 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebfadff4_591c_4aec_af45_46b370c3a74d.slice/crio-86b79c2bca8de22f2a6b44b5e6759cbc59ad24819fba06f5b30b7a870248085d WatchSource:0}: Error finding container 86b79c2bca8de22f2a6b44b5e6759cbc59ad24819fba06f5b30b7a870248085d: Status 404 returned error can't find the container with id 86b79c2bca8de22f2a6b44b5e6759cbc59ad24819fba06f5b30b7a870248085d Nov 29 07:47:56 crc kubenswrapper[4795]: I1129 07:47:56.041855 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6bc6b4bc97-zksdr" event={"ID":"ebfadff4-591c-4aec-af45-46b370c3a74d","Type":"ContainerStarted","Data":"36678dd1ca76be15bac24c9741e6a175ea8b8bee999bfe79cab64a1435601e38"} Nov 29 07:47:56 crc kubenswrapper[4795]: I1129 07:47:56.042223 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-6bc6b4bc97-zksdr" event={"ID":"ebfadff4-591c-4aec-af45-46b370c3a74d","Type":"ContainerStarted","Data":"86b79c2bca8de22f2a6b44b5e6759cbc59ad24819fba06f5b30b7a870248085d"} Nov 29 07:47:56 crc kubenswrapper[4795]: I1129 07:47:56.064151 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6bc6b4bc97-zksdr" podStartSLOduration=2.064125253 podStartE2EDuration="2.064125253s" podCreationTimestamp="2025-11-29 07:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:47:56.057942804 +0000 UTC m=+522.033518644" watchObservedRunningTime="2025-11-29 07:47:56.064125253 +0000 UTC m=+522.039701063" Nov 29 07:48:04 crc kubenswrapper[4795]: I1129 07:48:04.754628 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6bc6b4bc97-zksdr" Nov 29 07:48:04 crc kubenswrapper[4795]: I1129 07:48:04.755077 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6bc6b4bc97-zksdr" Nov 29 07:48:04 crc kubenswrapper[4795]: I1129 07:48:04.761764 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6bc6b4bc97-zksdr" Nov 29 07:48:05 crc kubenswrapper[4795]: I1129 07:48:05.097703 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6bc6b4bc97-zksdr" Nov 29 07:48:05 crc kubenswrapper[4795]: I1129 07:48:05.174694 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-559df6b64-rl42v"] Nov 29 07:48:30 crc kubenswrapper[4795]: I1129 07:48:30.214122 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-559df6b64-rl42v" podUID="d07c7092-6e69-4ca2-9c12-02991c2cda50" containerName="console" 
containerID="cri-o://cd47aafb3c4ba01b183e8adf20e80ae24359c987ac92c4fc1caa7648ce256055" gracePeriod=15 Nov 29 07:48:30 crc kubenswrapper[4795]: I1129 07:48:30.542153 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-559df6b64-rl42v_d07c7092-6e69-4ca2-9c12-02991c2cda50/console/0.log" Nov 29 07:48:30 crc kubenswrapper[4795]: I1129 07:48:30.542461 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-559df6b64-rl42v" Nov 29 07:48:30 crc kubenswrapper[4795]: I1129 07:48:30.726009 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d07c7092-6e69-4ca2-9c12-02991c2cda50-console-config\") pod \"d07c7092-6e69-4ca2-9c12-02991c2cda50\" (UID: \"d07c7092-6e69-4ca2-9c12-02991c2cda50\") " Nov 29 07:48:30 crc kubenswrapper[4795]: I1129 07:48:30.726387 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d07c7092-6e69-4ca2-9c12-02991c2cda50-console-serving-cert\") pod \"d07c7092-6e69-4ca2-9c12-02991c2cda50\" (UID: \"d07c7092-6e69-4ca2-9c12-02991c2cda50\") " Nov 29 07:48:30 crc kubenswrapper[4795]: I1129 07:48:30.726581 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d07c7092-6e69-4ca2-9c12-02991c2cda50-console-oauth-config\") pod \"d07c7092-6e69-4ca2-9c12-02991c2cda50\" (UID: \"d07c7092-6e69-4ca2-9c12-02991c2cda50\") " Nov 29 07:48:30 crc kubenswrapper[4795]: I1129 07:48:30.726792 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d07c7092-6e69-4ca2-9c12-02991c2cda50-service-ca\") pod \"d07c7092-6e69-4ca2-9c12-02991c2cda50\" (UID: \"d07c7092-6e69-4ca2-9c12-02991c2cda50\") " Nov 29 07:48:30 crc kubenswrapper[4795]: I1129 
07:48:30.726875 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlnxw\" (UniqueName: \"kubernetes.io/projected/d07c7092-6e69-4ca2-9c12-02991c2cda50-kube-api-access-qlnxw\") pod \"d07c7092-6e69-4ca2-9c12-02991c2cda50\" (UID: \"d07c7092-6e69-4ca2-9c12-02991c2cda50\") " Nov 29 07:48:30 crc kubenswrapper[4795]: I1129 07:48:30.726938 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d07c7092-6e69-4ca2-9c12-02991c2cda50-trusted-ca-bundle\") pod \"d07c7092-6e69-4ca2-9c12-02991c2cda50\" (UID: \"d07c7092-6e69-4ca2-9c12-02991c2cda50\") " Nov 29 07:48:30 crc kubenswrapper[4795]: I1129 07:48:30.727031 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d07c7092-6e69-4ca2-9c12-02991c2cda50-oauth-serving-cert\") pod \"d07c7092-6e69-4ca2-9c12-02991c2cda50\" (UID: \"d07c7092-6e69-4ca2-9c12-02991c2cda50\") " Nov 29 07:48:30 crc kubenswrapper[4795]: I1129 07:48:30.726925 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d07c7092-6e69-4ca2-9c12-02991c2cda50-console-config" (OuterVolumeSpecName: "console-config") pod "d07c7092-6e69-4ca2-9c12-02991c2cda50" (UID: "d07c7092-6e69-4ca2-9c12-02991c2cda50"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:48:30 crc kubenswrapper[4795]: I1129 07:48:30.727422 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d07c7092-6e69-4ca2-9c12-02991c2cda50-service-ca" (OuterVolumeSpecName: "service-ca") pod "d07c7092-6e69-4ca2-9c12-02991c2cda50" (UID: "d07c7092-6e69-4ca2-9c12-02991c2cda50"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:48:30 crc kubenswrapper[4795]: I1129 07:48:30.727622 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d07c7092-6e69-4ca2-9c12-02991c2cda50-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d07c7092-6e69-4ca2-9c12-02991c2cda50" (UID: "d07c7092-6e69-4ca2-9c12-02991c2cda50"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:48:30 crc kubenswrapper[4795]: I1129 07:48:30.727766 4795 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d07c7092-6e69-4ca2-9c12-02991c2cda50-console-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:48:30 crc kubenswrapper[4795]: I1129 07:48:30.728209 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d07c7092-6e69-4ca2-9c12-02991c2cda50-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "d07c7092-6e69-4ca2-9c12-02991c2cda50" (UID: "d07c7092-6e69-4ca2-9c12-02991c2cda50"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:48:30 crc kubenswrapper[4795]: I1129 07:48:30.732575 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d07c7092-6e69-4ca2-9c12-02991c2cda50-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "d07c7092-6e69-4ca2-9c12-02991c2cda50" (UID: "d07c7092-6e69-4ca2-9c12-02991c2cda50"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:48:30 crc kubenswrapper[4795]: I1129 07:48:30.732732 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d07c7092-6e69-4ca2-9c12-02991c2cda50-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "d07c7092-6e69-4ca2-9c12-02991c2cda50" (UID: "d07c7092-6e69-4ca2-9c12-02991c2cda50"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:48:30 crc kubenswrapper[4795]: I1129 07:48:30.732748 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d07c7092-6e69-4ca2-9c12-02991c2cda50-kube-api-access-qlnxw" (OuterVolumeSpecName: "kube-api-access-qlnxw") pod "d07c7092-6e69-4ca2-9c12-02991c2cda50" (UID: "d07c7092-6e69-4ca2-9c12-02991c2cda50"). InnerVolumeSpecName "kube-api-access-qlnxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:48:30 crc kubenswrapper[4795]: I1129 07:48:30.829798 4795 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d07c7092-6e69-4ca2-9c12-02991c2cda50-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:48:30 crc kubenswrapper[4795]: I1129 07:48:30.829872 4795 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d07c7092-6e69-4ca2-9c12-02991c2cda50-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:48:30 crc kubenswrapper[4795]: I1129 07:48:30.829891 4795 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d07c7092-6e69-4ca2-9c12-02991c2cda50-service-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:48:30 crc kubenswrapper[4795]: I1129 07:48:30.829905 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlnxw\" (UniqueName: 
\"kubernetes.io/projected/d07c7092-6e69-4ca2-9c12-02991c2cda50-kube-api-access-qlnxw\") on node \"crc\" DevicePath \"\"" Nov 29 07:48:30 crc kubenswrapper[4795]: I1129 07:48:30.829920 4795 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d07c7092-6e69-4ca2-9c12-02991c2cda50-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:48:30 crc kubenswrapper[4795]: I1129 07:48:30.829933 4795 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d07c7092-6e69-4ca2-9c12-02991c2cda50-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:48:31 crc kubenswrapper[4795]: I1129 07:48:31.277254 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-559df6b64-rl42v_d07c7092-6e69-4ca2-9c12-02991c2cda50/console/0.log" Nov 29 07:48:31 crc kubenswrapper[4795]: I1129 07:48:31.277308 4795 generic.go:334] "Generic (PLEG): container finished" podID="d07c7092-6e69-4ca2-9c12-02991c2cda50" containerID="cd47aafb3c4ba01b183e8adf20e80ae24359c987ac92c4fc1caa7648ce256055" exitCode=2 Nov 29 07:48:31 crc kubenswrapper[4795]: I1129 07:48:31.277337 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-559df6b64-rl42v" event={"ID":"d07c7092-6e69-4ca2-9c12-02991c2cda50","Type":"ContainerDied","Data":"cd47aafb3c4ba01b183e8adf20e80ae24359c987ac92c4fc1caa7648ce256055"} Nov 29 07:48:31 crc kubenswrapper[4795]: I1129 07:48:31.277368 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-559df6b64-rl42v" event={"ID":"d07c7092-6e69-4ca2-9c12-02991c2cda50","Type":"ContainerDied","Data":"4c1d0c1795e41d921e5f5fb65b43d295b4fdeacf4382ffbcb846f2efdc0f11ae"} Nov 29 07:48:31 crc kubenswrapper[4795]: I1129 07:48:31.277376 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-559df6b64-rl42v" Nov 29 07:48:31 crc kubenswrapper[4795]: I1129 07:48:31.277398 4795 scope.go:117] "RemoveContainer" containerID="cd47aafb3c4ba01b183e8adf20e80ae24359c987ac92c4fc1caa7648ce256055" Nov 29 07:48:31 crc kubenswrapper[4795]: I1129 07:48:31.296504 4795 scope.go:117] "RemoveContainer" containerID="cd47aafb3c4ba01b183e8adf20e80ae24359c987ac92c4fc1caa7648ce256055" Nov 29 07:48:31 crc kubenswrapper[4795]: E1129 07:48:31.299850 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd47aafb3c4ba01b183e8adf20e80ae24359c987ac92c4fc1caa7648ce256055\": container with ID starting with cd47aafb3c4ba01b183e8adf20e80ae24359c987ac92c4fc1caa7648ce256055 not found: ID does not exist" containerID="cd47aafb3c4ba01b183e8adf20e80ae24359c987ac92c4fc1caa7648ce256055" Nov 29 07:48:31 crc kubenswrapper[4795]: I1129 07:48:31.300047 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd47aafb3c4ba01b183e8adf20e80ae24359c987ac92c4fc1caa7648ce256055"} err="failed to get container status \"cd47aafb3c4ba01b183e8adf20e80ae24359c987ac92c4fc1caa7648ce256055\": rpc error: code = NotFound desc = could not find container \"cd47aafb3c4ba01b183e8adf20e80ae24359c987ac92c4fc1caa7648ce256055\": container with ID starting with cd47aafb3c4ba01b183e8adf20e80ae24359c987ac92c4fc1caa7648ce256055 not found: ID does not exist" Nov 29 07:48:31 crc kubenswrapper[4795]: I1129 07:48:31.306966 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-559df6b64-rl42v"] Nov 29 07:48:31 crc kubenswrapper[4795]: I1129 07:48:31.310803 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-559df6b64-rl42v"] Nov 29 07:48:32 crc kubenswrapper[4795]: I1129 07:48:32.283537 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d07c7092-6e69-4ca2-9c12-02991c2cda50" 
path="/var/lib/kubelet/pods/d07c7092-6e69-4ca2-9c12-02991c2cda50/volumes" Nov 29 07:49:11 crc kubenswrapper[4795]: I1129 07:49:11.941707 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:49:11 crc kubenswrapper[4795]: I1129 07:49:11.942554 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:49:41 crc kubenswrapper[4795]: I1129 07:49:41.941259 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:49:41 crc kubenswrapper[4795]: I1129 07:49:41.941998 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:50:11 crc kubenswrapper[4795]: I1129 07:50:11.941579 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:50:11 crc kubenswrapper[4795]: I1129 07:50:11.943829 4795 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:50:11 crc kubenswrapper[4795]: I1129 07:50:11.944001 4795 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" Nov 29 07:50:11 crc kubenswrapper[4795]: I1129 07:50:11.945227 4795 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dc32e1210618295b7a09888e63f3d2d9f6eb19cbef74c764b86be7c539c45ed2"} pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:50:11 crc kubenswrapper[4795]: I1129 07:50:11.945330 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" containerID="cri-o://dc32e1210618295b7a09888e63f3d2d9f6eb19cbef74c764b86be7c539c45ed2" gracePeriod=600 Nov 29 07:50:12 crc kubenswrapper[4795]: I1129 07:50:12.895219 4795 generic.go:334] "Generic (PLEG): container finished" podID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerID="dc32e1210618295b7a09888e63f3d2d9f6eb19cbef74c764b86be7c539c45ed2" exitCode=0 Nov 29 07:50:12 crc kubenswrapper[4795]: I1129 07:50:12.895265 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerDied","Data":"dc32e1210618295b7a09888e63f3d2d9f6eb19cbef74c764b86be7c539c45ed2"} Nov 29 07:50:12 crc kubenswrapper[4795]: I1129 
07:50:12.895300 4795 scope.go:117] "RemoveContainer" containerID="2126419e14a4a2775ea5be60a712196b4855482260536f930abe91252856634d" Nov 29 07:50:13 crc kubenswrapper[4795]: I1129 07:50:13.903734 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerStarted","Data":"50234306d145892f3e442365375a5d5625f50018f93ebd19c811e44bda9ed58d"} Nov 29 07:51:53 crc kubenswrapper[4795]: I1129 07:51:53.678410 4795 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 29 07:52:04 crc kubenswrapper[4795]: I1129 07:52:04.579846 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr"] Nov 29 07:52:04 crc kubenswrapper[4795]: E1129 07:52:04.580977 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d07c7092-6e69-4ca2-9c12-02991c2cda50" containerName="console" Nov 29 07:52:04 crc kubenswrapper[4795]: I1129 07:52:04.580992 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="d07c7092-6e69-4ca2-9c12-02991c2cda50" containerName="console" Nov 29 07:52:04 crc kubenswrapper[4795]: I1129 07:52:04.581158 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="d07c7092-6e69-4ca2-9c12-02991c2cda50" containerName="console" Nov 29 07:52:04 crc kubenswrapper[4795]: I1129 07:52:04.582073 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr" Nov 29 07:52:04 crc kubenswrapper[4795]: I1129 07:52:04.584570 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 29 07:52:04 crc kubenswrapper[4795]: I1129 07:52:04.599069 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr"] Nov 29 07:52:04 crc kubenswrapper[4795]: I1129 07:52:04.686373 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnwn2\" (UniqueName: \"kubernetes.io/projected/45700729-39c7-4828-81fd-3763836d1dbe-kube-api-access-jnwn2\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr\" (UID: \"45700729-39c7-4828-81fd-3763836d1dbe\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr" Nov 29 07:52:04 crc kubenswrapper[4795]: I1129 07:52:04.686432 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/45700729-39c7-4828-81fd-3763836d1dbe-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr\" (UID: \"45700729-39c7-4828-81fd-3763836d1dbe\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr" Nov 29 07:52:04 crc kubenswrapper[4795]: I1129 07:52:04.686452 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/45700729-39c7-4828-81fd-3763836d1dbe-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr\" (UID: \"45700729-39c7-4828-81fd-3763836d1dbe\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr" Nov 29 07:52:04 crc kubenswrapper[4795]: 
I1129 07:52:04.787833 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnwn2\" (UniqueName: \"kubernetes.io/projected/45700729-39c7-4828-81fd-3763836d1dbe-kube-api-access-jnwn2\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr\" (UID: \"45700729-39c7-4828-81fd-3763836d1dbe\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr" Nov 29 07:52:04 crc kubenswrapper[4795]: I1129 07:52:04.787944 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/45700729-39c7-4828-81fd-3763836d1dbe-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr\" (UID: \"45700729-39c7-4828-81fd-3763836d1dbe\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr" Nov 29 07:52:04 crc kubenswrapper[4795]: I1129 07:52:04.787994 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/45700729-39c7-4828-81fd-3763836d1dbe-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr\" (UID: \"45700729-39c7-4828-81fd-3763836d1dbe\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr" Nov 29 07:52:04 crc kubenswrapper[4795]: I1129 07:52:04.788710 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/45700729-39c7-4828-81fd-3763836d1dbe-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr\" (UID: \"45700729-39c7-4828-81fd-3763836d1dbe\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr" Nov 29 07:52:04 crc kubenswrapper[4795]: I1129 07:52:04.788800 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/45700729-39c7-4828-81fd-3763836d1dbe-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr\" (UID: \"45700729-39c7-4828-81fd-3763836d1dbe\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr" Nov 29 07:52:04 crc kubenswrapper[4795]: I1129 07:52:04.807969 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnwn2\" (UniqueName: \"kubernetes.io/projected/45700729-39c7-4828-81fd-3763836d1dbe-kube-api-access-jnwn2\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr\" (UID: \"45700729-39c7-4828-81fd-3763836d1dbe\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr" Nov 29 07:52:04 crc kubenswrapper[4795]: I1129 07:52:04.913715 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr" Nov 29 07:52:05 crc kubenswrapper[4795]: I1129 07:52:05.104868 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr"] Nov 29 07:52:05 crc kubenswrapper[4795]: I1129 07:52:05.590983 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr" event={"ID":"45700729-39c7-4828-81fd-3763836d1dbe","Type":"ContainerStarted","Data":"2d3367ae066cb1b89fb138c66b90e380cbd431df0373471dcc0da56c32460dc6"} Nov 29 07:52:05 crc kubenswrapper[4795]: I1129 07:52:05.591070 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr" event={"ID":"45700729-39c7-4828-81fd-3763836d1dbe","Type":"ContainerStarted","Data":"d45c46a10ed982fd463ba2ad0894917ebb3223d299e0a68ddbb4b42216f3fdee"} Nov 29 07:52:06 crc kubenswrapper[4795]: I1129 07:52:06.598953 4795 
generic.go:334] "Generic (PLEG): container finished" podID="45700729-39c7-4828-81fd-3763836d1dbe" containerID="2d3367ae066cb1b89fb138c66b90e380cbd431df0373471dcc0da56c32460dc6" exitCode=0 Nov 29 07:52:06 crc kubenswrapper[4795]: I1129 07:52:06.598991 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr" event={"ID":"45700729-39c7-4828-81fd-3763836d1dbe","Type":"ContainerDied","Data":"2d3367ae066cb1b89fb138c66b90e380cbd431df0373471dcc0da56c32460dc6"} Nov 29 07:52:06 crc kubenswrapper[4795]: I1129 07:52:06.601357 4795 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 07:52:06 crc kubenswrapper[4795]: I1129 07:52:06.925711 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2c2ml"] Nov 29 07:52:06 crc kubenswrapper[4795]: I1129 07:52:06.927084 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2c2ml" Nov 29 07:52:06 crc kubenswrapper[4795]: I1129 07:52:06.943658 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2c2ml"] Nov 29 07:52:07 crc kubenswrapper[4795]: I1129 07:52:07.022362 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2267270-ccf6-4757-94c0-f7936459e507-catalog-content\") pod \"redhat-operators-2c2ml\" (UID: \"d2267270-ccf6-4757-94c0-f7936459e507\") " pod="openshift-marketplace/redhat-operators-2c2ml" Nov 29 07:52:07 crc kubenswrapper[4795]: I1129 07:52:07.022431 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntmq2\" (UniqueName: \"kubernetes.io/projected/d2267270-ccf6-4757-94c0-f7936459e507-kube-api-access-ntmq2\") pod \"redhat-operators-2c2ml\" (UID: 
\"d2267270-ccf6-4757-94c0-f7936459e507\") " pod="openshift-marketplace/redhat-operators-2c2ml" Nov 29 07:52:07 crc kubenswrapper[4795]: I1129 07:52:07.022757 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2267270-ccf6-4757-94c0-f7936459e507-utilities\") pod \"redhat-operators-2c2ml\" (UID: \"d2267270-ccf6-4757-94c0-f7936459e507\") " pod="openshift-marketplace/redhat-operators-2c2ml" Nov 29 07:52:07 crc kubenswrapper[4795]: I1129 07:52:07.124567 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2267270-ccf6-4757-94c0-f7936459e507-utilities\") pod \"redhat-operators-2c2ml\" (UID: \"d2267270-ccf6-4757-94c0-f7936459e507\") " pod="openshift-marketplace/redhat-operators-2c2ml" Nov 29 07:52:07 crc kubenswrapper[4795]: I1129 07:52:07.124651 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2267270-ccf6-4757-94c0-f7936459e507-catalog-content\") pod \"redhat-operators-2c2ml\" (UID: \"d2267270-ccf6-4757-94c0-f7936459e507\") " pod="openshift-marketplace/redhat-operators-2c2ml" Nov 29 07:52:07 crc kubenswrapper[4795]: I1129 07:52:07.124708 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntmq2\" (UniqueName: \"kubernetes.io/projected/d2267270-ccf6-4757-94c0-f7936459e507-kube-api-access-ntmq2\") pod \"redhat-operators-2c2ml\" (UID: \"d2267270-ccf6-4757-94c0-f7936459e507\") " pod="openshift-marketplace/redhat-operators-2c2ml" Nov 29 07:52:07 crc kubenswrapper[4795]: I1129 07:52:07.125259 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2267270-ccf6-4757-94c0-f7936459e507-utilities\") pod \"redhat-operators-2c2ml\" (UID: \"d2267270-ccf6-4757-94c0-f7936459e507\") " 
pod="openshift-marketplace/redhat-operators-2c2ml" Nov 29 07:52:07 crc kubenswrapper[4795]: I1129 07:52:07.125304 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2267270-ccf6-4757-94c0-f7936459e507-catalog-content\") pod \"redhat-operators-2c2ml\" (UID: \"d2267270-ccf6-4757-94c0-f7936459e507\") " pod="openshift-marketplace/redhat-operators-2c2ml" Nov 29 07:52:07 crc kubenswrapper[4795]: I1129 07:52:07.148160 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntmq2\" (UniqueName: \"kubernetes.io/projected/d2267270-ccf6-4757-94c0-f7936459e507-kube-api-access-ntmq2\") pod \"redhat-operators-2c2ml\" (UID: \"d2267270-ccf6-4757-94c0-f7936459e507\") " pod="openshift-marketplace/redhat-operators-2c2ml" Nov 29 07:52:07 crc kubenswrapper[4795]: I1129 07:52:07.290489 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2c2ml" Nov 29 07:52:07 crc kubenswrapper[4795]: I1129 07:52:07.535954 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2c2ml"] Nov 29 07:52:07 crc kubenswrapper[4795]: I1129 07:52:07.606292 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2c2ml" event={"ID":"d2267270-ccf6-4757-94c0-f7936459e507","Type":"ContainerStarted","Data":"4bf5d819a2975f861a7cb6c8cc4abd13f0826ab007aa24fe189ed7b04da76946"} Nov 29 07:52:08 crc kubenswrapper[4795]: I1129 07:52:08.612927 4795 generic.go:334] "Generic (PLEG): container finished" podID="d2267270-ccf6-4757-94c0-f7936459e507" containerID="53d8c6da72b4d9ee2e93ebd64b6e7798ac04e01faaa4648d1c2281066d780d6e" exitCode=0 Nov 29 07:52:08 crc kubenswrapper[4795]: I1129 07:52:08.612971 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2c2ml" 
event={"ID":"d2267270-ccf6-4757-94c0-f7936459e507","Type":"ContainerDied","Data":"53d8c6da72b4d9ee2e93ebd64b6e7798ac04e01faaa4648d1c2281066d780d6e"} Nov 29 07:52:08 crc kubenswrapper[4795]: I1129 07:52:08.616001 4795 generic.go:334] "Generic (PLEG): container finished" podID="45700729-39c7-4828-81fd-3763836d1dbe" containerID="2cf98698b6b71520c2632a75b699f844c3a7752d30b09f60d1b663ab31d25069" exitCode=0 Nov 29 07:52:08 crc kubenswrapper[4795]: I1129 07:52:08.616027 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr" event={"ID":"45700729-39c7-4828-81fd-3763836d1dbe","Type":"ContainerDied","Data":"2cf98698b6b71520c2632a75b699f844c3a7752d30b09f60d1b663ab31d25069"} Nov 29 07:52:09 crc kubenswrapper[4795]: I1129 07:52:09.703571 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2c2ml" event={"ID":"d2267270-ccf6-4757-94c0-f7936459e507","Type":"ContainerStarted","Data":"67fdba5565cdf72e475112467336f9888e50a202828f4b786ec8fcfac28db6c4"} Nov 29 07:52:09 crc kubenswrapper[4795]: I1129 07:52:09.706356 4795 generic.go:334] "Generic (PLEG): container finished" podID="45700729-39c7-4828-81fd-3763836d1dbe" containerID="eb7481c03bfcd5c65a5aada1440575e927c0b3212446e256e1c34d7ce6a3e69c" exitCode=0 Nov 29 07:52:09 crc kubenswrapper[4795]: I1129 07:52:09.706473 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr" event={"ID":"45700729-39c7-4828-81fd-3763836d1dbe","Type":"ContainerDied","Data":"eb7481c03bfcd5c65a5aada1440575e927c0b3212446e256e1c34d7ce6a3e69c"} Nov 29 07:52:11 crc kubenswrapper[4795]: I1129 07:52:11.313847 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr" Nov 29 07:52:11 crc kubenswrapper[4795]: I1129 07:52:11.422633 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/45700729-39c7-4828-81fd-3763836d1dbe-util\") pod \"45700729-39c7-4828-81fd-3763836d1dbe\" (UID: \"45700729-39c7-4828-81fd-3763836d1dbe\") " Nov 29 07:52:11 crc kubenswrapper[4795]: I1129 07:52:11.422724 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnwn2\" (UniqueName: \"kubernetes.io/projected/45700729-39c7-4828-81fd-3763836d1dbe-kube-api-access-jnwn2\") pod \"45700729-39c7-4828-81fd-3763836d1dbe\" (UID: \"45700729-39c7-4828-81fd-3763836d1dbe\") " Nov 29 07:52:11 crc kubenswrapper[4795]: I1129 07:52:11.422765 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/45700729-39c7-4828-81fd-3763836d1dbe-bundle\") pod \"45700729-39c7-4828-81fd-3763836d1dbe\" (UID: \"45700729-39c7-4828-81fd-3763836d1dbe\") " Nov 29 07:52:11 crc kubenswrapper[4795]: I1129 07:52:11.424866 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45700729-39c7-4828-81fd-3763836d1dbe-bundle" (OuterVolumeSpecName: "bundle") pod "45700729-39c7-4828-81fd-3763836d1dbe" (UID: "45700729-39c7-4828-81fd-3763836d1dbe"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:52:11 crc kubenswrapper[4795]: I1129 07:52:11.437230 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45700729-39c7-4828-81fd-3763836d1dbe-util" (OuterVolumeSpecName: "util") pod "45700729-39c7-4828-81fd-3763836d1dbe" (UID: "45700729-39c7-4828-81fd-3763836d1dbe"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:52:11 crc kubenswrapper[4795]: I1129 07:52:11.461609 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45700729-39c7-4828-81fd-3763836d1dbe-kube-api-access-jnwn2" (OuterVolumeSpecName: "kube-api-access-jnwn2") pod "45700729-39c7-4828-81fd-3763836d1dbe" (UID: "45700729-39c7-4828-81fd-3763836d1dbe"). InnerVolumeSpecName "kube-api-access-jnwn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:52:11 crc kubenswrapper[4795]: I1129 07:52:11.524858 4795 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/45700729-39c7-4828-81fd-3763836d1dbe-util\") on node \"crc\" DevicePath \"\"" Nov 29 07:52:11 crc kubenswrapper[4795]: I1129 07:52:11.524897 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnwn2\" (UniqueName: \"kubernetes.io/projected/45700729-39c7-4828-81fd-3763836d1dbe-kube-api-access-jnwn2\") on node \"crc\" DevicePath \"\"" Nov 29 07:52:11 crc kubenswrapper[4795]: I1129 07:52:11.524912 4795 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/45700729-39c7-4828-81fd-3763836d1dbe-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:52:11 crc kubenswrapper[4795]: I1129 07:52:11.719935 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr" event={"ID":"45700729-39c7-4828-81fd-3763836d1dbe","Type":"ContainerDied","Data":"d45c46a10ed982fd463ba2ad0894917ebb3223d299e0a68ddbb4b42216f3fdee"} Nov 29 07:52:11 crc kubenswrapper[4795]: I1129 07:52:11.719977 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr" Nov 29 07:52:11 crc kubenswrapper[4795]: I1129 07:52:11.719988 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d45c46a10ed982fd463ba2ad0894917ebb3223d299e0a68ddbb4b42216f3fdee" Nov 29 07:52:12 crc kubenswrapper[4795]: I1129 07:52:12.726663 4795 generic.go:334] "Generic (PLEG): container finished" podID="d2267270-ccf6-4757-94c0-f7936459e507" containerID="67fdba5565cdf72e475112467336f9888e50a202828f4b786ec8fcfac28db6c4" exitCode=0 Nov 29 07:52:12 crc kubenswrapper[4795]: I1129 07:52:12.726739 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2c2ml" event={"ID":"d2267270-ccf6-4757-94c0-f7936459e507","Type":"ContainerDied","Data":"67fdba5565cdf72e475112467336f9888e50a202828f4b786ec8fcfac28db6c4"} Nov 29 07:52:15 crc kubenswrapper[4795]: I1129 07:52:15.899847 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-km2g9"] Nov 29 07:52:15 crc kubenswrapper[4795]: I1129 07:52:15.900673 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="ovn-controller" containerID="cri-o://25239d7c14973e35144f9ac1b909564749220fc32b1851b68e08434a5aca7def" gracePeriod=30 Nov 29 07:52:15 crc kubenswrapper[4795]: I1129 07:52:15.900718 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="nbdb" containerID="cri-o://248e5291402287c1a589a168527135bd1df15e293a7614681fc896fcbdeb9dcf" gracePeriod=30 Nov 29 07:52:15 crc kubenswrapper[4795]: I1129 07:52:15.900807 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" 
podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="northd" containerID="cri-o://8112226f945ad27e0c5af4d588800322977e17efc3eac9acec2ce591f30f46fe" gracePeriod=30 Nov 29 07:52:15 crc kubenswrapper[4795]: I1129 07:52:15.900867 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="sbdb" containerID="cri-o://c63ac8cc3766762113c8fdfdc459f77570fc533bf636011f1e64a5c0a55b3ee0" gracePeriod=30 Nov 29 07:52:15 crc kubenswrapper[4795]: I1129 07:52:15.900950 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="kube-rbac-proxy-node" containerID="cri-o://ceed8db447ae237cae16d2aa37b1651eade557ea7aaab5bc7b55678ce6e36f57" gracePeriod=30 Nov 29 07:52:15 crc kubenswrapper[4795]: I1129 07:52:15.900973 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="ovn-acl-logging" containerID="cri-o://feacad8666527640ec9d6bee04a743e99e0cc9f9e9c6432c1a70bc02e85fa039" gracePeriod=30 Nov 29 07:52:15 crc kubenswrapper[4795]: I1129 07:52:15.900988 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://bcce9ccf35e72987d01428b7573aafacfe90fc973009657b077f40b6fe6ddecb" gracePeriod=30 Nov 29 07:52:15 crc kubenswrapper[4795]: I1129 07:52:15.936358 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="ovnkube-controller" 
containerID="cri-o://c590287cbefc95d9033e3848de009232747de0853c936fe3696973b2f36cd332" gracePeriod=30 Nov 29 07:52:16 crc kubenswrapper[4795]: E1129 07:52:16.759562 4795 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c590287cbefc95d9033e3848de009232747de0853c936fe3696973b2f36cd332 is running failed: container process not found" containerID="c590287cbefc95d9033e3848de009232747de0853c936fe3696973b2f36cd332" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Nov 29 07:52:16 crc kubenswrapper[4795]: E1129 07:52:16.760322 4795 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c590287cbefc95d9033e3848de009232747de0853c936fe3696973b2f36cd332 is running failed: container process not found" containerID="c590287cbefc95d9033e3848de009232747de0853c936fe3696973b2f36cd332" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Nov 29 07:52:16 crc kubenswrapper[4795]: E1129 07:52:16.760812 4795 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c590287cbefc95d9033e3848de009232747de0853c936fe3696973b2f36cd332 is running failed: container process not found" containerID="c590287cbefc95d9033e3848de009232747de0853c936fe3696973b2f36cd332" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Nov 29 07:52:16 crc kubenswrapper[4795]: E1129 07:52:16.760859 4795 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c590287cbefc95d9033e3848de009232747de0853c936fe3696973b2f36cd332 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" 
podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="ovnkube-controller" Nov 29 07:52:17 crc kubenswrapper[4795]: I1129 07:52:17.781558 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2c2ml" event={"ID":"d2267270-ccf6-4757-94c0-f7936459e507","Type":"ContainerStarted","Data":"0ba148909352e4ad238ad0abf2da6690af63d8f5c07236534002722a1ed06791"} Nov 29 07:52:17 crc kubenswrapper[4795]: I1129 07:52:17.783802 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-km2g9_3d3ff2b2-cbaa-4309-805a-2b044f867d3a/ovnkube-controller/3.log" Nov 29 07:52:17 crc kubenswrapper[4795]: I1129 07:52:17.786036 4795 generic.go:334] "Generic (PLEG): container finished" podID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerID="bcce9ccf35e72987d01428b7573aafacfe90fc973009657b077f40b6fe6ddecb" exitCode=0 Nov 29 07:52:17 crc kubenswrapper[4795]: I1129 07:52:17.786084 4795 generic.go:334] "Generic (PLEG): container finished" podID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerID="ceed8db447ae237cae16d2aa37b1651eade557ea7aaab5bc7b55678ce6e36f57" exitCode=0 Nov 29 07:52:17 crc kubenswrapper[4795]: I1129 07:52:17.786110 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" event={"ID":"3d3ff2b2-cbaa-4309-805a-2b044f867d3a","Type":"ContainerDied","Data":"bcce9ccf35e72987d01428b7573aafacfe90fc973009657b077f40b6fe6ddecb"} Nov 29 07:52:17 crc kubenswrapper[4795]: I1129 07:52:17.786160 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" event={"ID":"3d3ff2b2-cbaa-4309-805a-2b044f867d3a","Type":"ContainerDied","Data":"ceed8db447ae237cae16d2aa37b1651eade557ea7aaab5bc7b55678ce6e36f57"} Nov 29 07:52:17 crc kubenswrapper[4795]: I1129 07:52:17.849927 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2c2ml" 
podStartSLOduration=4.235063557 podStartE2EDuration="11.849903086s" podCreationTimestamp="2025-11-29 07:52:06 +0000 UTC" firstStartedPulling="2025-11-29 07:52:08.61425059 +0000 UTC m=+774.589826380" lastFinishedPulling="2025-11-29 07:52:16.229090119 +0000 UTC m=+782.204665909" observedRunningTime="2025-11-29 07:52:17.84241464 +0000 UTC m=+783.817990450" watchObservedRunningTime="2025-11-29 07:52:17.849903086 +0000 UTC m=+783.825478896" Nov 29 07:52:18 crc kubenswrapper[4795]: I1129 07:52:18.803536 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hbg2m_50b9c3ea-4ff5-434f-803c-2365a0938f9a/kube-multus/2.log" Nov 29 07:52:18 crc kubenswrapper[4795]: I1129 07:52:18.809537 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hbg2m_50b9c3ea-4ff5-434f-803c-2365a0938f9a/kube-multus/1.log" Nov 29 07:52:18 crc kubenswrapper[4795]: I1129 07:52:18.809605 4795 generic.go:334] "Generic (PLEG): container finished" podID="50b9c3ea-4ff5-434f-803c-2365a0938f9a" containerID="7149d6e745acac3b4c7e1d322eb2093244224711537fd12d0f91070dfdf31932" exitCode=2 Nov 29 07:52:18 crc kubenswrapper[4795]: I1129 07:52:18.809678 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hbg2m" event={"ID":"50b9c3ea-4ff5-434f-803c-2365a0938f9a","Type":"ContainerDied","Data":"7149d6e745acac3b4c7e1d322eb2093244224711537fd12d0f91070dfdf31932"} Nov 29 07:52:18 crc kubenswrapper[4795]: I1129 07:52:18.809800 4795 scope.go:117] "RemoveContainer" containerID="00a5c9d1a3e5371aa4899c05c4c4eecdbbbe3666777446a5fa82e13837a23050" Nov 29 07:52:18 crc kubenswrapper[4795]: I1129 07:52:18.810495 4795 scope.go:117] "RemoveContainer" containerID="7149d6e745acac3b4c7e1d322eb2093244224711537fd12d0f91070dfdf31932" Nov 29 07:52:18 crc kubenswrapper[4795]: I1129 07:52:18.824370 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-km2g9_3d3ff2b2-cbaa-4309-805a-2b044f867d3a/ovnkube-controller/3.log" Nov 29 07:52:18 crc kubenswrapper[4795]: I1129 07:52:18.829506 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-km2g9_3d3ff2b2-cbaa-4309-805a-2b044f867d3a/ovn-acl-logging/0.log" Nov 29 07:52:18 crc kubenswrapper[4795]: I1129 07:52:18.829860 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-km2g9_3d3ff2b2-cbaa-4309-805a-2b044f867d3a/ovn-controller/0.log" Nov 29 07:52:18 crc kubenswrapper[4795]: I1129 07:52:18.830116 4795 generic.go:334] "Generic (PLEG): container finished" podID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerID="c590287cbefc95d9033e3848de009232747de0853c936fe3696973b2f36cd332" exitCode=0 Nov 29 07:52:18 crc kubenswrapper[4795]: I1129 07:52:18.830135 4795 generic.go:334] "Generic (PLEG): container finished" podID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerID="c63ac8cc3766762113c8fdfdc459f77570fc533bf636011f1e64a5c0a55b3ee0" exitCode=0 Nov 29 07:52:18 crc kubenswrapper[4795]: I1129 07:52:18.830143 4795 generic.go:334] "Generic (PLEG): container finished" podID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerID="248e5291402287c1a589a168527135bd1df15e293a7614681fc896fcbdeb9dcf" exitCode=0 Nov 29 07:52:18 crc kubenswrapper[4795]: I1129 07:52:18.830152 4795 generic.go:334] "Generic (PLEG): container finished" podID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerID="8112226f945ad27e0c5af4d588800322977e17efc3eac9acec2ce591f30f46fe" exitCode=0 Nov 29 07:52:18 crc kubenswrapper[4795]: I1129 07:52:18.830159 4795 generic.go:334] "Generic (PLEG): container finished" podID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerID="feacad8666527640ec9d6bee04a743e99e0cc9f9e9c6432c1a70bc02e85fa039" exitCode=143 Nov 29 07:52:18 crc kubenswrapper[4795]: I1129 07:52:18.830166 4795 generic.go:334] "Generic (PLEG): container finished" 
podID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerID="25239d7c14973e35144f9ac1b909564749220fc32b1851b68e08434a5aca7def" exitCode=143 Nov 29 07:52:18 crc kubenswrapper[4795]: I1129 07:52:18.830182 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" event={"ID":"3d3ff2b2-cbaa-4309-805a-2b044f867d3a","Type":"ContainerDied","Data":"c590287cbefc95d9033e3848de009232747de0853c936fe3696973b2f36cd332"} Nov 29 07:52:18 crc kubenswrapper[4795]: I1129 07:52:18.830289 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" event={"ID":"3d3ff2b2-cbaa-4309-805a-2b044f867d3a","Type":"ContainerDied","Data":"c63ac8cc3766762113c8fdfdc459f77570fc533bf636011f1e64a5c0a55b3ee0"} Nov 29 07:52:18 crc kubenswrapper[4795]: I1129 07:52:18.830302 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" event={"ID":"3d3ff2b2-cbaa-4309-805a-2b044f867d3a","Type":"ContainerDied","Data":"248e5291402287c1a589a168527135bd1df15e293a7614681fc896fcbdeb9dcf"} Nov 29 07:52:18 crc kubenswrapper[4795]: I1129 07:52:18.830310 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" event={"ID":"3d3ff2b2-cbaa-4309-805a-2b044f867d3a","Type":"ContainerDied","Data":"8112226f945ad27e0c5af4d588800322977e17efc3eac9acec2ce591f30f46fe"} Nov 29 07:52:18 crc kubenswrapper[4795]: I1129 07:52:18.830321 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" event={"ID":"3d3ff2b2-cbaa-4309-805a-2b044f867d3a","Type":"ContainerDied","Data":"feacad8666527640ec9d6bee04a743e99e0cc9f9e9c6432c1a70bc02e85fa039"} Nov 29 07:52:18 crc kubenswrapper[4795]: I1129 07:52:18.830330 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" 
event={"ID":"3d3ff2b2-cbaa-4309-805a-2b044f867d3a","Type":"ContainerDied","Data":"25239d7c14973e35144f9ac1b909564749220fc32b1851b68e08434a5aca7def"} Nov 29 07:52:18 crc kubenswrapper[4795]: I1129 07:52:18.883613 4795 scope.go:117] "RemoveContainer" containerID="265a8143fe9383f13d2e5d972c3fdd716990d82b30245eeb740f8a0fdb4f089c" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.031921 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-km2g9_3d3ff2b2-cbaa-4309-805a-2b044f867d3a/ovn-acl-logging/0.log" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.032358 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-km2g9_3d3ff2b2-cbaa-4309-805a-2b044f867d3a/ovn-controller/0.log" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.032884 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.132634 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-j55pl"] Nov 29 07:52:19 crc kubenswrapper[4795]: E1129 07:52:19.132867 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="ovnkube-controller" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.132878 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="ovnkube-controller" Nov 29 07:52:19 crc kubenswrapper[4795]: E1129 07:52:19.132888 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="ovnkube-controller" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.132894 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="ovnkube-controller" Nov 29 07:52:19 crc kubenswrapper[4795]: 
E1129 07:52:19.132902 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="ovnkube-controller" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.132911 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="ovnkube-controller" Nov 29 07:52:19 crc kubenswrapper[4795]: E1129 07:52:19.132917 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45700729-39c7-4828-81fd-3763836d1dbe" containerName="util" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.132923 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="45700729-39c7-4828-81fd-3763836d1dbe" containerName="util" Nov 29 07:52:19 crc kubenswrapper[4795]: E1129 07:52:19.132932 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="northd" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.132937 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="northd" Nov 29 07:52:19 crc kubenswrapper[4795]: E1129 07:52:19.132949 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="ovn-controller" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.132954 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="ovn-controller" Nov 29 07:52:19 crc kubenswrapper[4795]: E1129 07:52:19.132962 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="kube-rbac-proxy-node" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.132967 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="kube-rbac-proxy-node" Nov 29 07:52:19 crc kubenswrapper[4795]: E1129 07:52:19.132977 4795 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="sbdb" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.132983 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="sbdb" Nov 29 07:52:19 crc kubenswrapper[4795]: E1129 07:52:19.132990 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="kubecfg-setup" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.132996 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="kubecfg-setup" Nov 29 07:52:19 crc kubenswrapper[4795]: E1129 07:52:19.133010 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45700729-39c7-4828-81fd-3763836d1dbe" containerName="extract" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.133017 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="45700729-39c7-4828-81fd-3763836d1dbe" containerName="extract" Nov 29 07:52:19 crc kubenswrapper[4795]: E1129 07:52:19.133028 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="nbdb" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.133035 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="nbdb" Nov 29 07:52:19 crc kubenswrapper[4795]: E1129 07:52:19.133045 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45700729-39c7-4828-81fd-3763836d1dbe" containerName="pull" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.133052 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="45700729-39c7-4828-81fd-3763836d1dbe" containerName="pull" Nov 29 07:52:19 crc kubenswrapper[4795]: E1129 07:52:19.133062 4795 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="ovn-acl-logging" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.133068 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="ovn-acl-logging" Nov 29 07:52:19 crc kubenswrapper[4795]: E1129 07:52:19.133078 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="kube-rbac-proxy-ovn-metrics" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.133084 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="kube-rbac-proxy-ovn-metrics" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.133183 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="ovnkube-controller" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.133194 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="ovnkube-controller" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.133203 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="ovnkube-controller" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.133214 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="northd" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.133221 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="ovn-controller" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.133231 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="45700729-39c7-4828-81fd-3763836d1dbe" containerName="extract" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.133238 4795 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="kube-rbac-proxy-node" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.133246 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="ovnkube-controller" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.133254 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="kube-rbac-proxy-ovn-metrics" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.133263 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="ovn-acl-logging" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.133270 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="nbdb" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.133277 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="sbdb" Nov 29 07:52:19 crc kubenswrapper[4795]: E1129 07:52:19.133367 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="ovnkube-controller" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.133375 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="ovnkube-controller" Nov 29 07:52:19 crc kubenswrapper[4795]: E1129 07:52:19.133401 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="ovnkube-controller" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.133414 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="ovnkube-controller" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.133514 4795 
memory_manager.go:354] "RemoveStaleState removing state" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" containerName="ovnkube-controller" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.135405 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.232635 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-ovn-node-metrics-cert\") pod \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.232735 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-env-overrides\") pod \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233096 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-systemd-units\") pod \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233129 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-var-lib-openvswitch\") pod \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233151 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-ovnkube-script-lib\") pod \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233170 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233194 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-run-ovn\") pod \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233227 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-cni-netd\") pod \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233222 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "3d3ff2b2-cbaa-4309-805a-2b044f867d3a" (UID: "3d3ff2b2-cbaa-4309-805a-2b044f867d3a"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233232 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "3d3ff2b2-cbaa-4309-805a-2b044f867d3a" (UID: "3d3ff2b2-cbaa-4309-805a-2b044f867d3a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233255 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-run-ovn-kubernetes\") pod \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233321 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-ovnkube-config\") pod \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233276 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "3d3ff2b2-cbaa-4309-805a-2b044f867d3a" (UID: "3d3ff2b2-cbaa-4309-805a-2b044f867d3a"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233370 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "3d3ff2b2-cbaa-4309-805a-2b044f867d3a" (UID: "3d3ff2b2-cbaa-4309-805a-2b044f867d3a"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233293 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "3d3ff2b2-cbaa-4309-805a-2b044f867d3a" (UID: "3d3ff2b2-cbaa-4309-805a-2b044f867d3a"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233347 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "3d3ff2b2-cbaa-4309-805a-2b044f867d3a" (UID: "3d3ff2b2-cbaa-4309-805a-2b044f867d3a"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233377 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-log-socket" (OuterVolumeSpecName: "log-socket") pod "3d3ff2b2-cbaa-4309-805a-2b044f867d3a" (UID: "3d3ff2b2-cbaa-4309-805a-2b044f867d3a"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233355 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-log-socket\") pod \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233444 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-slash\") pod \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233465 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-node-log\") pod \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233488 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-run-openvswitch\") pod \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233501 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-run-systemd\") pod \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233518 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-cni-bin\") pod \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233535 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-kubelet\") pod \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233553 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-run-netns\") pod \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233578 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psskj\" (UniqueName: \"kubernetes.io/projected/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-kube-api-access-psskj\") pod \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233402 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "3d3ff2b2-cbaa-4309-805a-2b044f867d3a" (UID: "3d3ff2b2-cbaa-4309-805a-2b044f867d3a"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233626 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "3d3ff2b2-cbaa-4309-805a-2b044f867d3a" (UID: "3d3ff2b2-cbaa-4309-805a-2b044f867d3a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233606 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "3d3ff2b2-cbaa-4309-805a-2b044f867d3a" (UID: "3d3ff2b2-cbaa-4309-805a-2b044f867d3a"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233631 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-etc-openvswitch\") pod \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\" (UID: \"3d3ff2b2-cbaa-4309-805a-2b044f867d3a\") " Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233652 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-slash" (OuterVolumeSpecName: "host-slash") pod "3d3ff2b2-cbaa-4309-805a-2b044f867d3a" (UID: "3d3ff2b2-cbaa-4309-805a-2b044f867d3a"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233679 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "3d3ff2b2-cbaa-4309-805a-2b044f867d3a" (UID: "3d3ff2b2-cbaa-4309-805a-2b044f867d3a"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233681 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "3d3ff2b2-cbaa-4309-805a-2b044f867d3a" (UID: "3d3ff2b2-cbaa-4309-805a-2b044f867d3a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233679 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-node-log" (OuterVolumeSpecName: "node-log") pod "3d3ff2b2-cbaa-4309-805a-2b044f867d3a" (UID: "3d3ff2b2-cbaa-4309-805a-2b044f867d3a"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233705 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "3d3ff2b2-cbaa-4309-805a-2b044f867d3a" (UID: "3d3ff2b2-cbaa-4309-805a-2b044f867d3a"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233729 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "3d3ff2b2-cbaa-4309-805a-2b044f867d3a" (UID: "3d3ff2b2-cbaa-4309-805a-2b044f867d3a"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233730 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "3d3ff2b2-cbaa-4309-805a-2b044f867d3a" (UID: "3d3ff2b2-cbaa-4309-805a-2b044f867d3a"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233866 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-host-cni-netd\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233889 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-host-run-ovn-kubernetes\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233910 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-run-systemd\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233939 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3af8e553-c0fb-4e8e-af77-0204083ec90e-ovnkube-config\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233957 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-host-kubelet\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233975 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3af8e553-c0fb-4e8e-af77-0204083ec90e-env-overrides\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.233996 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-run-openvswitch\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.234068 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" 
(UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-run-ovn\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.234110 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-systemd-units\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.234133 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-etc-openvswitch\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.234210 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-host-run-netns\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.234244 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-var-lib-openvswitch\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.234279 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-slash\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-host-slash\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.234301 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3af8e553-c0fb-4e8e-af77-0204083ec90e-ovnkube-script-lib\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.234340 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-log-socket\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.234391 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3af8e553-c0fb-4e8e-af77-0204083ec90e-ovn-node-metrics-cert\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.234433 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-host-cni-bin\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.234462 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-node-log\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.234504 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.234529 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbdz8\" (UniqueName: \"kubernetes.io/projected/3af8e553-c0fb-4e8e-af77-0204083ec90e-kube-api-access-xbdz8\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.234627 4795 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.234644 4795 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.234655 4795 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.234668 4795 
reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.234680 4795 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.234692 4795 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.234703 4795 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.234728 4795 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.234739 4795 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.234751 4795 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-log-socket\") on node \"crc\" DevicePath \"\"" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.234761 4795 reconciler_common.go:293] "Volume detached for volume \"host-slash\" 
(UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-slash\") on node \"crc\" DevicePath \"\"" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.234773 4795 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-node-log\") on node \"crc\" DevicePath \"\"" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.234784 4795 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.234796 4795 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.234807 4795 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.234819 4795 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.234833 4795 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.248778 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod 
"3d3ff2b2-cbaa-4309-805a-2b044f867d3a" (UID: "3d3ff2b2-cbaa-4309-805a-2b044f867d3a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.249567 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-kube-api-access-psskj" (OuterVolumeSpecName: "kube-api-access-psskj") pod "3d3ff2b2-cbaa-4309-805a-2b044f867d3a" (UID: "3d3ff2b2-cbaa-4309-805a-2b044f867d3a"). InnerVolumeSpecName "kube-api-access-psskj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.274165 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "3d3ff2b2-cbaa-4309-805a-2b044f867d3a" (UID: "3d3ff2b2-cbaa-4309-805a-2b044f867d3a"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.335191 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbdz8\" (UniqueName: \"kubernetes.io/projected/3af8e553-c0fb-4e8e-af77-0204083ec90e-kube-api-access-xbdz8\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.335238 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.335272 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-host-cni-netd\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.335291 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-run-systemd\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.335305 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-host-run-ovn-kubernetes\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.335331 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3af8e553-c0fb-4e8e-af77-0204083ec90e-ovnkube-config\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.335349 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-host-kubelet\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.335367 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3af8e553-c0fb-4e8e-af77-0204083ec90e-env-overrides\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.335385 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-run-openvswitch\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.335381 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-host-cni-netd\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: 
I1129 07:52:19.335408 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-run-ovn\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.335419 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-host-kubelet\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.335426 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-run-openvswitch\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.335390 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-run-systemd\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.335378 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.335435 4795 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-etc-openvswitch\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.335435 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-host-run-ovn-kubernetes\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.335473 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-etc-openvswitch\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.335540 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-run-ovn\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.335650 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-systemd-units\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.335711 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-host-run-netns\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.335736 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-var-lib-openvswitch\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.335752 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-systemd-units\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.335771 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-host-slash\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.335786 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-host-run-netns\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.335790 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3af8e553-c0fb-4e8e-af77-0204083ec90e-ovnkube-script-lib\") pod 
\"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.335838 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-log-socket\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.335878 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3af8e553-c0fb-4e8e-af77-0204083ec90e-ovn-node-metrics-cert\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.335917 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-host-cni-bin\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.335935 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-node-log\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.336003 4795 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.336018 4795 
reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.336029 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psskj\" (UniqueName: \"kubernetes.io/projected/3d3ff2b2-cbaa-4309-805a-2b044f867d3a-kube-api-access-psskj\") on node \"crc\" DevicePath \"\"" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.336032 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3af8e553-c0fb-4e8e-af77-0204083ec90e-env-overrides\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.336069 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-log-socket\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.336104 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-host-cni-bin\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.336055 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-node-log\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: 
I1129 07:52:19.336132 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-var-lib-openvswitch\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.336156 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3af8e553-c0fb-4e8e-af77-0204083ec90e-host-slash\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.336520 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3af8e553-c0fb-4e8e-af77-0204083ec90e-ovnkube-script-lib\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.336924 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3af8e553-c0fb-4e8e-af77-0204083ec90e-ovnkube-config\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.341144 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3af8e553-c0fb-4e8e-af77-0204083ec90e-ovn-node-metrics-cert\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.355429 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xbdz8\" (UniqueName: \"kubernetes.io/projected/3af8e553-c0fb-4e8e-af77-0204083ec90e-kube-api-access-xbdz8\") pod \"ovnkube-node-j55pl\" (UID: \"3af8e553-c0fb-4e8e-af77-0204083ec90e\") " pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.450929 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.839896 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" event={"ID":"3af8e553-c0fb-4e8e-af77-0204083ec90e","Type":"ContainerStarted","Data":"6b02ed1d343d4af2d9fc7a7e7b0ba37ca4dbf9413edf0709cc813f98ea3d9fdc"} Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.848231 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-km2g9_3d3ff2b2-cbaa-4309-805a-2b044f867d3a/ovn-acl-logging/0.log" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.848726 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-km2g9_3d3ff2b2-cbaa-4309-805a-2b044f867d3a/ovn-controller/0.log" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.849139 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" event={"ID":"3d3ff2b2-cbaa-4309-805a-2b044f867d3a","Type":"ContainerDied","Data":"4d75662c6fe3de9e1cea2f71f945627df52c0bac515aeab0294c65662cf67ce5"} Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.849200 4795 scope.go:117] "RemoveContainer" containerID="c590287cbefc95d9033e3848de009232747de0853c936fe3696973b2f36cd332" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.849409 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-km2g9" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.863205 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hbg2m_50b9c3ea-4ff5-434f-803c-2365a0938f9a/kube-multus/2.log" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.863277 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hbg2m" event={"ID":"50b9c3ea-4ff5-434f-803c-2365a0938f9a","Type":"ContainerStarted","Data":"6bb3b4630b47aa985620fd116374c74f311e0c92d0160f6fa429e29ad515d2cd"} Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.902918 4795 scope.go:117] "RemoveContainer" containerID="c63ac8cc3766762113c8fdfdc459f77570fc533bf636011f1e64a5c0a55b3ee0" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.948115 4795 scope.go:117] "RemoveContainer" containerID="248e5291402287c1a589a168527135bd1df15e293a7614681fc896fcbdeb9dcf" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.970945 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-km2g9"] Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.977890 4795 scope.go:117] "RemoveContainer" containerID="8112226f945ad27e0c5af4d588800322977e17efc3eac9acec2ce591f30f46fe" Nov 29 07:52:19 crc kubenswrapper[4795]: I1129 07:52:19.992335 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-km2g9"] Nov 29 07:52:20 crc kubenswrapper[4795]: I1129 07:52:20.003115 4795 scope.go:117] "RemoveContainer" containerID="bcce9ccf35e72987d01428b7573aafacfe90fc973009657b077f40b6fe6ddecb" Nov 29 07:52:20 crc kubenswrapper[4795]: I1129 07:52:20.072217 4795 scope.go:117] "RemoveContainer" containerID="ceed8db447ae237cae16d2aa37b1651eade557ea7aaab5bc7b55678ce6e36f57" Nov 29 07:52:20 crc kubenswrapper[4795]: I1129 07:52:20.093518 4795 scope.go:117] "RemoveContainer" 
containerID="feacad8666527640ec9d6bee04a743e99e0cc9f9e9c6432c1a70bc02e85fa039" Nov 29 07:52:20 crc kubenswrapper[4795]: I1129 07:52:20.109806 4795 scope.go:117] "RemoveContainer" containerID="25239d7c14973e35144f9ac1b909564749220fc32b1851b68e08434a5aca7def" Nov 29 07:52:20 crc kubenswrapper[4795]: I1129 07:52:20.149791 4795 scope.go:117] "RemoveContainer" containerID="1e28f6209bd7be6beb5d86d316c3194ad3162ae8561ac532a3dfc894415fb406" Nov 29 07:52:20 crc kubenswrapper[4795]: I1129 07:52:20.292284 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d3ff2b2-cbaa-4309-805a-2b044f867d3a" path="/var/lib/kubelet/pods/3d3ff2b2-cbaa-4309-805a-2b044f867d3a/volumes" Nov 29 07:52:20 crc kubenswrapper[4795]: I1129 07:52:20.869713 4795 generic.go:334] "Generic (PLEG): container finished" podID="3af8e553-c0fb-4e8e-af77-0204083ec90e" containerID="56135fe34e28d955b5d3cc12858b3b15b3ca66e001b9b0e7a930f1f17610c0d6" exitCode=0 Nov 29 07:52:20 crc kubenswrapper[4795]: I1129 07:52:20.869795 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" event={"ID":"3af8e553-c0fb-4e8e-af77-0204083ec90e","Type":"ContainerDied","Data":"56135fe34e28d955b5d3cc12858b3b15b3ca66e001b9b0e7a930f1f17610c0d6"} Nov 29 07:52:21 crc kubenswrapper[4795]: I1129 07:52:21.930642 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" event={"ID":"3af8e553-c0fb-4e8e-af77-0204083ec90e","Type":"ContainerStarted","Data":"e246170590d924e9e3453dbcfa17828d9858d2571433d48bd10951d98698679f"} Nov 29 07:52:21 crc kubenswrapper[4795]: I1129 07:52:21.930945 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" event={"ID":"3af8e553-c0fb-4e8e-af77-0204083ec90e","Type":"ContainerStarted","Data":"02840c1149bad7b2fac90de63b33d2bb77dbdbe713b2da0143b25d7682d44b36"} Nov 29 07:52:21 crc kubenswrapper[4795]: I1129 07:52:21.930957 4795 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" event={"ID":"3af8e553-c0fb-4e8e-af77-0204083ec90e","Type":"ContainerStarted","Data":"1d7e94ded1a899d9025cd8012568191ebeb11c5fed1514825424525bf0778c87"} Nov 29 07:52:22 crc kubenswrapper[4795]: I1129 07:52:22.939679 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" event={"ID":"3af8e553-c0fb-4e8e-af77-0204083ec90e","Type":"ContainerStarted","Data":"38e28d536eb2afb5408181399141af1f01f71554b31445e1228b3ce2833fe22e"} Nov 29 07:52:22 crc kubenswrapper[4795]: I1129 07:52:22.939996 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" event={"ID":"3af8e553-c0fb-4e8e-af77-0204083ec90e","Type":"ContainerStarted","Data":"0d025c9542c5d66b13ff8f2cd3c5dfb6ff56b6e753147180e5747990725ff491"} Nov 29 07:52:22 crc kubenswrapper[4795]: I1129 07:52:22.940006 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" event={"ID":"3af8e553-c0fb-4e8e-af77-0204083ec90e","Type":"ContainerStarted","Data":"5e1aa0f582a9eb68a6098fff76a001ad5889d05ce691e16a41c93acb5d4abb28"} Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.031352 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-xmczn"] Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.032110 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xmczn" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.039974 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.040916 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.041364 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-j8fnb" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.170405 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-488tc\" (UniqueName: \"kubernetes.io/projected/69e46873-1e0c-4187-810e-584aa956ba47-kube-api-access-488tc\") pod \"obo-prometheus-operator-668cf9dfbb-xmczn\" (UID: \"69e46873-1e0c-4187-810e-584aa956ba47\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xmczn" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.178351 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj"] Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.179139 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.181082 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.181236 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-pz95l" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.185818 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8"] Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.186714 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.272666 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5f84b151-fbdd-40bc-9457-ec560370a162-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj\" (UID: \"5f84b151-fbdd-40bc-9457-ec560370a162\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.272844 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/63008df8-40b1-4ab0-966e-d88d426e3b1b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8\" (UID: \"63008df8-40b1-4ab0-966e-d88d426e3b1b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.272933 4795 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/63008df8-40b1-4ab0-966e-d88d426e3b1b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8\" (UID: \"63008df8-40b1-4ab0-966e-d88d426e3b1b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.273034 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-488tc\" (UniqueName: \"kubernetes.io/projected/69e46873-1e0c-4187-810e-584aa956ba47-kube-api-access-488tc\") pod \"obo-prometheus-operator-668cf9dfbb-xmczn\" (UID: \"69e46873-1e0c-4187-810e-584aa956ba47\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xmczn" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.273074 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5f84b151-fbdd-40bc-9457-ec560370a162-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj\" (UID: \"5f84b151-fbdd-40bc-9457-ec560370a162\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.299952 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-488tc\" (UniqueName: \"kubernetes.io/projected/69e46873-1e0c-4187-810e-584aa956ba47-kube-api-access-488tc\") pod \"obo-prometheus-operator-668cf9dfbb-xmczn\" (UID: \"69e46873-1e0c-4187-810e-584aa956ba47\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xmczn" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.350514 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xmczn" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.363426 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-7h8sj"] Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.364764 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-7h8sj" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.368050 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-ktpg6" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.369437 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.374236 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/63008df8-40b1-4ab0-966e-d88d426e3b1b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8\" (UID: \"63008df8-40b1-4ab0-966e-d88d426e3b1b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.374274 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/63008df8-40b1-4ab0-966e-d88d426e3b1b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8\" (UID: \"63008df8-40b1-4ab0-966e-d88d426e3b1b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.374335 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/52ff2b6a-cefb-4a70-ac45-9d0c5b9d315d-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-7h8sj\" (UID: \"52ff2b6a-cefb-4a70-ac45-9d0c5b9d315d\") " pod="openshift-operators/observability-operator-d8bb48f5d-7h8sj" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.374356 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbl8w\" (UniqueName: \"kubernetes.io/projected/52ff2b6a-cefb-4a70-ac45-9d0c5b9d315d-kube-api-access-fbl8w\") pod \"observability-operator-d8bb48f5d-7h8sj\" (UID: \"52ff2b6a-cefb-4a70-ac45-9d0c5b9d315d\") " pod="openshift-operators/observability-operator-d8bb48f5d-7h8sj" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.374417 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5f84b151-fbdd-40bc-9457-ec560370a162-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj\" (UID: \"5f84b151-fbdd-40bc-9457-ec560370a162\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.374500 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5f84b151-fbdd-40bc-9457-ec560370a162-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj\" (UID: \"5f84b151-fbdd-40bc-9457-ec560370a162\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj" Nov 29 07:52:23 crc kubenswrapper[4795]: E1129 07:52:23.377562 4795 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-xmczn_openshift-operators_69e46873-1e0c-4187-810e-584aa956ba47_0(ce2f56c63f44ebb8c24aa5931d69f0e0d954ec0aeea63fbe35042a0ae06c3d84): no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 29 07:52:23 crc kubenswrapper[4795]: E1129 07:52:23.377645 4795 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-xmczn_openshift-operators_69e46873-1e0c-4187-810e-584aa956ba47_0(ce2f56c63f44ebb8c24aa5931d69f0e0d954ec0aeea63fbe35042a0ae06c3d84): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xmczn" Nov 29 07:52:23 crc kubenswrapper[4795]: E1129 07:52:23.377671 4795 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-xmczn_openshift-operators_69e46873-1e0c-4187-810e-584aa956ba47_0(ce2f56c63f44ebb8c24aa5931d69f0e0d954ec0aeea63fbe35042a0ae06c3d84): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xmczn" Nov 29 07:52:23 crc kubenswrapper[4795]: E1129 07:52:23.377720 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-xmczn_openshift-operators(69e46873-1e0c-4187-810e-584aa956ba47)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-xmczn_openshift-operators(69e46873-1e0c-4187-810e-584aa956ba47)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-xmczn_openshift-operators_69e46873-1e0c-4187-810e-584aa956ba47_0(ce2f56c63f44ebb8c24aa5931d69f0e0d954ec0aeea63fbe35042a0ae06c3d84): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xmczn" podUID="69e46873-1e0c-4187-810e-584aa956ba47" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.379009 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/63008df8-40b1-4ab0-966e-d88d426e3b1b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8\" (UID: \"63008df8-40b1-4ab0-966e-d88d426e3b1b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.379104 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/63008df8-40b1-4ab0-966e-d88d426e3b1b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8\" (UID: \"63008df8-40b1-4ab0-966e-d88d426e3b1b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.379453 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5f84b151-fbdd-40bc-9457-ec560370a162-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj\" (UID: \"5f84b151-fbdd-40bc-9457-ec560370a162\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.381731 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5f84b151-fbdd-40bc-9457-ec560370a162-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj\" (UID: \"5f84b151-fbdd-40bc-9457-ec560370a162\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.475294 4795 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/52ff2b6a-cefb-4a70-ac45-9d0c5b9d315d-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-7h8sj\" (UID: \"52ff2b6a-cefb-4a70-ac45-9d0c5b9d315d\") " pod="openshift-operators/observability-operator-d8bb48f5d-7h8sj" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.475337 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbl8w\" (UniqueName: \"kubernetes.io/projected/52ff2b6a-cefb-4a70-ac45-9d0c5b9d315d-kube-api-access-fbl8w\") pod \"observability-operator-d8bb48f5d-7h8sj\" (UID: \"52ff2b6a-cefb-4a70-ac45-9d0c5b9d315d\") " pod="openshift-operators/observability-operator-d8bb48f5d-7h8sj" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.484648 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/52ff2b6a-cefb-4a70-ac45-9d0c5b9d315d-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-7h8sj\" (UID: \"52ff2b6a-cefb-4a70-ac45-9d0c5b9d315d\") " pod="openshift-operators/observability-operator-d8bb48f5d-7h8sj" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.490717 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-rb87j"] Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.491445 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-rb87j" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.496352 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.497022 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-rdlk6" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.503117 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.530447 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbl8w\" (UniqueName: \"kubernetes.io/projected/52ff2b6a-cefb-4a70-ac45-9d0c5b9d315d-kube-api-access-fbl8w\") pod \"observability-operator-d8bb48f5d-7h8sj\" (UID: \"52ff2b6a-cefb-4a70-ac45-9d0c5b9d315d\") " pod="openshift-operators/observability-operator-d8bb48f5d-7h8sj" Nov 29 07:52:23 crc kubenswrapper[4795]: E1129 07:52:23.541869 4795 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj_openshift-operators_5f84b151-fbdd-40bc-9457-ec560370a162_0(597ac3755f0304fb8ecdcf7cf2e4336cf5fe74799a2e4156b1a3b946072b14b9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 29 07:52:23 crc kubenswrapper[4795]: E1129 07:52:23.550943 4795 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj_openshift-operators_5f84b151-fbdd-40bc-9457-ec560370a162_0(597ac3755f0304fb8ecdcf7cf2e4336cf5fe74799a2e4156b1a3b946072b14b9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj" Nov 29 07:52:23 crc kubenswrapper[4795]: E1129 07:52:23.551056 4795 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj_openshift-operators_5f84b151-fbdd-40bc-9457-ec560370a162_0(597ac3755f0304fb8ecdcf7cf2e4336cf5fe74799a2e4156b1a3b946072b14b9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj" Nov 29 07:52:23 crc kubenswrapper[4795]: E1129 07:52:23.551159 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj_openshift-operators(5f84b151-fbdd-40bc-9457-ec560370a162)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj_openshift-operators(5f84b151-fbdd-40bc-9457-ec560370a162)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj_openshift-operators_5f84b151-fbdd-40bc-9457-ec560370a162_0(597ac3755f0304fb8ecdcf7cf2e4336cf5fe74799a2e4156b1a3b946072b14b9): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj" podUID="5f84b151-fbdd-40bc-9457-ec560370a162" Nov 29 07:52:23 crc kubenswrapper[4795]: E1129 07:52:23.564872 4795 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8_openshift-operators_63008df8-40b1-4ab0-966e-d88d426e3b1b_0(1f107f34c1df0211a45255bbd96636907cf77cc96950f6c47dc41f8c6db97d5c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 29 07:52:23 crc kubenswrapper[4795]: E1129 07:52:23.564945 4795 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8_openshift-operators_63008df8-40b1-4ab0-966e-d88d426e3b1b_0(1f107f34c1df0211a45255bbd96636907cf77cc96950f6c47dc41f8c6db97d5c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8" Nov 29 07:52:23 crc kubenswrapper[4795]: E1129 07:52:23.564965 4795 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8_openshift-operators_63008df8-40b1-4ab0-966e-d88d426e3b1b_0(1f107f34c1df0211a45255bbd96636907cf77cc96950f6c47dc41f8c6db97d5c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8" Nov 29 07:52:23 crc kubenswrapper[4795]: E1129 07:52:23.565013 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8_openshift-operators(63008df8-40b1-4ab0-966e-d88d426e3b1b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8_openshift-operators(63008df8-40b1-4ab0-966e-d88d426e3b1b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8_openshift-operators_63008df8-40b1-4ab0-966e-d88d426e3b1b_0(1f107f34c1df0211a45255bbd96636907cf77cc96950f6c47dc41f8c6db97d5c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8" podUID="63008df8-40b1-4ab0-966e-d88d426e3b1b" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.578363 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7cca6c3b-ab29-47c1-94fe-bb8d2ac90062-openshift-service-ca\") pod \"perses-operator-5446b9c989-rb87j\" (UID: \"7cca6c3b-ab29-47c1-94fe-bb8d2ac90062\") " pod="openshift-operators/perses-operator-5446b9c989-rb87j" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.578538 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh5nd\" (UniqueName: \"kubernetes.io/projected/7cca6c3b-ab29-47c1-94fe-bb8d2ac90062-kube-api-access-xh5nd\") pod \"perses-operator-5446b9c989-rb87j\" (UID: \"7cca6c3b-ab29-47c1-94fe-bb8d2ac90062\") " pod="openshift-operators/perses-operator-5446b9c989-rb87j" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.679950 4795 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh5nd\" (UniqueName: \"kubernetes.io/projected/7cca6c3b-ab29-47c1-94fe-bb8d2ac90062-kube-api-access-xh5nd\") pod \"perses-operator-5446b9c989-rb87j\" (UID: \"7cca6c3b-ab29-47c1-94fe-bb8d2ac90062\") " pod="openshift-operators/perses-operator-5446b9c989-rb87j" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.680157 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7cca6c3b-ab29-47c1-94fe-bb8d2ac90062-openshift-service-ca\") pod \"perses-operator-5446b9c989-rb87j\" (UID: \"7cca6c3b-ab29-47c1-94fe-bb8d2ac90062\") " pod="openshift-operators/perses-operator-5446b9c989-rb87j" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.681191 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7cca6c3b-ab29-47c1-94fe-bb8d2ac90062-openshift-service-ca\") pod \"perses-operator-5446b9c989-rb87j\" (UID: \"7cca6c3b-ab29-47c1-94fe-bb8d2ac90062\") " pod="openshift-operators/perses-operator-5446b9c989-rb87j" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.701457 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh5nd\" (UniqueName: \"kubernetes.io/projected/7cca6c3b-ab29-47c1-94fe-bb8d2ac90062-kube-api-access-xh5nd\") pod \"perses-operator-5446b9c989-rb87j\" (UID: \"7cca6c3b-ab29-47c1-94fe-bb8d2ac90062\") " pod="openshift-operators/perses-operator-5446b9c989-rb87j" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.736024 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-7h8sj" Nov 29 07:52:23 crc kubenswrapper[4795]: E1129 07:52:23.772933 4795 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-7h8sj_openshift-operators_52ff2b6a-cefb-4a70-ac45-9d0c5b9d315d_0(602e0e5578580359f0ecbd789c2f6df6f2d27bd18b8e5c60fea4767e83f00a62): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 29 07:52:23 crc kubenswrapper[4795]: E1129 07:52:23.773008 4795 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-7h8sj_openshift-operators_52ff2b6a-cefb-4a70-ac45-9d0c5b9d315d_0(602e0e5578580359f0ecbd789c2f6df6f2d27bd18b8e5c60fea4767e83f00a62): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-7h8sj" Nov 29 07:52:23 crc kubenswrapper[4795]: E1129 07:52:23.773027 4795 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-7h8sj_openshift-operators_52ff2b6a-cefb-4a70-ac45-9d0c5b9d315d_0(602e0e5578580359f0ecbd789c2f6df6f2d27bd18b8e5c60fea4767e83f00a62): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-d8bb48f5d-7h8sj" Nov 29 07:52:23 crc kubenswrapper[4795]: E1129 07:52:23.773065 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-7h8sj_openshift-operators(52ff2b6a-cefb-4a70-ac45-9d0c5b9d315d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-7h8sj_openshift-operators(52ff2b6a-cefb-4a70-ac45-9d0c5b9d315d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-7h8sj_openshift-operators_52ff2b6a-cefb-4a70-ac45-9d0c5b9d315d_0(602e0e5578580359f0ecbd789c2f6df6f2d27bd18b8e5c60fea4767e83f00a62): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-7h8sj" podUID="52ff2b6a-cefb-4a70-ac45-9d0c5b9d315d" Nov 29 07:52:23 crc kubenswrapper[4795]: I1129 07:52:23.833231 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-rb87j" Nov 29 07:52:23 crc kubenswrapper[4795]: E1129 07:52:23.860926 4795 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-rb87j_openshift-operators_7cca6c3b-ab29-47c1-94fe-bb8d2ac90062_0(a95e2338e8c9b7c6e6aa57cf9516e86af2f519d5a652314657c67c26caa8918b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Nov 29 07:52:23 crc kubenswrapper[4795]: E1129 07:52:23.860996 4795 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-rb87j_openshift-operators_7cca6c3b-ab29-47c1-94fe-bb8d2ac90062_0(a95e2338e8c9b7c6e6aa57cf9516e86af2f519d5a652314657c67c26caa8918b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-rb87j" Nov 29 07:52:23 crc kubenswrapper[4795]: E1129 07:52:23.861024 4795 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-rb87j_openshift-operators_7cca6c3b-ab29-47c1-94fe-bb8d2ac90062_0(a95e2338e8c9b7c6e6aa57cf9516e86af2f519d5a652314657c67c26caa8918b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-rb87j" Nov 29 07:52:23 crc kubenswrapper[4795]: E1129 07:52:23.861082 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-rb87j_openshift-operators(7cca6c3b-ab29-47c1-94fe-bb8d2ac90062)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-rb87j_openshift-operators(7cca6c3b-ab29-47c1-94fe-bb8d2ac90062)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-rb87j_openshift-operators_7cca6c3b-ab29-47c1-94fe-bb8d2ac90062_0(a95e2338e8c9b7c6e6aa57cf9516e86af2f519d5a652314657c67c26caa8918b): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-rb87j" podUID="7cca6c3b-ab29-47c1-94fe-bb8d2ac90062" Nov 29 07:52:27 crc kubenswrapper[4795]: I1129 07:52:27.291098 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2c2ml" Nov 29 07:52:27 crc kubenswrapper[4795]: I1129 07:52:27.292309 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2c2ml" Nov 29 07:52:27 crc kubenswrapper[4795]: I1129 07:52:27.359210 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2c2ml" Nov 29 07:52:28 crc kubenswrapper[4795]: I1129 07:52:28.032298 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2c2ml" Nov 29 07:52:28 crc kubenswrapper[4795]: I1129 07:52:28.122828 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2c2ml"] Nov 29 07:52:29 crc kubenswrapper[4795]: I1129 07:52:29.977095 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2c2ml" podUID="d2267270-ccf6-4757-94c0-f7936459e507" containerName="registry-server" containerID="cri-o://0ba148909352e4ad238ad0abf2da6690af63d8f5c07236534002722a1ed06791" gracePeriod=2 Nov 29 07:52:31 crc kubenswrapper[4795]: I1129 07:52:31.990339 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" event={"ID":"3af8e553-c0fb-4e8e-af77-0204083ec90e","Type":"ContainerStarted","Data":"50d66bbc1419b87328a97fed9953065665c1c8bd029cf8c9fb2a41b319ff22a5"} Nov 29 07:52:32 crc kubenswrapper[4795]: I1129 07:52:32.996755 4795 generic.go:334] "Generic (PLEG): container finished" podID="d2267270-ccf6-4757-94c0-f7936459e507" containerID="0ba148909352e4ad238ad0abf2da6690af63d8f5c07236534002722a1ed06791" 
exitCode=0 Nov 29 07:52:32 crc kubenswrapper[4795]: I1129 07:52:32.996793 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2c2ml" event={"ID":"d2267270-ccf6-4757-94c0-f7936459e507","Type":"ContainerDied","Data":"0ba148909352e4ad238ad0abf2da6690af63d8f5c07236534002722a1ed06791"} Nov 29 07:52:34 crc kubenswrapper[4795]: I1129 07:52:34.007090 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" event={"ID":"3af8e553-c0fb-4e8e-af77-0204083ec90e","Type":"ContainerStarted","Data":"d49b16d821260fa7cd6f7b3eefea89a27da62536e9ec40c6748134aac0be0474"} Nov 29 07:52:34 crc kubenswrapper[4795]: I1129 07:52:34.007764 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:34 crc kubenswrapper[4795]: I1129 07:52:34.007790 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:34 crc kubenswrapper[4795]: I1129 07:52:34.007808 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:34 crc kubenswrapper[4795]: I1129 07:52:34.042114 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" podStartSLOduration=15.042095966 podStartE2EDuration="15.042095966s" podCreationTimestamp="2025-11-29 07:52:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:52:34.039091999 +0000 UTC m=+800.014667789" watchObservedRunningTime="2025-11-29 07:52:34.042095966 +0000 UTC m=+800.017671756" Nov 29 07:52:34 crc kubenswrapper[4795]: I1129 07:52:34.093733 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:34 crc kubenswrapper[4795]: 
I1129 07:52:34.095241 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:35 crc kubenswrapper[4795]: I1129 07:52:35.204224 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2c2ml" Nov 29 07:52:35 crc kubenswrapper[4795]: I1129 07:52:35.275369 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xmczn" Nov 29 07:52:35 crc kubenswrapper[4795]: I1129 07:52:35.276293 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xmczn" Nov 29 07:52:35 crc kubenswrapper[4795]: I1129 07:52:35.277495 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8" Nov 29 07:52:35 crc kubenswrapper[4795]: I1129 07:52:35.278177 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8" Nov 29 07:52:35 crc kubenswrapper[4795]: I1129 07:52:35.288232 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2267270-ccf6-4757-94c0-f7936459e507-utilities\") pod \"d2267270-ccf6-4757-94c0-f7936459e507\" (UID: \"d2267270-ccf6-4757-94c0-f7936459e507\") " Nov 29 07:52:35 crc kubenswrapper[4795]: I1129 07:52:35.288322 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntmq2\" (UniqueName: \"kubernetes.io/projected/d2267270-ccf6-4757-94c0-f7936459e507-kube-api-access-ntmq2\") pod \"d2267270-ccf6-4757-94c0-f7936459e507\" (UID: \"d2267270-ccf6-4757-94c0-f7936459e507\") " Nov 29 07:52:35 crc kubenswrapper[4795]: I1129 07:52:35.288417 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2267270-ccf6-4757-94c0-f7936459e507-catalog-content\") pod \"d2267270-ccf6-4757-94c0-f7936459e507\" (UID: \"d2267270-ccf6-4757-94c0-f7936459e507\") " Nov 29 07:52:35 crc kubenswrapper[4795]: I1129 07:52:35.289286 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2267270-ccf6-4757-94c0-f7936459e507-utilities" (OuterVolumeSpecName: "utilities") pod "d2267270-ccf6-4757-94c0-f7936459e507" (UID: "d2267270-ccf6-4757-94c0-f7936459e507"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:52:35 crc kubenswrapper[4795]: I1129 07:52:35.300867 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2267270-ccf6-4757-94c0-f7936459e507-kube-api-access-ntmq2" (OuterVolumeSpecName: "kube-api-access-ntmq2") pod "d2267270-ccf6-4757-94c0-f7936459e507" (UID: "d2267270-ccf6-4757-94c0-f7936459e507"). 
InnerVolumeSpecName "kube-api-access-ntmq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:52:35 crc kubenswrapper[4795]: E1129 07:52:35.324514 4795 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-xmczn_openshift-operators_69e46873-1e0c-4187-810e-584aa956ba47_0(15cf65555bfa3c82542c84680b8d0a389823954baf3fbfa8416810e535ac9cef): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 29 07:52:35 crc kubenswrapper[4795]: E1129 07:52:35.325024 4795 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-xmczn_openshift-operators_69e46873-1e0c-4187-810e-584aa956ba47_0(15cf65555bfa3c82542c84680b8d0a389823954baf3fbfa8416810e535ac9cef): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xmczn" Nov 29 07:52:35 crc kubenswrapper[4795]: E1129 07:52:35.325059 4795 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-xmczn_openshift-operators_69e46873-1e0c-4187-810e-584aa956ba47_0(15cf65555bfa3c82542c84680b8d0a389823954baf3fbfa8416810e535ac9cef): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xmczn" Nov 29 07:52:35 crc kubenswrapper[4795]: E1129 07:52:35.326072 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-xmczn_openshift-operators(69e46873-1e0c-4187-810e-584aa956ba47)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-xmczn_openshift-operators(69e46873-1e0c-4187-810e-584aa956ba47)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-xmczn_openshift-operators_69e46873-1e0c-4187-810e-584aa956ba47_0(15cf65555bfa3c82542c84680b8d0a389823954baf3fbfa8416810e535ac9cef): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xmczn" podUID="69e46873-1e0c-4187-810e-584aa956ba47" Nov 29 07:52:35 crc kubenswrapper[4795]: E1129 07:52:35.362864 4795 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8_openshift-operators_63008df8-40b1-4ab0-966e-d88d426e3b1b_0(34f58147ddf9649d7b17462166605804da312e3ce800080aa279c925630e77a9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 29 07:52:35 crc kubenswrapper[4795]: E1129 07:52:35.362928 4795 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8_openshift-operators_63008df8-40b1-4ab0-966e-d88d426e3b1b_0(34f58147ddf9649d7b17462166605804da312e3ce800080aa279c925630e77a9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8" Nov 29 07:52:35 crc kubenswrapper[4795]: E1129 07:52:35.362947 4795 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8_openshift-operators_63008df8-40b1-4ab0-966e-d88d426e3b1b_0(34f58147ddf9649d7b17462166605804da312e3ce800080aa279c925630e77a9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8" Nov 29 07:52:35 crc kubenswrapper[4795]: E1129 07:52:35.362991 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8_openshift-operators(63008df8-40b1-4ab0-966e-d88d426e3b1b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8_openshift-operators(63008df8-40b1-4ab0-966e-d88d426e3b1b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8_openshift-operators_63008df8-40b1-4ab0-966e-d88d426e3b1b_0(34f58147ddf9649d7b17462166605804da312e3ce800080aa279c925630e77a9): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8" podUID="63008df8-40b1-4ab0-966e-d88d426e3b1b" Nov 29 07:52:35 crc kubenswrapper[4795]: I1129 07:52:35.390106 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntmq2\" (UniqueName: \"kubernetes.io/projected/d2267270-ccf6-4757-94c0-f7936459e507-kube-api-access-ntmq2\") on node \"crc\" DevicePath \"\"" Nov 29 07:52:35 crc kubenswrapper[4795]: I1129 07:52:35.390137 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2267270-ccf6-4757-94c0-f7936459e507-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:52:35 crc kubenswrapper[4795]: I1129 07:52:35.459144 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2267270-ccf6-4757-94c0-f7936459e507-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d2267270-ccf6-4757-94c0-f7936459e507" (UID: "d2267270-ccf6-4757-94c0-f7936459e507"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:52:35 crc kubenswrapper[4795]: I1129 07:52:35.491223 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2267270-ccf6-4757-94c0-f7936459e507-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:52:35 crc kubenswrapper[4795]: I1129 07:52:35.859975 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj"] Nov 29 07:52:35 crc kubenswrapper[4795]: I1129 07:52:35.860144 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj" Nov 29 07:52:35 crc kubenswrapper[4795]: I1129 07:52:35.860616 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj" Nov 29 07:52:35 crc kubenswrapper[4795]: I1129 07:52:35.865838 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-7h8sj"] Nov 29 07:52:35 crc kubenswrapper[4795]: I1129 07:52:35.866242 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-7h8sj" Nov 29 07:52:35 crc kubenswrapper[4795]: I1129 07:52:35.866824 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-7h8sj" Nov 29 07:52:35 crc kubenswrapper[4795]: E1129 07:52:35.887512 4795 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj_openshift-operators_5f84b151-fbdd-40bc-9457-ec560370a162_0(f17d1c1de886cfa33c3b27da7c3e744960de8bfc9b30e1bc5164e12f87de10a0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 29 07:52:35 crc kubenswrapper[4795]: E1129 07:52:35.887563 4795 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj_openshift-operators_5f84b151-fbdd-40bc-9457-ec560370a162_0(f17d1c1de886cfa33c3b27da7c3e744960de8bfc9b30e1bc5164e12f87de10a0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj" Nov 29 07:52:35 crc kubenswrapper[4795]: E1129 07:52:35.887585 4795 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj_openshift-operators_5f84b151-fbdd-40bc-9457-ec560370a162_0(f17d1c1de886cfa33c3b27da7c3e744960de8bfc9b30e1bc5164e12f87de10a0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj" Nov 29 07:52:35 crc kubenswrapper[4795]: E1129 07:52:35.887657 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj_openshift-operators(5f84b151-fbdd-40bc-9457-ec560370a162)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj_openshift-operators(5f84b151-fbdd-40bc-9457-ec560370a162)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj_openshift-operators_5f84b151-fbdd-40bc-9457-ec560370a162_0(f17d1c1de886cfa33c3b27da7c3e744960de8bfc9b30e1bc5164e12f87de10a0): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj" podUID="5f84b151-fbdd-40bc-9457-ec560370a162" Nov 29 07:52:35 crc kubenswrapper[4795]: I1129 07:52:35.894010 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-xmczn"] Nov 29 07:52:35 crc kubenswrapper[4795]: E1129 07:52:35.912618 4795 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-7h8sj_openshift-operators_52ff2b6a-cefb-4a70-ac45-9d0c5b9d315d_0(740c7f9303952ece8caa9a5a5b81abe7512db3d4a61c734c5bdf010ddcab7f56): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 29 07:52:35 crc kubenswrapper[4795]: E1129 07:52:35.912715 4795 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-7h8sj_openshift-operators_52ff2b6a-cefb-4a70-ac45-9d0c5b9d315d_0(740c7f9303952ece8caa9a5a5b81abe7512db3d4a61c734c5bdf010ddcab7f56): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-7h8sj" Nov 29 07:52:35 crc kubenswrapper[4795]: E1129 07:52:35.912744 4795 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-7h8sj_openshift-operators_52ff2b6a-cefb-4a70-ac45-9d0c5b9d315d_0(740c7f9303952ece8caa9a5a5b81abe7512db3d4a61c734c5bdf010ddcab7f56): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-d8bb48f5d-7h8sj" Nov 29 07:52:35 crc kubenswrapper[4795]: E1129 07:52:35.912829 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-7h8sj_openshift-operators(52ff2b6a-cefb-4a70-ac45-9d0c5b9d315d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-7h8sj_openshift-operators(52ff2b6a-cefb-4a70-ac45-9d0c5b9d315d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-7h8sj_openshift-operators_52ff2b6a-cefb-4a70-ac45-9d0c5b9d315d_0(740c7f9303952ece8caa9a5a5b81abe7512db3d4a61c734c5bdf010ddcab7f56): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-7h8sj" podUID="52ff2b6a-cefb-4a70-ac45-9d0c5b9d315d" Nov 29 07:52:35 crc kubenswrapper[4795]: I1129 07:52:35.927232 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-rb87j"] Nov 29 07:52:35 crc kubenswrapper[4795]: I1129 07:52:35.927370 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-rb87j" Nov 29 07:52:35 crc kubenswrapper[4795]: I1129 07:52:35.927890 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-rb87j" Nov 29 07:52:35 crc kubenswrapper[4795]: I1129 07:52:35.937815 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8"] Nov 29 07:52:35 crc kubenswrapper[4795]: E1129 07:52:35.959364 4795 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-rb87j_openshift-operators_7cca6c3b-ab29-47c1-94fe-bb8d2ac90062_0(b67fe3cda0dfb521bdec5b2afd2acf9ce2a7e8a00152e53613dc3dd4b70d44ed): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 29 07:52:35 crc kubenswrapper[4795]: E1129 07:52:35.959442 4795 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-rb87j_openshift-operators_7cca6c3b-ab29-47c1-94fe-bb8d2ac90062_0(b67fe3cda0dfb521bdec5b2afd2acf9ce2a7e8a00152e53613dc3dd4b70d44ed): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-rb87j" Nov 29 07:52:35 crc kubenswrapper[4795]: E1129 07:52:35.959470 4795 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-rb87j_openshift-operators_7cca6c3b-ab29-47c1-94fe-bb8d2ac90062_0(b67fe3cda0dfb521bdec5b2afd2acf9ce2a7e8a00152e53613dc3dd4b70d44ed): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5446b9c989-rb87j" Nov 29 07:52:35 crc kubenswrapper[4795]: E1129 07:52:35.959524 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-rb87j_openshift-operators(7cca6c3b-ab29-47c1-94fe-bb8d2ac90062)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-rb87j_openshift-operators(7cca6c3b-ab29-47c1-94fe-bb8d2ac90062)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-rb87j_openshift-operators_7cca6c3b-ab29-47c1-94fe-bb8d2ac90062_0(b67fe3cda0dfb521bdec5b2afd2acf9ce2a7e8a00152e53613dc3dd4b70d44ed): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-rb87j" podUID="7cca6c3b-ab29-47c1-94fe-bb8d2ac90062" Nov 29 07:52:36 crc kubenswrapper[4795]: I1129 07:52:36.025369 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8" Nov 29 07:52:36 crc kubenswrapper[4795]: I1129 07:52:36.025867 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8" Nov 29 07:52:36 crc kubenswrapper[4795]: I1129 07:52:36.026118 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2c2ml" Nov 29 07:52:36 crc kubenswrapper[4795]: I1129 07:52:36.036304 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2c2ml" event={"ID":"d2267270-ccf6-4757-94c0-f7936459e507","Type":"ContainerDied","Data":"4bf5d819a2975f861a7cb6c8cc4abd13f0826ab007aa24fe189ed7b04da76946"} Nov 29 07:52:36 crc kubenswrapper[4795]: I1129 07:52:36.036361 4795 scope.go:117] "RemoveContainer" containerID="0ba148909352e4ad238ad0abf2da6690af63d8f5c07236534002722a1ed06791" Nov 29 07:52:36 crc kubenswrapper[4795]: I1129 07:52:36.036514 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xmczn" Nov 29 07:52:36 crc kubenswrapper[4795]: I1129 07:52:36.036904 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xmczn" Nov 29 07:52:36 crc kubenswrapper[4795]: I1129 07:52:36.062053 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2c2ml"] Nov 29 07:52:36 crc kubenswrapper[4795]: I1129 07:52:36.068799 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2c2ml"] Nov 29 07:52:36 crc kubenswrapper[4795]: I1129 07:52:36.076332 4795 scope.go:117] "RemoveContainer" containerID="67fdba5565cdf72e475112467336f9888e50a202828f4b786ec8fcfac28db6c4" Nov 29 07:52:36 crc kubenswrapper[4795]: E1129 07:52:36.095980 4795 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-xmczn_openshift-operators_69e46873-1e0c-4187-810e-584aa956ba47_0(d18ce99a1ea3dcc3ef15289647b3ce6b5abdbf91a5b7ba939e93e766fe32f2d2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Nov 29 07:52:36 crc kubenswrapper[4795]: E1129 07:52:36.096044 4795 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-xmczn_openshift-operators_69e46873-1e0c-4187-810e-584aa956ba47_0(d18ce99a1ea3dcc3ef15289647b3ce6b5abdbf91a5b7ba939e93e766fe32f2d2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xmczn" Nov 29 07:52:36 crc kubenswrapper[4795]: E1129 07:52:36.096073 4795 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-xmczn_openshift-operators_69e46873-1e0c-4187-810e-584aa956ba47_0(d18ce99a1ea3dcc3ef15289647b3ce6b5abdbf91a5b7ba939e93e766fe32f2d2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xmczn" Nov 29 07:52:36 crc kubenswrapper[4795]: E1129 07:52:36.096122 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-xmczn_openshift-operators(69e46873-1e0c-4187-810e-584aa956ba47)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-xmczn_openshift-operators(69e46873-1e0c-4187-810e-584aa956ba47)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-xmczn_openshift-operators_69e46873-1e0c-4187-810e-584aa956ba47_0(d18ce99a1ea3dcc3ef15289647b3ce6b5abdbf91a5b7ba939e93e766fe32f2d2): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xmczn" podUID="69e46873-1e0c-4187-810e-584aa956ba47" Nov 29 07:52:36 crc kubenswrapper[4795]: I1129 07:52:36.096932 4795 scope.go:117] "RemoveContainer" containerID="53d8c6da72b4d9ee2e93ebd64b6e7798ac04e01faaa4648d1c2281066d780d6e" Nov 29 07:52:36 crc kubenswrapper[4795]: E1129 07:52:36.101966 4795 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8_openshift-operators_63008df8-40b1-4ab0-966e-d88d426e3b1b_0(272af20491b524800cdc461d995617af82936e53b74bf0f7e4ce5a213ee32cb3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 29 07:52:36 crc kubenswrapper[4795]: E1129 07:52:36.102005 4795 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8_openshift-operators_63008df8-40b1-4ab0-966e-d88d426e3b1b_0(272af20491b524800cdc461d995617af82936e53b74bf0f7e4ce5a213ee32cb3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8" Nov 29 07:52:36 crc kubenswrapper[4795]: E1129 07:52:36.102035 4795 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8_openshift-operators_63008df8-40b1-4ab0-966e-d88d426e3b1b_0(272af20491b524800cdc461d995617af82936e53b74bf0f7e4ce5a213ee32cb3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8" Nov 29 07:52:36 crc kubenswrapper[4795]: E1129 07:52:36.102068 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8_openshift-operators(63008df8-40b1-4ab0-966e-d88d426e3b1b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8_openshift-operators(63008df8-40b1-4ab0-966e-d88d426e3b1b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8_openshift-operators_63008df8-40b1-4ab0-966e-d88d426e3b1b_0(272af20491b524800cdc461d995617af82936e53b74bf0f7e4ce5a213ee32cb3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8" podUID="63008df8-40b1-4ab0-966e-d88d426e3b1b" Nov 29 07:52:36 crc kubenswrapper[4795]: I1129 07:52:36.282907 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2267270-ccf6-4757-94c0-f7936459e507" path="/var/lib/kubelet/pods/d2267270-ccf6-4757-94c0-f7936459e507/volumes" Nov 29 07:52:41 crc kubenswrapper[4795]: I1129 07:52:41.941062 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:52:41 crc kubenswrapper[4795]: I1129 07:52:41.941416 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Nov 29 07:52:46 crc kubenswrapper[4795]: I1129 07:52:46.274926 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj" Nov 29 07:52:46 crc kubenswrapper[4795]: I1129 07:52:46.275996 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj" Nov 29 07:52:46 crc kubenswrapper[4795]: I1129 07:52:46.505063 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj"] Nov 29 07:52:47 crc kubenswrapper[4795]: I1129 07:52:47.100959 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj" event={"ID":"5f84b151-fbdd-40bc-9457-ec560370a162","Type":"ContainerStarted","Data":"a61b6f4d011a38b216178a8a68b58e49738e226429ba664766dbcb7dcbec8a97"} Nov 29 07:52:47 crc kubenswrapper[4795]: I1129 07:52:47.275416 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8" Nov 29 07:52:47 crc kubenswrapper[4795]: I1129 07:52:47.276319 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8" Nov 29 07:52:47 crc kubenswrapper[4795]: I1129 07:52:47.517359 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8"] Nov 29 07:52:47 crc kubenswrapper[4795]: W1129 07:52:47.526368 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63008df8_40b1_4ab0_966e_d88d426e3b1b.slice/crio-96a26925caddc349a4b8f2688f4c464014f0247c5f7af6378f776f0c1d01da21 WatchSource:0}: Error finding container 96a26925caddc349a4b8f2688f4c464014f0247c5f7af6378f776f0c1d01da21: Status 404 returned error can't find the container with id 96a26925caddc349a4b8f2688f4c464014f0247c5f7af6378f776f0c1d01da21 Nov 29 07:52:48 crc kubenswrapper[4795]: I1129 07:52:48.114681 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8" event={"ID":"63008df8-40b1-4ab0-966e-d88d426e3b1b","Type":"ContainerStarted","Data":"96a26925caddc349a4b8f2688f4c464014f0247c5f7af6378f776f0c1d01da21"} Nov 29 07:52:48 crc kubenswrapper[4795]: I1129 07:52:48.277799 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-7h8sj" Nov 29 07:52:48 crc kubenswrapper[4795]: I1129 07:52:48.278385 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-7h8sj" Nov 29 07:52:48 crc kubenswrapper[4795]: I1129 07:52:48.476443 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-7h8sj"] Nov 29 07:52:48 crc kubenswrapper[4795]: W1129 07:52:48.490760 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52ff2b6a_cefb_4a70_ac45_9d0c5b9d315d.slice/crio-9c67f551fe0a59affc66760c8b8cf863f655ad7de72620ee32df2f6780f1e513 WatchSource:0}: Error finding container 9c67f551fe0a59affc66760c8b8cf863f655ad7de72620ee32df2f6780f1e513: Status 404 returned error can't find the container with id 9c67f551fe0a59affc66760c8b8cf863f655ad7de72620ee32df2f6780f1e513 Nov 29 07:52:49 crc kubenswrapper[4795]: I1129 07:52:49.123375 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-7h8sj" event={"ID":"52ff2b6a-cefb-4a70-ac45-9d0c5b9d315d","Type":"ContainerStarted","Data":"9c67f551fe0a59affc66760c8b8cf863f655ad7de72620ee32df2f6780f1e513"} Nov 29 07:52:49 crc kubenswrapper[4795]: I1129 07:52:49.274721 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-rb87j" Nov 29 07:52:49 crc kubenswrapper[4795]: I1129 07:52:49.275340 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-rb87j" Nov 29 07:52:49 crc kubenswrapper[4795]: I1129 07:52:49.481766 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-j55pl" Nov 29 07:52:49 crc kubenswrapper[4795]: I1129 07:52:49.779278 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-rb87j"] Nov 29 07:52:51 crc kubenswrapper[4795]: I1129 07:52:51.274832 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xmczn" Nov 29 07:52:51 crc kubenswrapper[4795]: I1129 07:52:51.275771 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xmczn" Nov 29 07:53:01 crc kubenswrapper[4795]: W1129 07:53:01.503493 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cca6c3b_ab29_47c1_94fe_bb8d2ac90062.slice/crio-6c0dccd79daba2e6422a9a9218963cea836184fc83c96f606032c2e94bd839b1 WatchSource:0}: Error finding container 6c0dccd79daba2e6422a9a9218963cea836184fc83c96f606032c2e94bd839b1: Status 404 returned error can't find the container with id 6c0dccd79daba2e6422a9a9218963cea836184fc83c96f606032c2e94bd839b1 Nov 29 07:53:01 crc kubenswrapper[4795]: E1129 07:53:01.529048 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec" Nov 29 07:53:01 crc kubenswrapper[4795]: E1129 07:53:01.529216 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:prometheus-operator-admission-webhook,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec,Command:[],Args:[--web.enable-tls=true --web.cert-file=/tmp/k8s-webhook-server/serving-certs/tls.crt --web.key-file=/tmp/k8s-webhook-server/serving-certs/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{209715200 0} {} BinarySI},},Requests:ResourceList{cpu: {{50 -3} {} 50m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj_openshift-operators(5f84b151-fbdd-40bc-9457-ec560370a162): 
ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 29 07:53:01 crc kubenswrapper[4795]: E1129 07:53:01.530871 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj" podUID="5f84b151-fbdd-40bc-9457-ec560370a162" Nov 29 07:53:02 crc kubenswrapper[4795]: I1129 07:53:02.366168 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-rb87j" event={"ID":"7cca6c3b-ab29-47c1-94fe-bb8d2ac90062","Type":"ContainerStarted","Data":"6c0dccd79daba2e6422a9a9218963cea836184fc83c96f606032c2e94bd839b1"} Nov 29 07:53:02 crc kubenswrapper[4795]: E1129 07:53:02.370078 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec\\\"\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj" podUID="5f84b151-fbdd-40bc-9457-ec560370a162" Nov 29 07:53:04 crc kubenswrapper[4795]: I1129 07:53:04.753712 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-xmczn"] Nov 29 07:53:04 crc kubenswrapper[4795]: W1129 07:53:04.757372 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69e46873_1e0c_4187_810e_584aa956ba47.slice/crio-967e79dfc752b04e5770e6b02a54450e12e75a77e5c96e631bd1fcd6feecfe6f 
WatchSource:0}: Error finding container 967e79dfc752b04e5770e6b02a54450e12e75a77e5c96e631bd1fcd6feecfe6f: Status 404 returned error can't find the container with id 967e79dfc752b04e5770e6b02a54450e12e75a77e5c96e631bd1fcd6feecfe6f Nov 29 07:53:05 crc kubenswrapper[4795]: I1129 07:53:05.397679 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xmczn" event={"ID":"69e46873-1e0c-4187-810e-584aa956ba47","Type":"ContainerStarted","Data":"967e79dfc752b04e5770e6b02a54450e12e75a77e5c96e631bd1fcd6feecfe6f"} Nov 29 07:53:05 crc kubenswrapper[4795]: I1129 07:53:05.399090 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8" event={"ID":"63008df8-40b1-4ab0-966e-d88d426e3b1b","Type":"ContainerStarted","Data":"e0fe309cebf2d0d22687ed4df3ba827be7793117f275942cbbc8af9a768f464e"} Nov 29 07:53:05 crc kubenswrapper[4795]: I1129 07:53:05.401710 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-7h8sj" event={"ID":"52ff2b6a-cefb-4a70-ac45-9d0c5b9d315d","Type":"ContainerStarted","Data":"a8fadea46a44b7abe979a5c75704a7e4d4cd62a7966af8d217d8b5cb252b40a8"} Nov 29 07:53:05 crc kubenswrapper[4795]: I1129 07:53:05.402529 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-7h8sj" Nov 29 07:53:05 crc kubenswrapper[4795]: I1129 07:53:05.422203 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8" podStartSLOduration=25.479045749 podStartE2EDuration="42.422178917s" podCreationTimestamp="2025-11-29 07:52:23 +0000 UTC" firstStartedPulling="2025-11-29 07:52:47.528936795 +0000 UTC m=+813.504512585" lastFinishedPulling="2025-11-29 07:53:04.472069963 +0000 UTC m=+830.447645753" observedRunningTime="2025-11-29 
07:53:05.41569488 +0000 UTC m=+831.391270700" watchObservedRunningTime="2025-11-29 07:53:05.422178917 +0000 UTC m=+831.397754707" Nov 29 07:53:05 crc kubenswrapper[4795]: I1129 07:53:05.445445 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-7h8sj" podStartSLOduration=26.420793293 podStartE2EDuration="42.445421798s" podCreationTimestamp="2025-11-29 07:52:23 +0000 UTC" firstStartedPulling="2025-11-29 07:52:48.493190528 +0000 UTC m=+814.468766308" lastFinishedPulling="2025-11-29 07:53:04.517819023 +0000 UTC m=+830.493394813" observedRunningTime="2025-11-29 07:53:05.444158062 +0000 UTC m=+831.419733852" watchObservedRunningTime="2025-11-29 07:53:05.445421798 +0000 UTC m=+831.420997588" Nov 29 07:53:05 crc kubenswrapper[4795]: I1129 07:53:05.564861 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-7h8sj" Nov 29 07:53:06 crc kubenswrapper[4795]: I1129 07:53:06.411283 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-rb87j" event={"ID":"7cca6c3b-ab29-47c1-94fe-bb8d2ac90062","Type":"ContainerStarted","Data":"00c01f7845cc701ab0be5d2959b4ff9fd38d771b64ec7ff7b6c6afb8980921cd"} Nov 29 07:53:06 crc kubenswrapper[4795]: I1129 07:53:06.412088 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-rb87j" Nov 29 07:53:06 crc kubenswrapper[4795]: I1129 07:53:06.435343 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-rb87j" podStartSLOduration=39.029643899 podStartE2EDuration="43.435323121s" podCreationTimestamp="2025-11-29 07:52:23 +0000 UTC" firstStartedPulling="2025-11-29 07:53:01.506733541 +0000 UTC m=+827.482309331" lastFinishedPulling="2025-11-29 07:53:05.912412763 +0000 UTC m=+831.887988553" 
observedRunningTime="2025-11-29 07:53:06.434347493 +0000 UTC m=+832.409923283" watchObservedRunningTime="2025-11-29 07:53:06.435323121 +0000 UTC m=+832.410898901" Nov 29 07:53:08 crc kubenswrapper[4795]: I1129 07:53:08.426971 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xmczn" event={"ID":"69e46873-1e0c-4187-810e-584aa956ba47","Type":"ContainerStarted","Data":"160bdec86a512c0201b35560713bff344fcd52d95bff2a49efe89f43c5e4b161"} Nov 29 07:53:08 crc kubenswrapper[4795]: I1129 07:53:08.452694 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xmczn" podStartSLOduration=42.696450251 podStartE2EDuration="45.452677389s" podCreationTimestamp="2025-11-29 07:52:23 +0000 UTC" firstStartedPulling="2025-11-29 07:53:04.760355911 +0000 UTC m=+830.735931701" lastFinishedPulling="2025-11-29 07:53:07.516583049 +0000 UTC m=+833.492158839" observedRunningTime="2025-11-29 07:53:08.451133625 +0000 UTC m=+834.426709425" watchObservedRunningTime="2025-11-29 07:53:08.452677389 +0000 UTC m=+834.428253179" Nov 29 07:53:11 crc kubenswrapper[4795]: I1129 07:53:11.941416 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:53:11 crc kubenswrapper[4795]: I1129 07:53:11.941858 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:53:13 crc kubenswrapper[4795]: I1129 07:53:13.150804 4795 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-2vns9"] Nov 29 07:53:13 crc kubenswrapper[4795]: E1129 07:53:13.151398 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2267270-ccf6-4757-94c0-f7936459e507" containerName="registry-server" Nov 29 07:53:13 crc kubenswrapper[4795]: I1129 07:53:13.151414 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2267270-ccf6-4757-94c0-f7936459e507" containerName="registry-server" Nov 29 07:53:13 crc kubenswrapper[4795]: E1129 07:53:13.151432 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2267270-ccf6-4757-94c0-f7936459e507" containerName="extract-utilities" Nov 29 07:53:13 crc kubenswrapper[4795]: I1129 07:53:13.151439 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2267270-ccf6-4757-94c0-f7936459e507" containerName="extract-utilities" Nov 29 07:53:13 crc kubenswrapper[4795]: E1129 07:53:13.151456 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2267270-ccf6-4757-94c0-f7936459e507" containerName="extract-content" Nov 29 07:53:13 crc kubenswrapper[4795]: I1129 07:53:13.151463 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2267270-ccf6-4757-94c0-f7936459e507" containerName="extract-content" Nov 29 07:53:13 crc kubenswrapper[4795]: I1129 07:53:13.151602 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2267270-ccf6-4757-94c0-f7936459e507" containerName="registry-server" Nov 29 07:53:13 crc kubenswrapper[4795]: I1129 07:53:13.152085 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-2vns9" Nov 29 07:53:13 crc kubenswrapper[4795]: I1129 07:53:13.155034 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 29 07:53:13 crc kubenswrapper[4795]: I1129 07:53:13.155321 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 29 07:53:13 crc kubenswrapper[4795]: I1129 07:53:13.155467 4795 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-djtk6" Nov 29 07:53:13 crc kubenswrapper[4795]: I1129 07:53:13.167810 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-l4ctt"] Nov 29 07:53:13 crc kubenswrapper[4795]: I1129 07:53:13.168701 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-l4ctt" Nov 29 07:53:13 crc kubenswrapper[4795]: I1129 07:53:13.172428 4795 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-fd6pd" Nov 29 07:53:13 crc kubenswrapper[4795]: I1129 07:53:13.177351 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-2vns9"] Nov 29 07:53:13 crc kubenswrapper[4795]: I1129 07:53:13.192475 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-l4ctt"] Nov 29 07:53:13 crc kubenswrapper[4795]: I1129 07:53:13.208545 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-mn9s2"] Nov 29 07:53:13 crc kubenswrapper[4795]: I1129 07:53:13.209407 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-mn9s2" Nov 29 07:53:13 crc kubenswrapper[4795]: I1129 07:53:13.211325 4795 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-wpp6k" Nov 29 07:53:13 crc kubenswrapper[4795]: I1129 07:53:13.215344 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-mn9s2"] Nov 29 07:53:13 crc kubenswrapper[4795]: I1129 07:53:13.242446 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st9lz\" (UniqueName: \"kubernetes.io/projected/780eadcf-077c-4f71-8570-5ebbca30d61e-kube-api-access-st9lz\") pod \"cert-manager-webhook-5655c58dd6-mn9s2\" (UID: \"780eadcf-077c-4f71-8570-5ebbca30d61e\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-mn9s2" Nov 29 07:53:13 crc kubenswrapper[4795]: I1129 07:53:13.242519 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnx98\" (UniqueName: \"kubernetes.io/projected/a921719c-ebed-49c3-9482-87b58c96c819-kube-api-access-hnx98\") pod \"cert-manager-cainjector-7f985d654d-2vns9\" (UID: \"a921719c-ebed-49c3-9482-87b58c96c819\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-2vns9" Nov 29 07:53:13 crc kubenswrapper[4795]: I1129 07:53:13.242553 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwfjc\" (UniqueName: \"kubernetes.io/projected/286839ed-cd16-46c0-81a4-d0c90bb32fb4-kube-api-access-bwfjc\") pod \"cert-manager-5b446d88c5-l4ctt\" (UID: \"286839ed-cd16-46c0-81a4-d0c90bb32fb4\") " pod="cert-manager/cert-manager-5b446d88c5-l4ctt" Nov 29 07:53:13 crc kubenswrapper[4795]: I1129 07:53:13.344254 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnx98\" (UniqueName: 
\"kubernetes.io/projected/a921719c-ebed-49c3-9482-87b58c96c819-kube-api-access-hnx98\") pod \"cert-manager-cainjector-7f985d654d-2vns9\" (UID: \"a921719c-ebed-49c3-9482-87b58c96c819\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-2vns9" Nov 29 07:53:13 crc kubenswrapper[4795]: I1129 07:53:13.344318 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwfjc\" (UniqueName: \"kubernetes.io/projected/286839ed-cd16-46c0-81a4-d0c90bb32fb4-kube-api-access-bwfjc\") pod \"cert-manager-5b446d88c5-l4ctt\" (UID: \"286839ed-cd16-46c0-81a4-d0c90bb32fb4\") " pod="cert-manager/cert-manager-5b446d88c5-l4ctt" Nov 29 07:53:13 crc kubenswrapper[4795]: I1129 07:53:13.344456 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st9lz\" (UniqueName: \"kubernetes.io/projected/780eadcf-077c-4f71-8570-5ebbca30d61e-kube-api-access-st9lz\") pod \"cert-manager-webhook-5655c58dd6-mn9s2\" (UID: \"780eadcf-077c-4f71-8570-5ebbca30d61e\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-mn9s2" Nov 29 07:53:13 crc kubenswrapper[4795]: I1129 07:53:13.362551 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwfjc\" (UniqueName: \"kubernetes.io/projected/286839ed-cd16-46c0-81a4-d0c90bb32fb4-kube-api-access-bwfjc\") pod \"cert-manager-5b446d88c5-l4ctt\" (UID: \"286839ed-cd16-46c0-81a4-d0c90bb32fb4\") " pod="cert-manager/cert-manager-5b446d88c5-l4ctt" Nov 29 07:53:13 crc kubenswrapper[4795]: I1129 07:53:13.363706 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnx98\" (UniqueName: \"kubernetes.io/projected/a921719c-ebed-49c3-9482-87b58c96c819-kube-api-access-hnx98\") pod \"cert-manager-cainjector-7f985d654d-2vns9\" (UID: \"a921719c-ebed-49c3-9482-87b58c96c819\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-2vns9" Nov 29 07:53:13 crc kubenswrapper[4795]: I1129 07:53:13.365077 4795 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-st9lz\" (UniqueName: \"kubernetes.io/projected/780eadcf-077c-4f71-8570-5ebbca30d61e-kube-api-access-st9lz\") pod \"cert-manager-webhook-5655c58dd6-mn9s2\" (UID: \"780eadcf-077c-4f71-8570-5ebbca30d61e\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-mn9s2" Nov 29 07:53:13 crc kubenswrapper[4795]: I1129 07:53:13.478522 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-2vns9" Nov 29 07:53:13 crc kubenswrapper[4795]: I1129 07:53:13.497060 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-l4ctt" Nov 29 07:53:13 crc kubenswrapper[4795]: I1129 07:53:13.525735 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-mn9s2" Nov 29 07:53:13 crc kubenswrapper[4795]: I1129 07:53:13.845702 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-rb87j" Nov 29 07:53:13 crc kubenswrapper[4795]: I1129 07:53:13.935463 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-2vns9"] Nov 29 07:53:14 crc kubenswrapper[4795]: I1129 07:53:14.016865 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-l4ctt"] Nov 29 07:53:14 crc kubenswrapper[4795]: W1129 07:53:14.018521 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod286839ed_cd16_46c0_81a4_d0c90bb32fb4.slice/crio-41d68add99c1d705e8b4a79474c9e40eed7d3f663db8d0491bc0c607005a2e53 WatchSource:0}: Error finding container 41d68add99c1d705e8b4a79474c9e40eed7d3f663db8d0491bc0c607005a2e53: Status 404 returned error can't find the container with id 41d68add99c1d705e8b4a79474c9e40eed7d3f663db8d0491bc0c607005a2e53 Nov 29 
07:53:14 crc kubenswrapper[4795]: I1129 07:53:14.053298 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-mn9s2"] Nov 29 07:53:14 crc kubenswrapper[4795]: W1129 07:53:14.056608 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod780eadcf_077c_4f71_8570_5ebbca30d61e.slice/crio-aabd344654ddc1b387ed17affba65d90556fc111f0ea01c659c8a37e250613e2 WatchSource:0}: Error finding container aabd344654ddc1b387ed17affba65d90556fc111f0ea01c659c8a37e250613e2: Status 404 returned error can't find the container with id aabd344654ddc1b387ed17affba65d90556fc111f0ea01c659c8a37e250613e2 Nov 29 07:53:14 crc kubenswrapper[4795]: I1129 07:53:14.463460 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-mn9s2" event={"ID":"780eadcf-077c-4f71-8570-5ebbca30d61e","Type":"ContainerStarted","Data":"aabd344654ddc1b387ed17affba65d90556fc111f0ea01c659c8a37e250613e2"} Nov 29 07:53:14 crc kubenswrapper[4795]: I1129 07:53:14.464786 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-l4ctt" event={"ID":"286839ed-cd16-46c0-81a4-d0c90bb32fb4","Type":"ContainerStarted","Data":"41d68add99c1d705e8b4a79474c9e40eed7d3f663db8d0491bc0c607005a2e53"} Nov 29 07:53:14 crc kubenswrapper[4795]: I1129 07:53:14.465095 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-2vns9" event={"ID":"a921719c-ebed-49c3-9482-87b58c96c819","Type":"ContainerStarted","Data":"951b24c672b5219d749c688e99a31b8df45780a1829cd2ee89fa271fb813677e"} Nov 29 07:53:17 crc kubenswrapper[4795]: I1129 07:53:17.496805 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-2vns9" 
event={"ID":"a921719c-ebed-49c3-9482-87b58c96c819","Type":"ContainerStarted","Data":"65c7d7f8e239ba08c0d77ef587e0aafb04089fc91947546213c89a440a288e0f"} Nov 29 07:53:17 crc kubenswrapper[4795]: I1129 07:53:17.526285 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-2vns9" podStartSLOduration=1.755614655 podStartE2EDuration="4.526262079s" podCreationTimestamp="2025-11-29 07:53:13 +0000 UTC" firstStartedPulling="2025-11-29 07:53:13.938253351 +0000 UTC m=+839.913829141" lastFinishedPulling="2025-11-29 07:53:16.708900775 +0000 UTC m=+842.684476565" observedRunningTime="2025-11-29 07:53:17.520571495 +0000 UTC m=+843.496147285" watchObservedRunningTime="2025-11-29 07:53:17.526262079 +0000 UTC m=+843.501837869" Nov 29 07:53:21 crc kubenswrapper[4795]: I1129 07:53:21.523231 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj" event={"ID":"5f84b151-fbdd-40bc-9457-ec560370a162","Type":"ContainerStarted","Data":"26c2d9885e6dc0f0abe2cb6b2ca55709cf0d99425246a20e7db49e590a4a5193"} Nov 29 07:53:21 crc kubenswrapper[4795]: I1129 07:53:21.525010 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-mn9s2" event={"ID":"780eadcf-077c-4f71-8570-5ebbca30d61e","Type":"ContainerStarted","Data":"5f96a55a67aab9e8ada394296db050de31521178c58b918119bd6d16c0b0faad"} Nov 29 07:53:21 crc kubenswrapper[4795]: I1129 07:53:21.525150 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-mn9s2" Nov 29 07:53:21 crc kubenswrapper[4795]: I1129 07:53:21.526299 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-l4ctt" event={"ID":"286839ed-cd16-46c0-81a4-d0c90bb32fb4","Type":"ContainerStarted","Data":"38c87d7ba0ac7249c01ce5840dd9a1069ea0e1b4b853f85e1dcdfd10e906455d"} Nov 29 07:53:21 crc 
kubenswrapper[4795]: I1129 07:53:21.544306 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj" podStartSLOduration=-9223371978.31049 podStartE2EDuration="58.54428685s" podCreationTimestamp="2025-11-29 07:52:23 +0000 UTC" firstStartedPulling="2025-11-29 07:52:46.515944916 +0000 UTC m=+812.491520706" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:53:21.538889856 +0000 UTC m=+847.514465646" watchObservedRunningTime="2025-11-29 07:53:21.54428685 +0000 UTC m=+847.519862630" Nov 29 07:53:21 crc kubenswrapper[4795]: I1129 07:53:21.592528 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-mn9s2" podStartSLOduration=2.19346298 podStartE2EDuration="8.592509846s" podCreationTimestamp="2025-11-29 07:53:13 +0000 UTC" firstStartedPulling="2025-11-29 07:53:14.059458018 +0000 UTC m=+840.035033808" lastFinishedPulling="2025-11-29 07:53:20.458504884 +0000 UTC m=+846.434080674" observedRunningTime="2025-11-29 07:53:21.562104928 +0000 UTC m=+847.537680718" watchObservedRunningTime="2025-11-29 07:53:21.592509846 +0000 UTC m=+847.568085636" Nov 29 07:53:21 crc kubenswrapper[4795]: I1129 07:53:21.595650 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-l4ctt" podStartSLOduration=2.149255596 podStartE2EDuration="8.595637375s" podCreationTimestamp="2025-11-29 07:53:13 +0000 UTC" firstStartedPulling="2025-11-29 07:53:14.020473203 +0000 UTC m=+839.996048993" lastFinishedPulling="2025-11-29 07:53:20.466854982 +0000 UTC m=+846.442430772" observedRunningTime="2025-11-29 07:53:21.587258536 +0000 UTC m=+847.562834326" watchObservedRunningTime="2025-11-29 07:53:21.595637375 +0000 UTC m=+847.571213175" Nov 29 07:53:28 crc kubenswrapper[4795]: I1129 07:53:28.528901 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-mn9s2" Nov 29 07:53:41 crc kubenswrapper[4795]: I1129 07:53:41.941056 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:53:41 crc kubenswrapper[4795]: I1129 07:53:41.941652 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:53:41 crc kubenswrapper[4795]: I1129 07:53:41.941703 4795 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" Nov 29 07:53:41 crc kubenswrapper[4795]: I1129 07:53:41.942407 4795 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"50234306d145892f3e442365375a5d5625f50018f93ebd19c811e44bda9ed58d"} pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:53:41 crc kubenswrapper[4795]: I1129 07:53:41.942484 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" containerID="cri-o://50234306d145892f3e442365375a5d5625f50018f93ebd19c811e44bda9ed58d" gracePeriod=600 Nov 29 07:53:47 crc kubenswrapper[4795]: I1129 07:53:47.309225 4795 generic.go:334] "Generic (PLEG): container finished" 
podID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerID="50234306d145892f3e442365375a5d5625f50018f93ebd19c811e44bda9ed58d" exitCode=0 Nov 29 07:53:47 crc kubenswrapper[4795]: I1129 07:53:47.309299 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerDied","Data":"50234306d145892f3e442365375a5d5625f50018f93ebd19c811e44bda9ed58d"} Nov 29 07:53:47 crc kubenswrapper[4795]: I1129 07:53:47.309688 4795 scope.go:117] "RemoveContainer" containerID="dc32e1210618295b7a09888e63f3d2d9f6eb19cbef74c764b86be7c539c45ed2" Nov 29 07:53:48 crc kubenswrapper[4795]: I1129 07:53:48.316658 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerStarted","Data":"ab335149a24428cc9c0c5e9165d87fbe177cd1f8c86a0c5a601208bae120d8f2"} Nov 29 07:53:56 crc kubenswrapper[4795]: I1129 07:53:56.787565 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r"] Nov 29 07:53:56 crc kubenswrapper[4795]: I1129 07:53:56.792669 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r" Nov 29 07:53:56 crc kubenswrapper[4795]: I1129 07:53:56.796055 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r"] Nov 29 07:53:56 crc kubenswrapper[4795]: I1129 07:53:56.798465 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 29 07:53:56 crc kubenswrapper[4795]: I1129 07:53:56.820769 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/45ac49e4-eef2-4cef-bac3-06fb40427b2b-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r\" (UID: \"45ac49e4-eef2-4cef-bac3-06fb40427b2b\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r" Nov 29 07:53:56 crc kubenswrapper[4795]: I1129 07:53:56.820820 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24wrw\" (UniqueName: \"kubernetes.io/projected/45ac49e4-eef2-4cef-bac3-06fb40427b2b-kube-api-access-24wrw\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r\" (UID: \"45ac49e4-eef2-4cef-bac3-06fb40427b2b\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r" Nov 29 07:53:56 crc kubenswrapper[4795]: I1129 07:53:56.820872 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/45ac49e4-eef2-4cef-bac3-06fb40427b2b-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r\" (UID: \"45ac49e4-eef2-4cef-bac3-06fb40427b2b\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r" Nov 29 07:53:56 crc kubenswrapper[4795]: 
I1129 07:53:56.921888 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/45ac49e4-eef2-4cef-bac3-06fb40427b2b-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r\" (UID: \"45ac49e4-eef2-4cef-bac3-06fb40427b2b\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r" Nov 29 07:53:56 crc kubenswrapper[4795]: I1129 07:53:56.922022 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/45ac49e4-eef2-4cef-bac3-06fb40427b2b-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r\" (UID: \"45ac49e4-eef2-4cef-bac3-06fb40427b2b\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r" Nov 29 07:53:56 crc kubenswrapper[4795]: I1129 07:53:56.922056 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24wrw\" (UniqueName: \"kubernetes.io/projected/45ac49e4-eef2-4cef-bac3-06fb40427b2b-kube-api-access-24wrw\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r\" (UID: \"45ac49e4-eef2-4cef-bac3-06fb40427b2b\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r" Nov 29 07:53:56 crc kubenswrapper[4795]: I1129 07:53:56.922612 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/45ac49e4-eef2-4cef-bac3-06fb40427b2b-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r\" (UID: \"45ac49e4-eef2-4cef-bac3-06fb40427b2b\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r" Nov 29 07:53:56 crc kubenswrapper[4795]: I1129 07:53:56.922665 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/45ac49e4-eef2-4cef-bac3-06fb40427b2b-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r\" (UID: \"45ac49e4-eef2-4cef-bac3-06fb40427b2b\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r" Nov 29 07:53:56 crc kubenswrapper[4795]: I1129 07:53:56.952759 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24wrw\" (UniqueName: \"kubernetes.io/projected/45ac49e4-eef2-4cef-bac3-06fb40427b2b-kube-api-access-24wrw\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r\" (UID: \"45ac49e4-eef2-4cef-bac3-06fb40427b2b\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r" Nov 29 07:53:56 crc kubenswrapper[4795]: I1129 07:53:56.954946 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw"] Nov 29 07:53:56 crc kubenswrapper[4795]: I1129 07:53:56.956372 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw" Nov 29 07:53:56 crc kubenswrapper[4795]: I1129 07:53:56.968333 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw"] Nov 29 07:53:57 crc kubenswrapper[4795]: I1129 07:53:57.023734 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bf4954ba-ab7e-4c71-af52-f1cf638d0a0b-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw\" (UID: \"bf4954ba-ab7e-4c71-af52-f1cf638d0a0b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw" Nov 29 07:53:57 crc kubenswrapper[4795]: I1129 07:53:57.023788 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5fgn\" (UniqueName: \"kubernetes.io/projected/bf4954ba-ab7e-4c71-af52-f1cf638d0a0b-kube-api-access-m5fgn\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw\" (UID: \"bf4954ba-ab7e-4c71-af52-f1cf638d0a0b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw" Nov 29 07:53:57 crc kubenswrapper[4795]: I1129 07:53:57.023901 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bf4954ba-ab7e-4c71-af52-f1cf638d0a0b-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw\" (UID: \"bf4954ba-ab7e-4c71-af52-f1cf638d0a0b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw" Nov 29 07:53:57 crc kubenswrapper[4795]: I1129 07:53:57.118347 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r" Nov 29 07:53:57 crc kubenswrapper[4795]: I1129 07:53:57.124779 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bf4954ba-ab7e-4c71-af52-f1cf638d0a0b-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw\" (UID: \"bf4954ba-ab7e-4c71-af52-f1cf638d0a0b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw" Nov 29 07:53:57 crc kubenswrapper[4795]: I1129 07:53:57.124828 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5fgn\" (UniqueName: \"kubernetes.io/projected/bf4954ba-ab7e-4c71-af52-f1cf638d0a0b-kube-api-access-m5fgn\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw\" (UID: \"bf4954ba-ab7e-4c71-af52-f1cf638d0a0b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw" Nov 29 07:53:57 crc kubenswrapper[4795]: I1129 07:53:57.124900 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bf4954ba-ab7e-4c71-af52-f1cf638d0a0b-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw\" (UID: \"bf4954ba-ab7e-4c71-af52-f1cf638d0a0b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw" Nov 29 07:53:57 crc kubenswrapper[4795]: I1129 07:53:57.125324 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bf4954ba-ab7e-4c71-af52-f1cf638d0a0b-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw\" (UID: \"bf4954ba-ab7e-4c71-af52-f1cf638d0a0b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw" Nov 29 07:53:57 crc kubenswrapper[4795]: 
I1129 07:53:57.125367 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bf4954ba-ab7e-4c71-af52-f1cf638d0a0b-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw\" (UID: \"bf4954ba-ab7e-4c71-af52-f1cf638d0a0b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw" Nov 29 07:53:57 crc kubenswrapper[4795]: I1129 07:53:57.143426 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5fgn\" (UniqueName: \"kubernetes.io/projected/bf4954ba-ab7e-4c71-af52-f1cf638d0a0b-kube-api-access-m5fgn\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw\" (UID: \"bf4954ba-ab7e-4c71-af52-f1cf638d0a0b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw" Nov 29 07:53:57 crc kubenswrapper[4795]: I1129 07:53:57.276211 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw" Nov 29 07:53:58 crc kubenswrapper[4795]: I1129 07:53:58.022640 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw"] Nov 29 07:53:58 crc kubenswrapper[4795]: I1129 07:53:58.028988 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r"] Nov 29 07:53:58 crc kubenswrapper[4795]: I1129 07:53:58.386278 4795 generic.go:334] "Generic (PLEG): container finished" podID="45ac49e4-eef2-4cef-bac3-06fb40427b2b" containerID="3c63b21b5b202a4868cf3db4858c0f5aa57420f9215305ff02bc7dbbc7b6b86e" exitCode=0 Nov 29 07:53:58 crc kubenswrapper[4795]: I1129 07:53:58.386409 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r" 
event={"ID":"45ac49e4-eef2-4cef-bac3-06fb40427b2b","Type":"ContainerDied","Data":"3c63b21b5b202a4868cf3db4858c0f5aa57420f9215305ff02bc7dbbc7b6b86e"} Nov 29 07:53:58 crc kubenswrapper[4795]: I1129 07:53:58.387105 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r" event={"ID":"45ac49e4-eef2-4cef-bac3-06fb40427b2b","Type":"ContainerStarted","Data":"7374651a6fa35e45dddf1fb75a6d96c76bacc5c36faad20fbd610f3a4c241958"} Nov 29 07:53:58 crc kubenswrapper[4795]: I1129 07:53:58.391008 4795 generic.go:334] "Generic (PLEG): container finished" podID="bf4954ba-ab7e-4c71-af52-f1cf638d0a0b" containerID="47ec79a9f1aa87ca7e5169b7cb574ff61e2d987e13b36b31b6c5312c400ad13f" exitCode=0 Nov 29 07:53:58 crc kubenswrapper[4795]: I1129 07:53:58.391054 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw" event={"ID":"bf4954ba-ab7e-4c71-af52-f1cf638d0a0b","Type":"ContainerDied","Data":"47ec79a9f1aa87ca7e5169b7cb574ff61e2d987e13b36b31b6c5312c400ad13f"} Nov 29 07:53:58 crc kubenswrapper[4795]: I1129 07:53:58.391078 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw" event={"ID":"bf4954ba-ab7e-4c71-af52-f1cf638d0a0b","Type":"ContainerStarted","Data":"7c3f61bf8802c503a0230219ec7385bd4e83695243bfa9bb0315c5a743a9b66e"} Nov 29 07:54:00 crc kubenswrapper[4795]: I1129 07:54:00.404343 4795 generic.go:334] "Generic (PLEG): container finished" podID="bf4954ba-ab7e-4c71-af52-f1cf638d0a0b" containerID="20f98a66ed7137b4bfc3c33804aec2a21060e9e7244abb33b925f3a4af247e04" exitCode=0 Nov 29 07:54:00 crc kubenswrapper[4795]: I1129 07:54:00.404535 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw" 
event={"ID":"bf4954ba-ab7e-4c71-af52-f1cf638d0a0b","Type":"ContainerDied","Data":"20f98a66ed7137b4bfc3c33804aec2a21060e9e7244abb33b925f3a4af247e04"} Nov 29 07:54:00 crc kubenswrapper[4795]: I1129 07:54:00.406368 4795 generic.go:334] "Generic (PLEG): container finished" podID="45ac49e4-eef2-4cef-bac3-06fb40427b2b" containerID="078cd234d0fd424cdc213bc08854ca2f4759230c33b9de70fe1e1e9620933a05" exitCode=0 Nov 29 07:54:00 crc kubenswrapper[4795]: I1129 07:54:00.406397 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r" event={"ID":"45ac49e4-eef2-4cef-bac3-06fb40427b2b","Type":"ContainerDied","Data":"078cd234d0fd424cdc213bc08854ca2f4759230c33b9de70fe1e1e9620933a05"} Nov 29 07:54:01 crc kubenswrapper[4795]: I1129 07:54:01.412606 4795 generic.go:334] "Generic (PLEG): container finished" podID="bf4954ba-ab7e-4c71-af52-f1cf638d0a0b" containerID="26f6f281f298fbf569abb48626402c55ef20ff02d7268dd03f85058d541ea5d1" exitCode=0 Nov 29 07:54:01 crc kubenswrapper[4795]: I1129 07:54:01.412680 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw" event={"ID":"bf4954ba-ab7e-4c71-af52-f1cf638d0a0b","Type":"ContainerDied","Data":"26f6f281f298fbf569abb48626402c55ef20ff02d7268dd03f85058d541ea5d1"} Nov 29 07:54:01 crc kubenswrapper[4795]: I1129 07:54:01.414708 4795 generic.go:334] "Generic (PLEG): container finished" podID="45ac49e4-eef2-4cef-bac3-06fb40427b2b" containerID="e81d3fd15f7d9658ef9f66579ab860d67a713145ba2506c400af45ec397931e0" exitCode=0 Nov 29 07:54:01 crc kubenswrapper[4795]: I1129 07:54:01.414887 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r" event={"ID":"45ac49e4-eef2-4cef-bac3-06fb40427b2b","Type":"ContainerDied","Data":"e81d3fd15f7d9658ef9f66579ab860d67a713145ba2506c400af45ec397931e0"} Nov 
29 07:54:02 crc kubenswrapper[4795]: I1129 07:54:02.709448 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r" Nov 29 07:54:02 crc kubenswrapper[4795]: I1129 07:54:02.756748 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw" Nov 29 07:54:02 crc kubenswrapper[4795]: I1129 07:54:02.811935 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bf4954ba-ab7e-4c71-af52-f1cf638d0a0b-bundle\") pod \"bf4954ba-ab7e-4c71-af52-f1cf638d0a0b\" (UID: \"bf4954ba-ab7e-4c71-af52-f1cf638d0a0b\") " Nov 29 07:54:02 crc kubenswrapper[4795]: I1129 07:54:02.812327 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24wrw\" (UniqueName: \"kubernetes.io/projected/45ac49e4-eef2-4cef-bac3-06fb40427b2b-kube-api-access-24wrw\") pod \"45ac49e4-eef2-4cef-bac3-06fb40427b2b\" (UID: \"45ac49e4-eef2-4cef-bac3-06fb40427b2b\") " Nov 29 07:54:02 crc kubenswrapper[4795]: I1129 07:54:02.812460 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bf4954ba-ab7e-4c71-af52-f1cf638d0a0b-util\") pod \"bf4954ba-ab7e-4c71-af52-f1cf638d0a0b\" (UID: \"bf4954ba-ab7e-4c71-af52-f1cf638d0a0b\") " Nov 29 07:54:02 crc kubenswrapper[4795]: I1129 07:54:02.812996 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/45ac49e4-eef2-4cef-bac3-06fb40427b2b-util\") pod \"45ac49e4-eef2-4cef-bac3-06fb40427b2b\" (UID: \"45ac49e4-eef2-4cef-bac3-06fb40427b2b\") " Nov 29 07:54:02 crc kubenswrapper[4795]: I1129 07:54:02.813360 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-m5fgn\" (UniqueName: \"kubernetes.io/projected/bf4954ba-ab7e-4c71-af52-f1cf638d0a0b-kube-api-access-m5fgn\") pod \"bf4954ba-ab7e-4c71-af52-f1cf638d0a0b\" (UID: \"bf4954ba-ab7e-4c71-af52-f1cf638d0a0b\") " Nov 29 07:54:02 crc kubenswrapper[4795]: I1129 07:54:02.813462 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf4954ba-ab7e-4c71-af52-f1cf638d0a0b-bundle" (OuterVolumeSpecName: "bundle") pod "bf4954ba-ab7e-4c71-af52-f1cf638d0a0b" (UID: "bf4954ba-ab7e-4c71-af52-f1cf638d0a0b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:54:02 crc kubenswrapper[4795]: I1129 07:54:02.813711 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/45ac49e4-eef2-4cef-bac3-06fb40427b2b-bundle\") pod \"45ac49e4-eef2-4cef-bac3-06fb40427b2b\" (UID: \"45ac49e4-eef2-4cef-bac3-06fb40427b2b\") " Nov 29 07:54:02 crc kubenswrapper[4795]: I1129 07:54:02.814565 4795 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bf4954ba-ab7e-4c71-af52-f1cf638d0a0b-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:02 crc kubenswrapper[4795]: I1129 07:54:02.816340 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45ac49e4-eef2-4cef-bac3-06fb40427b2b-bundle" (OuterVolumeSpecName: "bundle") pod "45ac49e4-eef2-4cef-bac3-06fb40427b2b" (UID: "45ac49e4-eef2-4cef-bac3-06fb40427b2b"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:54:02 crc kubenswrapper[4795]: I1129 07:54:02.819575 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45ac49e4-eef2-4cef-bac3-06fb40427b2b-kube-api-access-24wrw" (OuterVolumeSpecName: "kube-api-access-24wrw") pod "45ac49e4-eef2-4cef-bac3-06fb40427b2b" (UID: "45ac49e4-eef2-4cef-bac3-06fb40427b2b"). InnerVolumeSpecName "kube-api-access-24wrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:54:02 crc kubenswrapper[4795]: I1129 07:54:02.820125 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf4954ba-ab7e-4c71-af52-f1cf638d0a0b-kube-api-access-m5fgn" (OuterVolumeSpecName: "kube-api-access-m5fgn") pod "bf4954ba-ab7e-4c71-af52-f1cf638d0a0b" (UID: "bf4954ba-ab7e-4c71-af52-f1cf638d0a0b"). InnerVolumeSpecName "kube-api-access-m5fgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:54:02 crc kubenswrapper[4795]: I1129 07:54:02.828949 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf4954ba-ab7e-4c71-af52-f1cf638d0a0b-util" (OuterVolumeSpecName: "util") pod "bf4954ba-ab7e-4c71-af52-f1cf638d0a0b" (UID: "bf4954ba-ab7e-4c71-af52-f1cf638d0a0b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:54:02 crc kubenswrapper[4795]: I1129 07:54:02.845203 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45ac49e4-eef2-4cef-bac3-06fb40427b2b-util" (OuterVolumeSpecName: "util") pod "45ac49e4-eef2-4cef-bac3-06fb40427b2b" (UID: "45ac49e4-eef2-4cef-bac3-06fb40427b2b"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:54:02 crc kubenswrapper[4795]: I1129 07:54:02.916190 4795 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bf4954ba-ab7e-4c71-af52-f1cf638d0a0b-util\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:02 crc kubenswrapper[4795]: I1129 07:54:02.916221 4795 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/45ac49e4-eef2-4cef-bac3-06fb40427b2b-util\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:02 crc kubenswrapper[4795]: I1129 07:54:02.916231 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5fgn\" (UniqueName: \"kubernetes.io/projected/bf4954ba-ab7e-4c71-af52-f1cf638d0a0b-kube-api-access-m5fgn\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:02 crc kubenswrapper[4795]: I1129 07:54:02.916241 4795 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/45ac49e4-eef2-4cef-bac3-06fb40427b2b-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:02 crc kubenswrapper[4795]: I1129 07:54:02.916252 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24wrw\" (UniqueName: \"kubernetes.io/projected/45ac49e4-eef2-4cef-bac3-06fb40427b2b-kube-api-access-24wrw\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:03 crc kubenswrapper[4795]: I1129 07:54:03.433845 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r" event={"ID":"45ac49e4-eef2-4cef-bac3-06fb40427b2b","Type":"ContainerDied","Data":"7374651a6fa35e45dddf1fb75a6d96c76bacc5c36faad20fbd610f3a4c241958"} Nov 29 07:54:03 crc kubenswrapper[4795]: I1129 07:54:03.433897 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r" Nov 29 07:54:03 crc kubenswrapper[4795]: I1129 07:54:03.433921 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7374651a6fa35e45dddf1fb75a6d96c76bacc5c36faad20fbd610f3a4c241958" Nov 29 07:54:03 crc kubenswrapper[4795]: I1129 07:54:03.437711 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw" event={"ID":"bf4954ba-ab7e-4c71-af52-f1cf638d0a0b","Type":"ContainerDied","Data":"7c3f61bf8802c503a0230219ec7385bd4e83695243bfa9bb0315c5a743a9b66e"} Nov 29 07:54:03 crc kubenswrapper[4795]: I1129 07:54:03.437752 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c3f61bf8802c503a0230219ec7385bd4e83695243bfa9bb0315c5a743a9b66e" Nov 29 07:54:03 crc kubenswrapper[4795]: I1129 07:54:03.437779 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw" Nov 29 07:54:14 crc kubenswrapper[4795]: I1129 07:54:14.806575 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-585cfc87fc-tt7jf"] Nov 29 07:54:14 crc kubenswrapper[4795]: E1129 07:54:14.807443 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf4954ba-ab7e-4c71-af52-f1cf638d0a0b" containerName="util" Nov 29 07:54:14 crc kubenswrapper[4795]: I1129 07:54:14.807455 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf4954ba-ab7e-4c71-af52-f1cf638d0a0b" containerName="util" Nov 29 07:54:14 crc kubenswrapper[4795]: E1129 07:54:14.807467 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf4954ba-ab7e-4c71-af52-f1cf638d0a0b" containerName="extract" Nov 29 07:54:14 crc kubenswrapper[4795]: I1129 07:54:14.807473 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf4954ba-ab7e-4c71-af52-f1cf638d0a0b" containerName="extract" Nov 29 07:54:14 crc kubenswrapper[4795]: E1129 07:54:14.807492 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45ac49e4-eef2-4cef-bac3-06fb40427b2b" containerName="extract" Nov 29 07:54:14 crc kubenswrapper[4795]: I1129 07:54:14.807499 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="45ac49e4-eef2-4cef-bac3-06fb40427b2b" containerName="extract" Nov 29 07:54:14 crc kubenswrapper[4795]: E1129 07:54:14.807508 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45ac49e4-eef2-4cef-bac3-06fb40427b2b" containerName="pull" Nov 29 07:54:14 crc kubenswrapper[4795]: I1129 07:54:14.807516 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="45ac49e4-eef2-4cef-bac3-06fb40427b2b" containerName="pull" Nov 29 07:54:14 crc kubenswrapper[4795]: E1129 07:54:14.807530 4795 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="45ac49e4-eef2-4cef-bac3-06fb40427b2b" containerName="util" Nov 29 07:54:14 crc kubenswrapper[4795]: I1129 07:54:14.807536 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="45ac49e4-eef2-4cef-bac3-06fb40427b2b" containerName="util" Nov 29 07:54:14 crc kubenswrapper[4795]: E1129 07:54:14.807545 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf4954ba-ab7e-4c71-af52-f1cf638d0a0b" containerName="pull" Nov 29 07:54:14 crc kubenswrapper[4795]: I1129 07:54:14.807551 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf4954ba-ab7e-4c71-af52-f1cf638d0a0b" containerName="pull" Nov 29 07:54:14 crc kubenswrapper[4795]: I1129 07:54:14.807671 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="45ac49e4-eef2-4cef-bac3-06fb40427b2b" containerName="extract" Nov 29 07:54:14 crc kubenswrapper[4795]: I1129 07:54:14.807685 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf4954ba-ab7e-4c71-af52-f1cf638d0a0b" containerName="extract" Nov 29 07:54:14 crc kubenswrapper[4795]: I1129 07:54:14.808307 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-585cfc87fc-tt7jf" Nov 29 07:54:14 crc kubenswrapper[4795]: I1129 07:54:14.813710 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Nov 29 07:54:14 crc kubenswrapper[4795]: I1129 07:54:14.814471 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Nov 29 07:54:14 crc kubenswrapper[4795]: I1129 07:54:14.814639 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Nov 29 07:54:14 crc kubenswrapper[4795]: I1129 07:54:14.820710 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-zzhn8" Nov 29 07:54:14 crc kubenswrapper[4795]: I1129 07:54:14.822095 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Nov 29 07:54:14 crc kubenswrapper[4795]: I1129 07:54:14.822300 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Nov 29 07:54:14 crc kubenswrapper[4795]: I1129 07:54:14.828058 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-585cfc87fc-tt7jf"] Nov 29 07:54:14 crc kubenswrapper[4795]: I1129 07:54:14.890626 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/48d9911d-3ed8-4474-9537-cbfcb462dd44-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-585cfc87fc-tt7jf\" (UID: \"48d9911d-3ed8-4474-9537-cbfcb462dd44\") " pod="openshift-operators-redhat/loki-operator-controller-manager-585cfc87fc-tt7jf" Nov 29 07:54:14 crc 
kubenswrapper[4795]: I1129 07:54:14.890700 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/48d9911d-3ed8-4474-9537-cbfcb462dd44-manager-config\") pod \"loki-operator-controller-manager-585cfc87fc-tt7jf\" (UID: \"48d9911d-3ed8-4474-9537-cbfcb462dd44\") " pod="openshift-operators-redhat/loki-operator-controller-manager-585cfc87fc-tt7jf" Nov 29 07:54:14 crc kubenswrapper[4795]: I1129 07:54:14.890809 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2778n\" (UniqueName: \"kubernetes.io/projected/48d9911d-3ed8-4474-9537-cbfcb462dd44-kube-api-access-2778n\") pod \"loki-operator-controller-manager-585cfc87fc-tt7jf\" (UID: \"48d9911d-3ed8-4474-9537-cbfcb462dd44\") " pod="openshift-operators-redhat/loki-operator-controller-manager-585cfc87fc-tt7jf" Nov 29 07:54:14 crc kubenswrapper[4795]: I1129 07:54:14.890838 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/48d9911d-3ed8-4474-9537-cbfcb462dd44-apiservice-cert\") pod \"loki-operator-controller-manager-585cfc87fc-tt7jf\" (UID: \"48d9911d-3ed8-4474-9537-cbfcb462dd44\") " pod="openshift-operators-redhat/loki-operator-controller-manager-585cfc87fc-tt7jf" Nov 29 07:54:14 crc kubenswrapper[4795]: I1129 07:54:14.890863 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/48d9911d-3ed8-4474-9537-cbfcb462dd44-webhook-cert\") pod \"loki-operator-controller-manager-585cfc87fc-tt7jf\" (UID: \"48d9911d-3ed8-4474-9537-cbfcb462dd44\") " pod="openshift-operators-redhat/loki-operator-controller-manager-585cfc87fc-tt7jf" Nov 29 07:54:14 crc kubenswrapper[4795]: I1129 07:54:14.992434 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"manager-config\" (UniqueName: \"kubernetes.io/configmap/48d9911d-3ed8-4474-9537-cbfcb462dd44-manager-config\") pod \"loki-operator-controller-manager-585cfc87fc-tt7jf\" (UID: \"48d9911d-3ed8-4474-9537-cbfcb462dd44\") " pod="openshift-operators-redhat/loki-operator-controller-manager-585cfc87fc-tt7jf" Nov 29 07:54:14 crc kubenswrapper[4795]: I1129 07:54:14.993320 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/48d9911d-3ed8-4474-9537-cbfcb462dd44-manager-config\") pod \"loki-operator-controller-manager-585cfc87fc-tt7jf\" (UID: \"48d9911d-3ed8-4474-9537-cbfcb462dd44\") " pod="openshift-operators-redhat/loki-operator-controller-manager-585cfc87fc-tt7jf" Nov 29 07:54:14 crc kubenswrapper[4795]: I1129 07:54:14.993470 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2778n\" (UniqueName: \"kubernetes.io/projected/48d9911d-3ed8-4474-9537-cbfcb462dd44-kube-api-access-2778n\") pod \"loki-operator-controller-manager-585cfc87fc-tt7jf\" (UID: \"48d9911d-3ed8-4474-9537-cbfcb462dd44\") " pod="openshift-operators-redhat/loki-operator-controller-manager-585cfc87fc-tt7jf" Nov 29 07:54:14 crc kubenswrapper[4795]: I1129 07:54:14.993820 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/48d9911d-3ed8-4474-9537-cbfcb462dd44-apiservice-cert\") pod \"loki-operator-controller-manager-585cfc87fc-tt7jf\" (UID: \"48d9911d-3ed8-4474-9537-cbfcb462dd44\") " pod="openshift-operators-redhat/loki-operator-controller-manager-585cfc87fc-tt7jf" Nov 29 07:54:14 crc kubenswrapper[4795]: I1129 07:54:14.994310 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/48d9911d-3ed8-4474-9537-cbfcb462dd44-webhook-cert\") pod \"loki-operator-controller-manager-585cfc87fc-tt7jf\" (UID: 
\"48d9911d-3ed8-4474-9537-cbfcb462dd44\") " pod="openshift-operators-redhat/loki-operator-controller-manager-585cfc87fc-tt7jf" Nov 29 07:54:14 crc kubenswrapper[4795]: I1129 07:54:14.994451 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/48d9911d-3ed8-4474-9537-cbfcb462dd44-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-585cfc87fc-tt7jf\" (UID: \"48d9911d-3ed8-4474-9537-cbfcb462dd44\") " pod="openshift-operators-redhat/loki-operator-controller-manager-585cfc87fc-tt7jf" Nov 29 07:54:15 crc kubenswrapper[4795]: I1129 07:54:15.000375 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/48d9911d-3ed8-4474-9537-cbfcb462dd44-apiservice-cert\") pod \"loki-operator-controller-manager-585cfc87fc-tt7jf\" (UID: \"48d9911d-3ed8-4474-9537-cbfcb462dd44\") " pod="openshift-operators-redhat/loki-operator-controller-manager-585cfc87fc-tt7jf" Nov 29 07:54:15 crc kubenswrapper[4795]: I1129 07:54:15.001048 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/48d9911d-3ed8-4474-9537-cbfcb462dd44-webhook-cert\") pod \"loki-operator-controller-manager-585cfc87fc-tt7jf\" (UID: \"48d9911d-3ed8-4474-9537-cbfcb462dd44\") " pod="openshift-operators-redhat/loki-operator-controller-manager-585cfc87fc-tt7jf" Nov 29 07:54:15 crc kubenswrapper[4795]: I1129 07:54:15.004721 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/48d9911d-3ed8-4474-9537-cbfcb462dd44-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-585cfc87fc-tt7jf\" (UID: \"48d9911d-3ed8-4474-9537-cbfcb462dd44\") " pod="openshift-operators-redhat/loki-operator-controller-manager-585cfc87fc-tt7jf" Nov 29 07:54:15 crc kubenswrapper[4795]: I1129 07:54:15.019057 4795 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2778n\" (UniqueName: \"kubernetes.io/projected/48d9911d-3ed8-4474-9537-cbfcb462dd44-kube-api-access-2778n\") pod \"loki-operator-controller-manager-585cfc87fc-tt7jf\" (UID: \"48d9911d-3ed8-4474-9537-cbfcb462dd44\") " pod="openshift-operators-redhat/loki-operator-controller-manager-585cfc87fc-tt7jf" Nov 29 07:54:15 crc kubenswrapper[4795]: I1129 07:54:15.123733 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-585cfc87fc-tt7jf" Nov 29 07:54:15 crc kubenswrapper[4795]: I1129 07:54:15.637016 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-585cfc87fc-tt7jf"] Nov 29 07:54:15 crc kubenswrapper[4795]: I1129 07:54:15.647113 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-gh2bv"] Nov 29 07:54:15 crc kubenswrapper[4795]: I1129 07:54:15.648422 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-ff9846bd-gh2bv" Nov 29 07:54:15 crc kubenswrapper[4795]: I1129 07:54:15.650994 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-9l7xq" Nov 29 07:54:15 crc kubenswrapper[4795]: I1129 07:54:15.652234 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Nov 29 07:54:15 crc kubenswrapper[4795]: I1129 07:54:15.652544 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Nov 29 07:54:15 crc kubenswrapper[4795]: I1129 07:54:15.661567 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-gh2bv"] Nov 29 07:54:15 crc kubenswrapper[4795]: I1129 07:54:15.703680 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxvpx\" (UniqueName: \"kubernetes.io/projected/ba3b17d1-c4c3-4575-b722-c8134c6cd690-kube-api-access-cxvpx\") pod \"cluster-logging-operator-ff9846bd-gh2bv\" (UID: \"ba3b17d1-c4c3-4575-b722-c8134c6cd690\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-gh2bv" Nov 29 07:54:15 crc kubenswrapper[4795]: I1129 07:54:15.804946 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxvpx\" (UniqueName: \"kubernetes.io/projected/ba3b17d1-c4c3-4575-b722-c8134c6cd690-kube-api-access-cxvpx\") pod \"cluster-logging-operator-ff9846bd-gh2bv\" (UID: \"ba3b17d1-c4c3-4575-b722-c8134c6cd690\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-gh2bv" Nov 29 07:54:15 crc kubenswrapper[4795]: I1129 07:54:15.822673 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxvpx\" (UniqueName: \"kubernetes.io/projected/ba3b17d1-c4c3-4575-b722-c8134c6cd690-kube-api-access-cxvpx\") pod 
\"cluster-logging-operator-ff9846bd-gh2bv\" (UID: \"ba3b17d1-c4c3-4575-b722-c8134c6cd690\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-gh2bv" Nov 29 07:54:15 crc kubenswrapper[4795]: I1129 07:54:15.971296 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-ff9846bd-gh2bv" Nov 29 07:54:16 crc kubenswrapper[4795]: I1129 07:54:16.209348 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-gh2bv"] Nov 29 07:54:16 crc kubenswrapper[4795]: I1129 07:54:16.525752 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-585cfc87fc-tt7jf" event={"ID":"48d9911d-3ed8-4474-9537-cbfcb462dd44","Type":"ContainerStarted","Data":"2b4d937c01241c067637aa685064619115ac4ddbb0d3992d207601f0ea562107"} Nov 29 07:54:16 crc kubenswrapper[4795]: I1129 07:54:16.527187 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-ff9846bd-gh2bv" event={"ID":"ba3b17d1-c4c3-4575-b722-c8134c6cd690","Type":"ContainerStarted","Data":"ab510ed6e79bf4bb3a41a34d3c48609ea9cf07ef11a73d6aff15ea7da319c6e3"} Nov 29 07:54:26 crc kubenswrapper[4795]: I1129 07:54:26.630360 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-585cfc87fc-tt7jf" event={"ID":"48d9911d-3ed8-4474-9537-cbfcb462dd44","Type":"ContainerStarted","Data":"4303d80dfa62cfbfd5e10d09b369f6425ad2ee35472917947d070cea8edf0a81"} Nov 29 07:54:26 crc kubenswrapper[4795]: I1129 07:54:26.631907 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-ff9846bd-gh2bv" event={"ID":"ba3b17d1-c4c3-4575-b722-c8134c6cd690","Type":"ContainerStarted","Data":"e68fa378629817323eef16e71df81273cefe3f090590348d3b77028b9c111f95"} Nov 29 07:54:26 crc kubenswrapper[4795]: I1129 07:54:26.661702 4795 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-ff9846bd-gh2bv" podStartSLOduration=2.128054284 podStartE2EDuration="11.661685953s" podCreationTimestamp="2025-11-29 07:54:15 +0000 UTC" firstStartedPulling="2025-11-29 07:54:16.228996166 +0000 UTC m=+902.204571956" lastFinishedPulling="2025-11-29 07:54:25.762627825 +0000 UTC m=+911.738203625" observedRunningTime="2025-11-29 07:54:26.656558917 +0000 UTC m=+912.632134707" watchObservedRunningTime="2025-11-29 07:54:26.661685953 +0000 UTC m=+912.637261743" Nov 29 07:54:27 crc kubenswrapper[4795]: I1129 07:54:27.108970 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-t7gdn"] Nov 29 07:54:27 crc kubenswrapper[4795]: I1129 07:54:27.110482 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t7gdn" Nov 29 07:54:27 crc kubenswrapper[4795]: I1129 07:54:27.119322 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t7gdn"] Nov 29 07:54:27 crc kubenswrapper[4795]: I1129 07:54:27.222317 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d391c8f2-cee0-4f56-871a-63d84766a943-utilities\") pod \"redhat-marketplace-t7gdn\" (UID: \"d391c8f2-cee0-4f56-871a-63d84766a943\") " pod="openshift-marketplace/redhat-marketplace-t7gdn" Nov 29 07:54:27 crc kubenswrapper[4795]: I1129 07:54:27.222638 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d391c8f2-cee0-4f56-871a-63d84766a943-catalog-content\") pod \"redhat-marketplace-t7gdn\" (UID: \"d391c8f2-cee0-4f56-871a-63d84766a943\") " pod="openshift-marketplace/redhat-marketplace-t7gdn" Nov 29 07:54:27 crc kubenswrapper[4795]: I1129 07:54:27.222680 4795 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nshcg\" (UniqueName: \"kubernetes.io/projected/d391c8f2-cee0-4f56-871a-63d84766a943-kube-api-access-nshcg\") pod \"redhat-marketplace-t7gdn\" (UID: \"d391c8f2-cee0-4f56-871a-63d84766a943\") " pod="openshift-marketplace/redhat-marketplace-t7gdn" Nov 29 07:54:27 crc kubenswrapper[4795]: I1129 07:54:27.323766 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nshcg\" (UniqueName: \"kubernetes.io/projected/d391c8f2-cee0-4f56-871a-63d84766a943-kube-api-access-nshcg\") pod \"redhat-marketplace-t7gdn\" (UID: \"d391c8f2-cee0-4f56-871a-63d84766a943\") " pod="openshift-marketplace/redhat-marketplace-t7gdn" Nov 29 07:54:27 crc kubenswrapper[4795]: I1129 07:54:27.323847 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d391c8f2-cee0-4f56-871a-63d84766a943-utilities\") pod \"redhat-marketplace-t7gdn\" (UID: \"d391c8f2-cee0-4f56-871a-63d84766a943\") " pod="openshift-marketplace/redhat-marketplace-t7gdn" Nov 29 07:54:27 crc kubenswrapper[4795]: I1129 07:54:27.323950 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d391c8f2-cee0-4f56-871a-63d84766a943-catalog-content\") pod \"redhat-marketplace-t7gdn\" (UID: \"d391c8f2-cee0-4f56-871a-63d84766a943\") " pod="openshift-marketplace/redhat-marketplace-t7gdn" Nov 29 07:54:27 crc kubenswrapper[4795]: I1129 07:54:27.324444 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d391c8f2-cee0-4f56-871a-63d84766a943-catalog-content\") pod \"redhat-marketplace-t7gdn\" (UID: \"d391c8f2-cee0-4f56-871a-63d84766a943\") " pod="openshift-marketplace/redhat-marketplace-t7gdn" Nov 29 07:54:27 crc kubenswrapper[4795]: I1129 07:54:27.324481 4795 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d391c8f2-cee0-4f56-871a-63d84766a943-utilities\") pod \"redhat-marketplace-t7gdn\" (UID: \"d391c8f2-cee0-4f56-871a-63d84766a943\") " pod="openshift-marketplace/redhat-marketplace-t7gdn" Nov 29 07:54:27 crc kubenswrapper[4795]: I1129 07:54:27.345314 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nshcg\" (UniqueName: \"kubernetes.io/projected/d391c8f2-cee0-4f56-871a-63d84766a943-kube-api-access-nshcg\") pod \"redhat-marketplace-t7gdn\" (UID: \"d391c8f2-cee0-4f56-871a-63d84766a943\") " pod="openshift-marketplace/redhat-marketplace-t7gdn" Nov 29 07:54:27 crc kubenswrapper[4795]: I1129 07:54:27.427985 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t7gdn" Nov 29 07:54:27 crc kubenswrapper[4795]: I1129 07:54:27.909457 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gkz4q"] Nov 29 07:54:27 crc kubenswrapper[4795]: I1129 07:54:27.910921 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gkz4q" Nov 29 07:54:27 crc kubenswrapper[4795]: I1129 07:54:27.924095 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gkz4q"] Nov 29 07:54:27 crc kubenswrapper[4795]: I1129 07:54:27.935530 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tvgg\" (UniqueName: \"kubernetes.io/projected/4374d966-3bb0-4da2-9a6b-3827abf0789e-kube-api-access-9tvgg\") pod \"community-operators-gkz4q\" (UID: \"4374d966-3bb0-4da2-9a6b-3827abf0789e\") " pod="openshift-marketplace/community-operators-gkz4q" Nov 29 07:54:27 crc kubenswrapper[4795]: I1129 07:54:27.935616 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4374d966-3bb0-4da2-9a6b-3827abf0789e-catalog-content\") pod \"community-operators-gkz4q\" (UID: \"4374d966-3bb0-4da2-9a6b-3827abf0789e\") " pod="openshift-marketplace/community-operators-gkz4q" Nov 29 07:54:27 crc kubenswrapper[4795]: I1129 07:54:27.935641 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4374d966-3bb0-4da2-9a6b-3827abf0789e-utilities\") pod \"community-operators-gkz4q\" (UID: \"4374d966-3bb0-4da2-9a6b-3827abf0789e\") " pod="openshift-marketplace/community-operators-gkz4q" Nov 29 07:54:28 crc kubenswrapper[4795]: I1129 07:54:28.029152 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t7gdn"] Nov 29 07:54:28 crc kubenswrapper[4795]: I1129 07:54:28.036716 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tvgg\" (UniqueName: \"kubernetes.io/projected/4374d966-3bb0-4da2-9a6b-3827abf0789e-kube-api-access-9tvgg\") pod \"community-operators-gkz4q\" (UID: 
\"4374d966-3bb0-4da2-9a6b-3827abf0789e\") " pod="openshift-marketplace/community-operators-gkz4q" Nov 29 07:54:28 crc kubenswrapper[4795]: I1129 07:54:28.036772 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4374d966-3bb0-4da2-9a6b-3827abf0789e-catalog-content\") pod \"community-operators-gkz4q\" (UID: \"4374d966-3bb0-4da2-9a6b-3827abf0789e\") " pod="openshift-marketplace/community-operators-gkz4q" Nov 29 07:54:28 crc kubenswrapper[4795]: I1129 07:54:28.036806 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4374d966-3bb0-4da2-9a6b-3827abf0789e-utilities\") pod \"community-operators-gkz4q\" (UID: \"4374d966-3bb0-4da2-9a6b-3827abf0789e\") " pod="openshift-marketplace/community-operators-gkz4q" Nov 29 07:54:28 crc kubenswrapper[4795]: I1129 07:54:28.037424 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4374d966-3bb0-4da2-9a6b-3827abf0789e-catalog-content\") pod \"community-operators-gkz4q\" (UID: \"4374d966-3bb0-4da2-9a6b-3827abf0789e\") " pod="openshift-marketplace/community-operators-gkz4q" Nov 29 07:54:28 crc kubenswrapper[4795]: I1129 07:54:28.037430 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4374d966-3bb0-4da2-9a6b-3827abf0789e-utilities\") pod \"community-operators-gkz4q\" (UID: \"4374d966-3bb0-4da2-9a6b-3827abf0789e\") " pod="openshift-marketplace/community-operators-gkz4q" Nov 29 07:54:28 crc kubenswrapper[4795]: I1129 07:54:28.072497 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tvgg\" (UniqueName: \"kubernetes.io/projected/4374d966-3bb0-4da2-9a6b-3827abf0789e-kube-api-access-9tvgg\") pod \"community-operators-gkz4q\" (UID: \"4374d966-3bb0-4da2-9a6b-3827abf0789e\") " 
pod="openshift-marketplace/community-operators-gkz4q" Nov 29 07:54:28 crc kubenswrapper[4795]: I1129 07:54:28.235328 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gkz4q" Nov 29 07:54:28 crc kubenswrapper[4795]: I1129 07:54:28.675303 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t7gdn" event={"ID":"d391c8f2-cee0-4f56-871a-63d84766a943","Type":"ContainerDied","Data":"bba801d57dde641cdc7609569745cb63db75980acd7323a3b15bc91a8e1712bc"} Nov 29 07:54:28 crc kubenswrapper[4795]: I1129 07:54:28.677653 4795 generic.go:334] "Generic (PLEG): container finished" podID="d391c8f2-cee0-4f56-871a-63d84766a943" containerID="bba801d57dde641cdc7609569745cb63db75980acd7323a3b15bc91a8e1712bc" exitCode=0 Nov 29 07:54:28 crc kubenswrapper[4795]: I1129 07:54:28.677706 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t7gdn" event={"ID":"d391c8f2-cee0-4f56-871a-63d84766a943","Type":"ContainerStarted","Data":"c47458be0cff85ff02e6ffe18bd9d3d72b247e1c5b7b8bd366260fa576f2bfef"} Nov 29 07:54:28 crc kubenswrapper[4795]: I1129 07:54:28.810081 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gkz4q"] Nov 29 07:54:28 crc kubenswrapper[4795]: W1129 07:54:28.820149 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4374d966_3bb0_4da2_9a6b_3827abf0789e.slice/crio-d22f7cb7882962a537013b63bf9f209aaf3f02b6323c7999f70e2467aed56a7d WatchSource:0}: Error finding container d22f7cb7882962a537013b63bf9f209aaf3f02b6323c7999f70e2467aed56a7d: Status 404 returned error can't find the container with id d22f7cb7882962a537013b63bf9f209aaf3f02b6323c7999f70e2467aed56a7d Nov 29 07:54:29 crc kubenswrapper[4795]: I1129 07:54:29.737357 4795 generic.go:334] "Generic (PLEG): container finished" 
podID="4374d966-3bb0-4da2-9a6b-3827abf0789e" containerID="1a7e33a22a5dbebddb641d4374233cd5dab3380344ee808a71736f075153a0e2" exitCode=0 Nov 29 07:54:29 crc kubenswrapper[4795]: I1129 07:54:29.737547 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gkz4q" event={"ID":"4374d966-3bb0-4da2-9a6b-3827abf0789e","Type":"ContainerDied","Data":"1a7e33a22a5dbebddb641d4374233cd5dab3380344ee808a71736f075153a0e2"} Nov 29 07:54:29 crc kubenswrapper[4795]: I1129 07:54:29.738497 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gkz4q" event={"ID":"4374d966-3bb0-4da2-9a6b-3827abf0789e","Type":"ContainerStarted","Data":"d22f7cb7882962a537013b63bf9f209aaf3f02b6323c7999f70e2467aed56a7d"} Nov 29 07:54:30 crc kubenswrapper[4795]: I1129 07:54:30.749245 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t7gdn" event={"ID":"d391c8f2-cee0-4f56-871a-63d84766a943","Type":"ContainerStarted","Data":"48be6d699a79f59a132b468c71b22f4fc670d3cce623cf1803acdd76567dd652"} Nov 29 07:54:31 crc kubenswrapper[4795]: I1129 07:54:31.759200 4795 generic.go:334] "Generic (PLEG): container finished" podID="d391c8f2-cee0-4f56-871a-63d84766a943" containerID="48be6d699a79f59a132b468c71b22f4fc670d3cce623cf1803acdd76567dd652" exitCode=0 Nov 29 07:54:31 crc kubenswrapper[4795]: I1129 07:54:31.759511 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t7gdn" event={"ID":"d391c8f2-cee0-4f56-871a-63d84766a943","Type":"ContainerDied","Data":"48be6d699a79f59a132b468c71b22f4fc670d3cce623cf1803acdd76567dd652"} Nov 29 07:54:32 crc kubenswrapper[4795]: I1129 07:54:32.303155 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7jqlz"] Nov 29 07:54:32 crc kubenswrapper[4795]: I1129 07:54:32.304969 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7jqlz" Nov 29 07:54:32 crc kubenswrapper[4795]: I1129 07:54:32.317961 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7jqlz"] Nov 29 07:54:32 crc kubenswrapper[4795]: I1129 07:54:32.407383 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a39c8bcf-1902-430f-bc47-ebb206116279-utilities\") pod \"certified-operators-7jqlz\" (UID: \"a39c8bcf-1902-430f-bc47-ebb206116279\") " pod="openshift-marketplace/certified-operators-7jqlz" Nov 29 07:54:32 crc kubenswrapper[4795]: I1129 07:54:32.408135 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bjmc\" (UniqueName: \"kubernetes.io/projected/a39c8bcf-1902-430f-bc47-ebb206116279-kube-api-access-6bjmc\") pod \"certified-operators-7jqlz\" (UID: \"a39c8bcf-1902-430f-bc47-ebb206116279\") " pod="openshift-marketplace/certified-operators-7jqlz" Nov 29 07:54:32 crc kubenswrapper[4795]: I1129 07:54:32.408385 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a39c8bcf-1902-430f-bc47-ebb206116279-catalog-content\") pod \"certified-operators-7jqlz\" (UID: \"a39c8bcf-1902-430f-bc47-ebb206116279\") " pod="openshift-marketplace/certified-operators-7jqlz" Nov 29 07:54:32 crc kubenswrapper[4795]: I1129 07:54:32.509797 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a39c8bcf-1902-430f-bc47-ebb206116279-utilities\") pod \"certified-operators-7jqlz\" (UID: \"a39c8bcf-1902-430f-bc47-ebb206116279\") " pod="openshift-marketplace/certified-operators-7jqlz" Nov 29 07:54:32 crc kubenswrapper[4795]: I1129 07:54:32.509889 4795 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6bjmc\" (UniqueName: \"kubernetes.io/projected/a39c8bcf-1902-430f-bc47-ebb206116279-kube-api-access-6bjmc\") pod \"certified-operators-7jqlz\" (UID: \"a39c8bcf-1902-430f-bc47-ebb206116279\") " pod="openshift-marketplace/certified-operators-7jqlz" Nov 29 07:54:32 crc kubenswrapper[4795]: I1129 07:54:32.509958 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a39c8bcf-1902-430f-bc47-ebb206116279-catalog-content\") pod \"certified-operators-7jqlz\" (UID: \"a39c8bcf-1902-430f-bc47-ebb206116279\") " pod="openshift-marketplace/certified-operators-7jqlz" Nov 29 07:54:32 crc kubenswrapper[4795]: I1129 07:54:32.510304 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a39c8bcf-1902-430f-bc47-ebb206116279-utilities\") pod \"certified-operators-7jqlz\" (UID: \"a39c8bcf-1902-430f-bc47-ebb206116279\") " pod="openshift-marketplace/certified-operators-7jqlz" Nov 29 07:54:32 crc kubenswrapper[4795]: I1129 07:54:32.510363 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a39c8bcf-1902-430f-bc47-ebb206116279-catalog-content\") pod \"certified-operators-7jqlz\" (UID: \"a39c8bcf-1902-430f-bc47-ebb206116279\") " pod="openshift-marketplace/certified-operators-7jqlz" Nov 29 07:54:32 crc kubenswrapper[4795]: I1129 07:54:32.540886 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bjmc\" (UniqueName: \"kubernetes.io/projected/a39c8bcf-1902-430f-bc47-ebb206116279-kube-api-access-6bjmc\") pod \"certified-operators-7jqlz\" (UID: \"a39c8bcf-1902-430f-bc47-ebb206116279\") " pod="openshift-marketplace/certified-operators-7jqlz" Nov 29 07:54:32 crc kubenswrapper[4795]: I1129 07:54:32.675723 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7jqlz" Nov 29 07:54:35 crc kubenswrapper[4795]: I1129 07:54:35.613430 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7jqlz"] Nov 29 07:54:36 crc kubenswrapper[4795]: I1129 07:54:36.800006 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-585cfc87fc-tt7jf" event={"ID":"48d9911d-3ed8-4474-9537-cbfcb462dd44","Type":"ContainerStarted","Data":"7c4816b6b0b46cab646ead2fe08ed969fb231f0e9d1ad99793361f2a622cfb60"} Nov 29 07:54:36 crc kubenswrapper[4795]: I1129 07:54:36.801326 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7jqlz" event={"ID":"a39c8bcf-1902-430f-bc47-ebb206116279","Type":"ContainerStarted","Data":"1ff4a374ea28d212df7d4db9a33c9e2ed217efa911f9d80bc0eb9b7fb0decd32"} Nov 29 07:54:36 crc kubenswrapper[4795]: I1129 07:54:36.801352 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7jqlz" event={"ID":"a39c8bcf-1902-430f-bc47-ebb206116279","Type":"ContainerStarted","Data":"66308e6efe3eb68f16fa5766564a1a8992fa8aeef5ac86c89e207c801aa855b7"} Nov 29 07:54:36 crc kubenswrapper[4795]: I1129 07:54:36.803133 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t7gdn" event={"ID":"d391c8f2-cee0-4f56-871a-63d84766a943","Type":"ContainerStarted","Data":"b4d7bbeda716a0fe28240a6e2e9eef9eb22ac3d9654fbc58664676e8e14abeb4"} Nov 29 07:54:36 crc kubenswrapper[4795]: I1129 07:54:36.805374 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gkz4q" event={"ID":"4374d966-3bb0-4da2-9a6b-3827abf0789e","Type":"ContainerStarted","Data":"4569ee3e55ab4015a9b4467d304e0286607e7d5c1e595a82f619a9bbf48e4e3f"} Nov 29 07:54:37 crc kubenswrapper[4795]: I1129 07:54:37.813249 4795 generic.go:334] "Generic 
(PLEG): container finished" podID="a39c8bcf-1902-430f-bc47-ebb206116279" containerID="1ff4a374ea28d212df7d4db9a33c9e2ed217efa911f9d80bc0eb9b7fb0decd32" exitCode=0 Nov 29 07:54:37 crc kubenswrapper[4795]: I1129 07:54:37.813326 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7jqlz" event={"ID":"a39c8bcf-1902-430f-bc47-ebb206116279","Type":"ContainerDied","Data":"1ff4a374ea28d212df7d4db9a33c9e2ed217efa911f9d80bc0eb9b7fb0decd32"} Nov 29 07:54:37 crc kubenswrapper[4795]: I1129 07:54:37.816676 4795 generic.go:334] "Generic (PLEG): container finished" podID="4374d966-3bb0-4da2-9a6b-3827abf0789e" containerID="4569ee3e55ab4015a9b4467d304e0286607e7d5c1e595a82f619a9bbf48e4e3f" exitCode=0 Nov 29 07:54:37 crc kubenswrapper[4795]: I1129 07:54:37.816769 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gkz4q" event={"ID":"4374d966-3bb0-4da2-9a6b-3827abf0789e","Type":"ContainerDied","Data":"4569ee3e55ab4015a9b4467d304e0286607e7d5c1e595a82f619a9bbf48e4e3f"} Nov 29 07:54:37 crc kubenswrapper[4795]: I1129 07:54:37.817353 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-585cfc87fc-tt7jf" Nov 29 07:54:37 crc kubenswrapper[4795]: I1129 07:54:37.820039 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-585cfc87fc-tt7jf" Nov 29 07:54:37 crc kubenswrapper[4795]: I1129 07:54:37.873112 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-585cfc87fc-tt7jf" podStartSLOduration=3.076310218 podStartE2EDuration="23.873089677s" podCreationTimestamp="2025-11-29 07:54:14 +0000 UTC" firstStartedPulling="2025-11-29 07:54:15.634671331 +0000 UTC m=+901.610247121" lastFinishedPulling="2025-11-29 07:54:36.43145079 +0000 UTC m=+922.407026580" 
observedRunningTime="2025-11-29 07:54:37.868194297 +0000 UTC m=+923.843770087" watchObservedRunningTime="2025-11-29 07:54:37.873089677 +0000 UTC m=+923.848665467" Nov 29 07:54:37 crc kubenswrapper[4795]: I1129 07:54:37.900599 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t7gdn" podStartSLOduration=3.196371192 podStartE2EDuration="10.900571501s" podCreationTimestamp="2025-11-29 07:54:27 +0000 UTC" firstStartedPulling="2025-11-29 07:54:28.676900354 +0000 UTC m=+914.652476144" lastFinishedPulling="2025-11-29 07:54:36.381100663 +0000 UTC m=+922.356676453" observedRunningTime="2025-11-29 07:54:37.90019418 +0000 UTC m=+923.875769980" watchObservedRunningTime="2025-11-29 07:54:37.900571501 +0000 UTC m=+923.876147291" Nov 29 07:54:38 crc kubenswrapper[4795]: I1129 07:54:38.824803 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gkz4q" event={"ID":"4374d966-3bb0-4da2-9a6b-3827abf0789e","Type":"ContainerStarted","Data":"a27abe1d5486c68d03c268b85c140a5e68456426ee855f8b9b93bbbced1a787e"} Nov 29 07:54:38 crc kubenswrapper[4795]: I1129 07:54:38.827751 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7jqlz" event={"ID":"a39c8bcf-1902-430f-bc47-ebb206116279","Type":"ContainerStarted","Data":"009203feedf65c73daba0a5101d58c1b56cac0f52269c93ddbd63d0cb333ca5c"} Nov 29 07:54:38 crc kubenswrapper[4795]: I1129 07:54:38.850703 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gkz4q" podStartSLOduration=3.316362125 podStartE2EDuration="11.850683896s" podCreationTimestamp="2025-11-29 07:54:27 +0000 UTC" firstStartedPulling="2025-11-29 07:54:29.739886609 +0000 UTC m=+915.715462399" lastFinishedPulling="2025-11-29 07:54:38.27420838 +0000 UTC m=+924.249784170" observedRunningTime="2025-11-29 07:54:38.849353798 +0000 UTC m=+924.824929588" 
watchObservedRunningTime="2025-11-29 07:54:38.850683896 +0000 UTC m=+924.826259686" Nov 29 07:54:41 crc kubenswrapper[4795]: I1129 07:54:41.846907 4795 generic.go:334] "Generic (PLEG): container finished" podID="a39c8bcf-1902-430f-bc47-ebb206116279" containerID="009203feedf65c73daba0a5101d58c1b56cac0f52269c93ddbd63d0cb333ca5c" exitCode=0 Nov 29 07:54:41 crc kubenswrapper[4795]: I1129 07:54:41.846987 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7jqlz" event={"ID":"a39c8bcf-1902-430f-bc47-ebb206116279","Type":"ContainerDied","Data":"009203feedf65c73daba0a5101d58c1b56cac0f52269c93ddbd63d0cb333ca5c"} Nov 29 07:54:42 crc kubenswrapper[4795]: I1129 07:54:42.457099 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Nov 29 07:54:42 crc kubenswrapper[4795]: I1129 07:54:42.458425 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Nov 29 07:54:42 crc kubenswrapper[4795]: I1129 07:54:42.460190 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Nov 29 07:54:42 crc kubenswrapper[4795]: I1129 07:54:42.460571 4795 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-9prrd" Nov 29 07:54:42 crc kubenswrapper[4795]: I1129 07:54:42.461249 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Nov 29 07:54:42 crc kubenswrapper[4795]: I1129 07:54:42.473162 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Nov 29 07:54:42 crc kubenswrapper[4795]: I1129 07:54:42.650403 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj26m\" (UniqueName: \"kubernetes.io/projected/46396e38-6379-4031-ad43-c4a947a3954d-kube-api-access-qj26m\") pod \"minio\" (UID: \"46396e38-6379-4031-ad43-c4a947a3954d\") " pod="minio-dev/minio" Nov 29 07:54:42 
crc kubenswrapper[4795]: I1129 07:54:42.650801 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-16bf8146-661a-43d7-8e88-50e28bad4702\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-16bf8146-661a-43d7-8e88-50e28bad4702\") pod \"minio\" (UID: \"46396e38-6379-4031-ad43-c4a947a3954d\") " pod="minio-dev/minio" Nov 29 07:54:42 crc kubenswrapper[4795]: I1129 07:54:42.753015 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj26m\" (UniqueName: \"kubernetes.io/projected/46396e38-6379-4031-ad43-c4a947a3954d-kube-api-access-qj26m\") pod \"minio\" (UID: \"46396e38-6379-4031-ad43-c4a947a3954d\") " pod="minio-dev/minio" Nov 29 07:54:42 crc kubenswrapper[4795]: I1129 07:54:42.753131 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-16bf8146-661a-43d7-8e88-50e28bad4702\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-16bf8146-661a-43d7-8e88-50e28bad4702\") pod \"minio\" (UID: \"46396e38-6379-4031-ad43-c4a947a3954d\") " pod="minio-dev/minio" Nov 29 07:54:42 crc kubenswrapper[4795]: I1129 07:54:42.756267 4795 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 29 07:54:42 crc kubenswrapper[4795]: I1129 07:54:42.756295 4795 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-16bf8146-661a-43d7-8e88-50e28bad4702\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-16bf8146-661a-43d7-8e88-50e28bad4702\") pod \"minio\" (UID: \"46396e38-6379-4031-ad43-c4a947a3954d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d2f5be3c84994a18a29d28c802338585461d98c167529294fece2981358068f2/globalmount\"" pod="minio-dev/minio" Nov 29 07:54:42 crc kubenswrapper[4795]: I1129 07:54:42.782903 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-16bf8146-661a-43d7-8e88-50e28bad4702\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-16bf8146-661a-43d7-8e88-50e28bad4702\") pod \"minio\" (UID: \"46396e38-6379-4031-ad43-c4a947a3954d\") " pod="minio-dev/minio" Nov 29 07:54:42 crc kubenswrapper[4795]: I1129 07:54:42.787694 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj26m\" (UniqueName: \"kubernetes.io/projected/46396e38-6379-4031-ad43-c4a947a3954d-kube-api-access-qj26m\") pod \"minio\" (UID: \"46396e38-6379-4031-ad43-c4a947a3954d\") " pod="minio-dev/minio" Nov 29 07:54:43 crc kubenswrapper[4795]: I1129 07:54:43.083872 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Nov 29 07:54:43 crc kubenswrapper[4795]: I1129 07:54:43.861862 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7jqlz" event={"ID":"a39c8bcf-1902-430f-bc47-ebb206116279","Type":"ContainerStarted","Data":"fabc4810dfe21ae7b96a9a7265806208e2cd60c9895aacb7055012cb9cca5bb0"} Nov 29 07:54:43 crc kubenswrapper[4795]: I1129 07:54:43.885204 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7jqlz" podStartSLOduration=6.736458857 podStartE2EDuration="11.885184402s" podCreationTimestamp="2025-11-29 07:54:32 +0000 UTC" firstStartedPulling="2025-11-29 07:54:37.814695151 +0000 UTC m=+923.790270941" lastFinishedPulling="2025-11-29 07:54:42.963420706 +0000 UTC m=+928.938996486" observedRunningTime="2025-11-29 07:54:43.882893746 +0000 UTC m=+929.858469536" watchObservedRunningTime="2025-11-29 07:54:43.885184402 +0000 UTC m=+929.860760192" Nov 29 07:54:43 crc kubenswrapper[4795]: I1129 07:54:43.899194 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Nov 29 07:54:44 crc kubenswrapper[4795]: I1129 07:54:44.867938 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"46396e38-6379-4031-ad43-c4a947a3954d","Type":"ContainerStarted","Data":"4bc9e5d5b6ad9707c67daa9a2370c69b6574c48db78ce4e4fdbf4c3bb161cbac"} Nov 29 07:54:47 crc kubenswrapper[4795]: I1129 07:54:47.428180 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t7gdn" Nov 29 07:54:47 crc kubenswrapper[4795]: I1129 07:54:47.428550 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-t7gdn" Nov 29 07:54:47 crc kubenswrapper[4795]: I1129 07:54:47.519344 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-t7gdn" Nov 29 07:54:47 crc kubenswrapper[4795]: I1129 07:54:47.930145 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-t7gdn" Nov 29 07:54:48 crc kubenswrapper[4795]: I1129 07:54:48.235463 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gkz4q" Nov 29 07:54:48 crc kubenswrapper[4795]: I1129 07:54:48.235527 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gkz4q" Nov 29 07:54:48 crc kubenswrapper[4795]: I1129 07:54:48.283290 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gkz4q" Nov 29 07:54:48 crc kubenswrapper[4795]: I1129 07:54:48.937410 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gkz4q" Nov 29 07:54:50 crc kubenswrapper[4795]: I1129 07:54:50.499296 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t7gdn"] Nov 29 07:54:50 crc kubenswrapper[4795]: I1129 07:54:50.499998 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-t7gdn" podUID="d391c8f2-cee0-4f56-871a-63d84766a943" containerName="registry-server" containerID="cri-o://b4d7bbeda716a0fe28240a6e2e9eef9eb22ac3d9654fbc58664676e8e14abeb4" gracePeriod=2 Nov 29 07:54:51 crc kubenswrapper[4795]: I1129 07:54:51.787663 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t7gdn" Nov 29 07:54:51 crc kubenswrapper[4795]: I1129 07:54:51.880969 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d391c8f2-cee0-4f56-871a-63d84766a943-utilities\") pod \"d391c8f2-cee0-4f56-871a-63d84766a943\" (UID: \"d391c8f2-cee0-4f56-871a-63d84766a943\") " Nov 29 07:54:51 crc kubenswrapper[4795]: I1129 07:54:51.881032 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nshcg\" (UniqueName: \"kubernetes.io/projected/d391c8f2-cee0-4f56-871a-63d84766a943-kube-api-access-nshcg\") pod \"d391c8f2-cee0-4f56-871a-63d84766a943\" (UID: \"d391c8f2-cee0-4f56-871a-63d84766a943\") " Nov 29 07:54:51 crc kubenswrapper[4795]: I1129 07:54:51.881084 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d391c8f2-cee0-4f56-871a-63d84766a943-catalog-content\") pod \"d391c8f2-cee0-4f56-871a-63d84766a943\" (UID: \"d391c8f2-cee0-4f56-871a-63d84766a943\") " Nov 29 07:54:51 crc kubenswrapper[4795]: I1129 07:54:51.882129 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d391c8f2-cee0-4f56-871a-63d84766a943-utilities" (OuterVolumeSpecName: "utilities") pod "d391c8f2-cee0-4f56-871a-63d84766a943" (UID: "d391c8f2-cee0-4f56-871a-63d84766a943"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:54:51 crc kubenswrapper[4795]: I1129 07:54:51.891408 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d391c8f2-cee0-4f56-871a-63d84766a943-kube-api-access-nshcg" (OuterVolumeSpecName: "kube-api-access-nshcg") pod "d391c8f2-cee0-4f56-871a-63d84766a943" (UID: "d391c8f2-cee0-4f56-871a-63d84766a943"). InnerVolumeSpecName "kube-api-access-nshcg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:54:51 crc kubenswrapper[4795]: I1129 07:54:51.898199 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gkz4q"] Nov 29 07:54:51 crc kubenswrapper[4795]: I1129 07:54:51.898441 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gkz4q" podUID="4374d966-3bb0-4da2-9a6b-3827abf0789e" containerName="registry-server" containerID="cri-o://a27abe1d5486c68d03c268b85c140a5e68456426ee855f8b9b93bbbced1a787e" gracePeriod=2 Nov 29 07:54:51 crc kubenswrapper[4795]: I1129 07:54:51.901499 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d391c8f2-cee0-4f56-871a-63d84766a943-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d391c8f2-cee0-4f56-871a-63d84766a943" (UID: "d391c8f2-cee0-4f56-871a-63d84766a943"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:54:51 crc kubenswrapper[4795]: I1129 07:54:51.959153 4795 generic.go:334] "Generic (PLEG): container finished" podID="d391c8f2-cee0-4f56-871a-63d84766a943" containerID="b4d7bbeda716a0fe28240a6e2e9eef9eb22ac3d9654fbc58664676e8e14abeb4" exitCode=0 Nov 29 07:54:51 crc kubenswrapper[4795]: I1129 07:54:51.959248 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t7gdn" event={"ID":"d391c8f2-cee0-4f56-871a-63d84766a943","Type":"ContainerDied","Data":"b4d7bbeda716a0fe28240a6e2e9eef9eb22ac3d9654fbc58664676e8e14abeb4"} Nov 29 07:54:51 crc kubenswrapper[4795]: I1129 07:54:51.959300 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t7gdn" event={"ID":"d391c8f2-cee0-4f56-871a-63d84766a943","Type":"ContainerDied","Data":"c47458be0cff85ff02e6ffe18bd9d3d72b247e1c5b7b8bd366260fa576f2bfef"} Nov 29 07:54:51 crc kubenswrapper[4795]: I1129 
07:54:51.959331 4795 scope.go:117] "RemoveContainer" containerID="b4d7bbeda716a0fe28240a6e2e9eef9eb22ac3d9654fbc58664676e8e14abeb4" Nov 29 07:54:51 crc kubenswrapper[4795]: I1129 07:54:51.959710 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t7gdn" Nov 29 07:54:51 crc kubenswrapper[4795]: I1129 07:54:51.995701 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d391c8f2-cee0-4f56-871a-63d84766a943-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:51 crc kubenswrapper[4795]: I1129 07:54:51.995784 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d391c8f2-cee0-4f56-871a-63d84766a943-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:51 crc kubenswrapper[4795]: I1129 07:54:51.995796 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nshcg\" (UniqueName: \"kubernetes.io/projected/d391c8f2-cee0-4f56-871a-63d84766a943-kube-api-access-nshcg\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:52 crc kubenswrapper[4795]: I1129 07:54:52.019107 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t7gdn"] Nov 29 07:54:52 crc kubenswrapper[4795]: I1129 07:54:52.040224 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-t7gdn"] Nov 29 07:54:52 crc kubenswrapper[4795]: I1129 07:54:52.285385 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d391c8f2-cee0-4f56-871a-63d84766a943" path="/var/lib/kubelet/pods/d391c8f2-cee0-4f56-871a-63d84766a943/volumes" Nov 29 07:54:52 crc kubenswrapper[4795]: I1129 07:54:52.676685 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7jqlz" Nov 29 07:54:52 crc kubenswrapper[4795]: I1129 07:54:52.676743 4795 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7jqlz" Nov 29 07:54:52 crc kubenswrapper[4795]: I1129 07:54:52.721562 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7jqlz" Nov 29 07:54:52 crc kubenswrapper[4795]: I1129 07:54:52.994381 4795 generic.go:334] "Generic (PLEG): container finished" podID="4374d966-3bb0-4da2-9a6b-3827abf0789e" containerID="a27abe1d5486c68d03c268b85c140a5e68456426ee855f8b9b93bbbced1a787e" exitCode=0 Nov 29 07:54:52 crc kubenswrapper[4795]: I1129 07:54:52.994431 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gkz4q" event={"ID":"4374d966-3bb0-4da2-9a6b-3827abf0789e","Type":"ContainerDied","Data":"a27abe1d5486c68d03c268b85c140a5e68456426ee855f8b9b93bbbced1a787e"} Nov 29 07:54:53 crc kubenswrapper[4795]: I1129 07:54:53.042012 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7jqlz" Nov 29 07:54:53 crc kubenswrapper[4795]: I1129 07:54:53.516525 4795 scope.go:117] "RemoveContainer" containerID="48be6d699a79f59a132b468c71b22f4fc670d3cce623cf1803acdd76567dd652" Nov 29 07:54:53 crc kubenswrapper[4795]: I1129 07:54:53.575485 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gkz4q" Nov 29 07:54:53 crc kubenswrapper[4795]: I1129 07:54:53.629027 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tvgg\" (UniqueName: \"kubernetes.io/projected/4374d966-3bb0-4da2-9a6b-3827abf0789e-kube-api-access-9tvgg\") pod \"4374d966-3bb0-4da2-9a6b-3827abf0789e\" (UID: \"4374d966-3bb0-4da2-9a6b-3827abf0789e\") " Nov 29 07:54:53 crc kubenswrapper[4795]: I1129 07:54:53.629085 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4374d966-3bb0-4da2-9a6b-3827abf0789e-utilities\") pod \"4374d966-3bb0-4da2-9a6b-3827abf0789e\" (UID: \"4374d966-3bb0-4da2-9a6b-3827abf0789e\") " Nov 29 07:54:53 crc kubenswrapper[4795]: I1129 07:54:53.629115 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4374d966-3bb0-4da2-9a6b-3827abf0789e-catalog-content\") pod \"4374d966-3bb0-4da2-9a6b-3827abf0789e\" (UID: \"4374d966-3bb0-4da2-9a6b-3827abf0789e\") " Nov 29 07:54:53 crc kubenswrapper[4795]: I1129 07:54:53.630157 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4374d966-3bb0-4da2-9a6b-3827abf0789e-utilities" (OuterVolumeSpecName: "utilities") pod "4374d966-3bb0-4da2-9a6b-3827abf0789e" (UID: "4374d966-3bb0-4da2-9a6b-3827abf0789e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:54:53 crc kubenswrapper[4795]: I1129 07:54:53.634878 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4374d966-3bb0-4da2-9a6b-3827abf0789e-kube-api-access-9tvgg" (OuterVolumeSpecName: "kube-api-access-9tvgg") pod "4374d966-3bb0-4da2-9a6b-3827abf0789e" (UID: "4374d966-3bb0-4da2-9a6b-3827abf0789e"). InnerVolumeSpecName "kube-api-access-9tvgg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:54:53 crc kubenswrapper[4795]: I1129 07:54:53.680822 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4374d966-3bb0-4da2-9a6b-3827abf0789e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4374d966-3bb0-4da2-9a6b-3827abf0789e" (UID: "4374d966-3bb0-4da2-9a6b-3827abf0789e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:54:53 crc kubenswrapper[4795]: I1129 07:54:53.730058 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4374d966-3bb0-4da2-9a6b-3827abf0789e-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:53 crc kubenswrapper[4795]: I1129 07:54:53.730099 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4374d966-3bb0-4da2-9a6b-3827abf0789e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:53 crc kubenswrapper[4795]: I1129 07:54:53.730114 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tvgg\" (UniqueName: \"kubernetes.io/projected/4374d966-3bb0-4da2-9a6b-3827abf0789e-kube-api-access-9tvgg\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:53 crc kubenswrapper[4795]: I1129 07:54:53.807448 4795 scope.go:117] "RemoveContainer" containerID="bba801d57dde641cdc7609569745cb63db75980acd7323a3b15bc91a8e1712bc" Nov 29 07:54:53 crc kubenswrapper[4795]: I1129 07:54:53.846660 4795 scope.go:117] "RemoveContainer" containerID="b4d7bbeda716a0fe28240a6e2e9eef9eb22ac3d9654fbc58664676e8e14abeb4" Nov 29 07:54:53 crc kubenswrapper[4795]: E1129 07:54:53.846983 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4d7bbeda716a0fe28240a6e2e9eef9eb22ac3d9654fbc58664676e8e14abeb4\": container with ID starting with 
b4d7bbeda716a0fe28240a6e2e9eef9eb22ac3d9654fbc58664676e8e14abeb4 not found: ID does not exist" containerID="b4d7bbeda716a0fe28240a6e2e9eef9eb22ac3d9654fbc58664676e8e14abeb4" Nov 29 07:54:53 crc kubenswrapper[4795]: I1129 07:54:53.847014 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4d7bbeda716a0fe28240a6e2e9eef9eb22ac3d9654fbc58664676e8e14abeb4"} err="failed to get container status \"b4d7bbeda716a0fe28240a6e2e9eef9eb22ac3d9654fbc58664676e8e14abeb4\": rpc error: code = NotFound desc = could not find container \"b4d7bbeda716a0fe28240a6e2e9eef9eb22ac3d9654fbc58664676e8e14abeb4\": container with ID starting with b4d7bbeda716a0fe28240a6e2e9eef9eb22ac3d9654fbc58664676e8e14abeb4 not found: ID does not exist" Nov 29 07:54:53 crc kubenswrapper[4795]: I1129 07:54:53.847034 4795 scope.go:117] "RemoveContainer" containerID="48be6d699a79f59a132b468c71b22f4fc670d3cce623cf1803acdd76567dd652" Nov 29 07:54:53 crc kubenswrapper[4795]: E1129 07:54:53.847341 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48be6d699a79f59a132b468c71b22f4fc670d3cce623cf1803acdd76567dd652\": container with ID starting with 48be6d699a79f59a132b468c71b22f4fc670d3cce623cf1803acdd76567dd652 not found: ID does not exist" containerID="48be6d699a79f59a132b468c71b22f4fc670d3cce623cf1803acdd76567dd652" Nov 29 07:54:53 crc kubenswrapper[4795]: I1129 07:54:53.847390 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48be6d699a79f59a132b468c71b22f4fc670d3cce623cf1803acdd76567dd652"} err="failed to get container status \"48be6d699a79f59a132b468c71b22f4fc670d3cce623cf1803acdd76567dd652\": rpc error: code = NotFound desc = could not find container \"48be6d699a79f59a132b468c71b22f4fc670d3cce623cf1803acdd76567dd652\": container with ID starting with 48be6d699a79f59a132b468c71b22f4fc670d3cce623cf1803acdd76567dd652 not found: ID does not 
exist" Nov 29 07:54:53 crc kubenswrapper[4795]: I1129 07:54:53.847412 4795 scope.go:117] "RemoveContainer" containerID="bba801d57dde641cdc7609569745cb63db75980acd7323a3b15bc91a8e1712bc" Nov 29 07:54:53 crc kubenswrapper[4795]: E1129 07:54:53.847675 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bba801d57dde641cdc7609569745cb63db75980acd7323a3b15bc91a8e1712bc\": container with ID starting with bba801d57dde641cdc7609569745cb63db75980acd7323a3b15bc91a8e1712bc not found: ID does not exist" containerID="bba801d57dde641cdc7609569745cb63db75980acd7323a3b15bc91a8e1712bc" Nov 29 07:54:53 crc kubenswrapper[4795]: I1129 07:54:53.847706 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bba801d57dde641cdc7609569745cb63db75980acd7323a3b15bc91a8e1712bc"} err="failed to get container status \"bba801d57dde641cdc7609569745cb63db75980acd7323a3b15bc91a8e1712bc\": rpc error: code = NotFound desc = could not find container \"bba801d57dde641cdc7609569745cb63db75980acd7323a3b15bc91a8e1712bc\": container with ID starting with bba801d57dde641cdc7609569745cb63db75980acd7323a3b15bc91a8e1712bc not found: ID does not exist" Nov 29 07:54:54 crc kubenswrapper[4795]: I1129 07:54:54.002851 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gkz4q" event={"ID":"4374d966-3bb0-4da2-9a6b-3827abf0789e","Type":"ContainerDied","Data":"d22f7cb7882962a537013b63bf9f209aaf3f02b6323c7999f70e2467aed56a7d"} Nov 29 07:54:54 crc kubenswrapper[4795]: I1129 07:54:54.003194 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gkz4q" Nov 29 07:54:54 crc kubenswrapper[4795]: I1129 07:54:54.003230 4795 scope.go:117] "RemoveContainer" containerID="a27abe1d5486c68d03c268b85c140a5e68456426ee855f8b9b93bbbced1a787e" Nov 29 07:54:54 crc kubenswrapper[4795]: I1129 07:54:54.006298 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"46396e38-6379-4031-ad43-c4a947a3954d","Type":"ContainerStarted","Data":"8aa5f623e3f828c01fd5eaa1494e0024c42cfe2031e2691aa6e2b07fd3059aff"} Nov 29 07:54:54 crc kubenswrapper[4795]: I1129 07:54:54.021949 4795 scope.go:117] "RemoveContainer" containerID="4569ee3e55ab4015a9b4467d304e0286607e7d5c1e595a82f619a9bbf48e4e3f" Nov 29 07:54:54 crc kubenswrapper[4795]: I1129 07:54:54.047640 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=5.097647313 podStartE2EDuration="15.04761954s" podCreationTimestamp="2025-11-29 07:54:39 +0000 UTC" firstStartedPulling="2025-11-29 07:54:43.906728876 +0000 UTC m=+929.882304656" lastFinishedPulling="2025-11-29 07:54:53.856701093 +0000 UTC m=+939.832276883" observedRunningTime="2025-11-29 07:54:54.027915508 +0000 UTC m=+940.003491288" watchObservedRunningTime="2025-11-29 07:54:54.04761954 +0000 UTC m=+940.023195340" Nov 29 07:54:54 crc kubenswrapper[4795]: I1129 07:54:54.054137 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gkz4q"] Nov 29 07:54:54 crc kubenswrapper[4795]: I1129 07:54:54.054185 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gkz4q"] Nov 29 07:54:54 crc kubenswrapper[4795]: I1129 07:54:54.170045 4795 scope.go:117] "RemoveContainer" containerID="1a7e33a22a5dbebddb641d4374233cd5dab3380344ee808a71736f075153a0e2" Nov 29 07:54:54 crc kubenswrapper[4795]: I1129 07:54:54.282927 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="4374d966-3bb0-4da2-9a6b-3827abf0789e" path="/var/lib/kubelet/pods/4374d966-3bb0-4da2-9a6b-3827abf0789e/volumes" Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.093867 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7jqlz"] Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.094370 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7jqlz" podUID="a39c8bcf-1902-430f-bc47-ebb206116279" containerName="registry-server" containerID="cri-o://fabc4810dfe21ae7b96a9a7265806208e2cd60c9895aacb7055012cb9cca5bb0" gracePeriod=2 Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.443692 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7jqlz" Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.587679 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a39c8bcf-1902-430f-bc47-ebb206116279-catalog-content\") pod \"a39c8bcf-1902-430f-bc47-ebb206116279\" (UID: \"a39c8bcf-1902-430f-bc47-ebb206116279\") " Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.587757 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a39c8bcf-1902-430f-bc47-ebb206116279-utilities\") pod \"a39c8bcf-1902-430f-bc47-ebb206116279\" (UID: \"a39c8bcf-1902-430f-bc47-ebb206116279\") " Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.587856 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bjmc\" (UniqueName: \"kubernetes.io/projected/a39c8bcf-1902-430f-bc47-ebb206116279-kube-api-access-6bjmc\") pod \"a39c8bcf-1902-430f-bc47-ebb206116279\" (UID: \"a39c8bcf-1902-430f-bc47-ebb206116279\") " Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.588746 4795 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a39c8bcf-1902-430f-bc47-ebb206116279-utilities" (OuterVolumeSpecName: "utilities") pod "a39c8bcf-1902-430f-bc47-ebb206116279" (UID: "a39c8bcf-1902-430f-bc47-ebb206116279"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.592944 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a39c8bcf-1902-430f-bc47-ebb206116279-kube-api-access-6bjmc" (OuterVolumeSpecName: "kube-api-access-6bjmc") pod "a39c8bcf-1902-430f-bc47-ebb206116279" (UID: "a39c8bcf-1902-430f-bc47-ebb206116279"). InnerVolumeSpecName "kube-api-access-6bjmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.640638 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a39c8bcf-1902-430f-bc47-ebb206116279-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a39c8bcf-1902-430f-bc47-ebb206116279" (UID: "a39c8bcf-1902-430f-bc47-ebb206116279"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.689446 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a39c8bcf-1902-430f-bc47-ebb206116279-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.689500 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a39c8bcf-1902-430f-bc47-ebb206116279-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.689512 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bjmc\" (UniqueName: \"kubernetes.io/projected/a39c8bcf-1902-430f-bc47-ebb206116279-kube-api-access-6bjmc\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.730010 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-2j6fx"] Nov 29 07:54:57 crc kubenswrapper[4795]: E1129 07:54:57.730352 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a39c8bcf-1902-430f-bc47-ebb206116279" containerName="extract-utilities" Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.730370 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="a39c8bcf-1902-430f-bc47-ebb206116279" containerName="extract-utilities" Nov 29 07:54:57 crc kubenswrapper[4795]: E1129 07:54:57.730378 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4374d966-3bb0-4da2-9a6b-3827abf0789e" containerName="extract-content" Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.730385 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="4374d966-3bb0-4da2-9a6b-3827abf0789e" containerName="extract-content" Nov 29 07:54:57 crc kubenswrapper[4795]: E1129 07:54:57.730397 4795 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4374d966-3bb0-4da2-9a6b-3827abf0789e" containerName="registry-server" Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.730404 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="4374d966-3bb0-4da2-9a6b-3827abf0789e" containerName="registry-server" Nov 29 07:54:57 crc kubenswrapper[4795]: E1129 07:54:57.730414 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a39c8bcf-1902-430f-bc47-ebb206116279" containerName="registry-server" Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.730421 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="a39c8bcf-1902-430f-bc47-ebb206116279" containerName="registry-server" Nov 29 07:54:57 crc kubenswrapper[4795]: E1129 07:54:57.730432 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d391c8f2-cee0-4f56-871a-63d84766a943" containerName="extract-content" Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.730437 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="d391c8f2-cee0-4f56-871a-63d84766a943" containerName="extract-content" Nov 29 07:54:57 crc kubenswrapper[4795]: E1129 07:54:57.730446 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4374d966-3bb0-4da2-9a6b-3827abf0789e" containerName="extract-utilities" Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.730454 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="4374d966-3bb0-4da2-9a6b-3827abf0789e" containerName="extract-utilities" Nov 29 07:54:57 crc kubenswrapper[4795]: E1129 07:54:57.730461 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d391c8f2-cee0-4f56-871a-63d84766a943" containerName="extract-utilities" Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.730467 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="d391c8f2-cee0-4f56-871a-63d84766a943" containerName="extract-utilities" Nov 29 07:54:57 crc kubenswrapper[4795]: E1129 07:54:57.730493 4795 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d391c8f2-cee0-4f56-871a-63d84766a943" containerName="registry-server" Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.730501 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="d391c8f2-cee0-4f56-871a-63d84766a943" containerName="registry-server" Nov 29 07:54:57 crc kubenswrapper[4795]: E1129 07:54:57.730508 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a39c8bcf-1902-430f-bc47-ebb206116279" containerName="extract-content" Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.730516 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="a39c8bcf-1902-430f-bc47-ebb206116279" containerName="extract-content" Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.730651 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="a39c8bcf-1902-430f-bc47-ebb206116279" containerName="registry-server" Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.730666 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="4374d966-3bb0-4da2-9a6b-3827abf0789e" containerName="registry-server" Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.730681 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="d391c8f2-cee0-4f56-871a-63d84766a943" containerName="registry-server" Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.731272 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-76cc67bf56-2j6fx" Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.738674 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-2gwxt" Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.738860 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.738743 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.738785 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.739235 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.740132 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-2j6fx"] Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.883284 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-jqcx2"] Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.884708 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-5895d59bb8-jqcx2" Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.886685 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.887056 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.887500 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.892071 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2fd41086-3cec-46c6-a4ed-82885461095c-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-2j6fx\" (UID: \"2fd41086-3cec-46c6-a4ed-82885461095c\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-2j6fx" Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.892122 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fd41086-3cec-46c6-a4ed-82885461095c-config\") pod \"logging-loki-distributor-76cc67bf56-2j6fx\" (UID: \"2fd41086-3cec-46c6-a4ed-82885461095c\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-2j6fx" Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.892224 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcr5j\" (UniqueName: \"kubernetes.io/projected/2fd41086-3cec-46c6-a4ed-82885461095c-kube-api-access-lcr5j\") pod \"logging-loki-distributor-76cc67bf56-2j6fx\" (UID: \"2fd41086-3cec-46c6-a4ed-82885461095c\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-2j6fx" Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 
07:54:57.892265 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/2fd41086-3cec-46c6-a4ed-82885461095c-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-2j6fx\" (UID: \"2fd41086-3cec-46c6-a4ed-82885461095c\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-2j6fx" Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.892381 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/2fd41086-3cec-46c6-a4ed-82885461095c-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-2j6fx\" (UID: \"2fd41086-3cec-46c6-a4ed-82885461095c\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-2j6fx" Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.892997 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-jqcx2"] Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.969573 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-g2h7b"] Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.970669 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-g2h7b" Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.974901 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.978742 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Nov 29 07:54:57 crc kubenswrapper[4795]: I1129 07:54:57.988653 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-g2h7b"] Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:57.994215 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/996554b0-3876-4c69-be10-a5f2c4a5c2e4-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-jqcx2\" (UID: \"996554b0-3876-4c69-be10-a5f2c4a5c2e4\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-jqcx2" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:57.997972 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fd41086-3cec-46c6-a4ed-82885461095c-config\") pod \"logging-loki-distributor-76cc67bf56-2j6fx\" (UID: \"2fd41086-3cec-46c6-a4ed-82885461095c\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-2j6fx" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:57.998005 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2fd41086-3cec-46c6-a4ed-82885461095c-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-2j6fx\" (UID: \"2fd41086-3cec-46c6-a4ed-82885461095c\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-2j6fx" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 
07:54:57.998039 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9jnv\" (UniqueName: \"kubernetes.io/projected/996554b0-3876-4c69-be10-a5f2c4a5c2e4-kube-api-access-d9jnv\") pod \"logging-loki-querier-5895d59bb8-jqcx2\" (UID: \"996554b0-3876-4c69-be10-a5f2c4a5c2e4\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-jqcx2" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:57.998112 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/996554b0-3876-4c69-be10-a5f2c4a5c2e4-config\") pod \"logging-loki-querier-5895d59bb8-jqcx2\" (UID: \"996554b0-3876-4c69-be10-a5f2c4a5c2e4\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-jqcx2" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:57.998165 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/996554b0-3876-4c69-be10-a5f2c4a5c2e4-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-jqcx2\" (UID: \"996554b0-3876-4c69-be10-a5f2c4a5c2e4\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-jqcx2" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:57.998187 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcr5j\" (UniqueName: \"kubernetes.io/projected/2fd41086-3cec-46c6-a4ed-82885461095c-kube-api-access-lcr5j\") pod \"logging-loki-distributor-76cc67bf56-2j6fx\" (UID: \"2fd41086-3cec-46c6-a4ed-82885461095c\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-2j6fx" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:57.998228 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/2fd41086-3cec-46c6-a4ed-82885461095c-logging-loki-distributor-http\") pod 
\"logging-loki-distributor-76cc67bf56-2j6fx\" (UID: \"2fd41086-3cec-46c6-a4ed-82885461095c\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-2j6fx" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:57.998271 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/996554b0-3876-4c69-be10-a5f2c4a5c2e4-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-jqcx2\" (UID: \"996554b0-3876-4c69-be10-a5f2c4a5c2e4\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-jqcx2" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:57.998354 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/2fd41086-3cec-46c6-a4ed-82885461095c-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-2j6fx\" (UID: \"2fd41086-3cec-46c6-a4ed-82885461095c\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-2j6fx" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:57.998397 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/996554b0-3876-4c69-be10-a5f2c4a5c2e4-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-jqcx2\" (UID: \"996554b0-3876-4c69-be10-a5f2c4a5c2e4\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-jqcx2" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:57.999684 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2fd41086-3cec-46c6-a4ed-82885461095c-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-2j6fx\" (UID: \"2fd41086-3cec-46c6-a4ed-82885461095c\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-2j6fx" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 
07:54:57.999811 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fd41086-3cec-46c6-a4ed-82885461095c-config\") pod \"logging-loki-distributor-76cc67bf56-2j6fx\" (UID: \"2fd41086-3cec-46c6-a4ed-82885461095c\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-2j6fx" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.003068 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/2fd41086-3cec-46c6-a4ed-82885461095c-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-2j6fx\" (UID: \"2fd41086-3cec-46c6-a4ed-82885461095c\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-2j6fx" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.015190 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/2fd41086-3cec-46c6-a4ed-82885461095c-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-2j6fx\" (UID: \"2fd41086-3cec-46c6-a4ed-82885461095c\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-2j6fx" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.026461 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcr5j\" (UniqueName: \"kubernetes.io/projected/2fd41086-3cec-46c6-a4ed-82885461095c-kube-api-access-lcr5j\") pod \"logging-loki-distributor-76cc67bf56-2j6fx\" (UID: \"2fd41086-3cec-46c6-a4ed-82885461095c\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-2j6fx" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.034120 4795 generic.go:334] "Generic (PLEG): container finished" podID="a39c8bcf-1902-430f-bc47-ebb206116279" containerID="fabc4810dfe21ae7b96a9a7265806208e2cd60c9895aacb7055012cb9cca5bb0" exitCode=0 Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.034166 4795 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7jqlz" event={"ID":"a39c8bcf-1902-430f-bc47-ebb206116279","Type":"ContainerDied","Data":"fabc4810dfe21ae7b96a9a7265806208e2cd60c9895aacb7055012cb9cca5bb0"} Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.034168 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7jqlz" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.034194 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7jqlz" event={"ID":"a39c8bcf-1902-430f-bc47-ebb206116279","Type":"ContainerDied","Data":"66308e6efe3eb68f16fa5766564a1a8992fa8aeef5ac86c89e207c801aa855b7"} Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.034214 4795 scope.go:117] "RemoveContainer" containerID="fabc4810dfe21ae7b96a9a7265806208e2cd60c9895aacb7055012cb9cca5bb0" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.065227 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-76cc67bf56-2j6fx" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.067391 4795 scope.go:117] "RemoveContainer" containerID="009203feedf65c73daba0a5101d58c1b56cac0f52269c93ddbd63d0cb333ca5c" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.080337 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7jqlz"] Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.087518 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7jqlz"] Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.099686 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/996554b0-3876-4c69-be10-a5f2c4a5c2e4-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-jqcx2\" (UID: \"996554b0-3876-4c69-be10-a5f2c4a5c2e4\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-jqcx2" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.100734 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/996554b0-3876-4c69-be10-a5f2c4a5c2e4-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-jqcx2\" (UID: \"996554b0-3876-4c69-be10-a5f2c4a5c2e4\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-jqcx2" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.100819 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/73ef818d-4038-418e-87e6-a16224e788c5-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-g2h7b\" (UID: \"73ef818d-4038-418e-87e6-a16224e788c5\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-g2h7b" Nov 29 07:54:58 crc 
kubenswrapper[4795]: I1129 07:54:58.100850 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9jnv\" (UniqueName: \"kubernetes.io/projected/996554b0-3876-4c69-be10-a5f2c4a5c2e4-kube-api-access-d9jnv\") pod \"logging-loki-querier-5895d59bb8-jqcx2\" (UID: \"996554b0-3876-4c69-be10-a5f2c4a5c2e4\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-jqcx2" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.100885 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thf4l\" (UniqueName: \"kubernetes.io/projected/73ef818d-4038-418e-87e6-a16224e788c5-kube-api-access-thf4l\") pod \"logging-loki-query-frontend-84558f7c9f-g2h7b\" (UID: \"73ef818d-4038-418e-87e6-a16224e788c5\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-g2h7b" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.100938 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/996554b0-3876-4c69-be10-a5f2c4a5c2e4-config\") pod \"logging-loki-querier-5895d59bb8-jqcx2\" (UID: \"996554b0-3876-4c69-be10-a5f2c4a5c2e4\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-jqcx2" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.100974 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73ef818d-4038-418e-87e6-a16224e788c5-config\") pod \"logging-loki-query-frontend-84558f7c9f-g2h7b\" (UID: \"73ef818d-4038-418e-87e6-a16224e788c5\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-g2h7b" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.100996 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: 
\"kubernetes.io/secret/73ef818d-4038-418e-87e6-a16224e788c5-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-g2h7b\" (UID: \"73ef818d-4038-418e-87e6-a16224e788c5\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-g2h7b" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.101019 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/996554b0-3876-4c69-be10-a5f2c4a5c2e4-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-jqcx2\" (UID: \"996554b0-3876-4c69-be10-a5f2c4a5c2e4\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-jqcx2" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.101058 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/996554b0-3876-4c69-be10-a5f2c4a5c2e4-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-jqcx2\" (UID: \"996554b0-3876-4c69-be10-a5f2c4a5c2e4\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-jqcx2" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.101102 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73ef818d-4038-418e-87e6-a16224e788c5-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-g2h7b\" (UID: \"73ef818d-4038-418e-87e6-a16224e788c5\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-g2h7b" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.101815 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/996554b0-3876-4c69-be10-a5f2c4a5c2e4-config\") pod \"logging-loki-querier-5895d59bb8-jqcx2\" (UID: \"996554b0-3876-4c69-be10-a5f2c4a5c2e4\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-jqcx2" Nov 29 07:54:58 crc 
kubenswrapper[4795]: I1129 07:54:58.101923 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/996554b0-3876-4c69-be10-a5f2c4a5c2e4-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-jqcx2\" (UID: \"996554b0-3876-4c69-be10-a5f2c4a5c2e4\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-jqcx2" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.104236 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/996554b0-3876-4c69-be10-a5f2c4a5c2e4-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-jqcx2\" (UID: \"996554b0-3876-4c69-be10-a5f2c4a5c2e4\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-jqcx2" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.107777 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/996554b0-3876-4c69-be10-a5f2c4a5c2e4-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-jqcx2\" (UID: \"996554b0-3876-4c69-be10-a5f2c4a5c2e4\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-jqcx2" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.107796 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/996554b0-3876-4c69-be10-a5f2c4a5c2e4-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-jqcx2\" (UID: \"996554b0-3876-4c69-be10-a5f2c4a5c2e4\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-jqcx2" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.121407 4795 scope.go:117] "RemoveContainer" containerID="1ff4a374ea28d212df7d4db9a33c9e2ed217efa911f9d80bc0eb9b7fb0decd32" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.127659 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-d9jnv\" (UniqueName: \"kubernetes.io/projected/996554b0-3876-4c69-be10-a5f2c4a5c2e4-kube-api-access-d9jnv\") pod \"logging-loki-querier-5895d59bb8-jqcx2\" (UID: \"996554b0-3876-4c69-be10-a5f2c4a5c2e4\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-jqcx2" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.140456 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-575bf4587d-cslrg"] Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.141772 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-575bf4587d-cslrg" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.146915 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.147107 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.147217 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.147623 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.147752 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.198559 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-575bf4587d-fvfcs"] Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.200132 4795 scope.go:117] "RemoveContainer" containerID="fabc4810dfe21ae7b96a9a7265806208e2cd60c9895aacb7055012cb9cca5bb0" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.201909 4795 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-5895d59bb8-jqcx2" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.210110 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thf4l\" (UniqueName: \"kubernetes.io/projected/73ef818d-4038-418e-87e6-a16224e788c5-kube-api-access-thf4l\") pod \"logging-loki-query-frontend-84558f7c9f-g2h7b\" (UID: \"73ef818d-4038-418e-87e6-a16224e788c5\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-g2h7b" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.210209 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73ef818d-4038-418e-87e6-a16224e788c5-config\") pod \"logging-loki-query-frontend-84558f7c9f-g2h7b\" (UID: \"73ef818d-4038-418e-87e6-a16224e788c5\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-g2h7b" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.210241 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/73ef818d-4038-418e-87e6-a16224e788c5-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-g2h7b\" (UID: \"73ef818d-4038-418e-87e6-a16224e788c5\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-g2h7b" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.210331 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73ef818d-4038-418e-87e6-a16224e788c5-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-g2h7b\" (UID: \"73ef818d-4038-418e-87e6-a16224e788c5\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-g2h7b" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.210467 4795 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/73ef818d-4038-418e-87e6-a16224e788c5-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-g2h7b\" (UID: \"73ef818d-4038-418e-87e6-a16224e788c5\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-g2h7b" Nov 29 07:54:58 crc kubenswrapper[4795]: E1129 07:54:58.212516 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fabc4810dfe21ae7b96a9a7265806208e2cd60c9895aacb7055012cb9cca5bb0\": container with ID starting with fabc4810dfe21ae7b96a9a7265806208e2cd60c9895aacb7055012cb9cca5bb0 not found: ID does not exist" containerID="fabc4810dfe21ae7b96a9a7265806208e2cd60c9895aacb7055012cb9cca5bb0" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.212564 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fabc4810dfe21ae7b96a9a7265806208e2cd60c9895aacb7055012cb9cca5bb0"} err="failed to get container status \"fabc4810dfe21ae7b96a9a7265806208e2cd60c9895aacb7055012cb9cca5bb0\": rpc error: code = NotFound desc = could not find container \"fabc4810dfe21ae7b96a9a7265806208e2cd60c9895aacb7055012cb9cca5bb0\": container with ID starting with fabc4810dfe21ae7b96a9a7265806208e2cd60c9895aacb7055012cb9cca5bb0 not found: ID does not exist" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.212697 4795 scope.go:117] "RemoveContainer" containerID="009203feedf65c73daba0a5101d58c1b56cac0f52269c93ddbd63d0cb333ca5c" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.216731 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73ef818d-4038-418e-87e6-a16224e788c5-config\") pod \"logging-loki-query-frontend-84558f7c9f-g2h7b\" (UID: \"73ef818d-4038-418e-87e6-a16224e788c5\") " 
pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-g2h7b" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.218078 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/73ef818d-4038-418e-87e6-a16224e788c5-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-g2h7b\" (UID: \"73ef818d-4038-418e-87e6-a16224e788c5\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-g2h7b" Nov 29 07:54:58 crc kubenswrapper[4795]: E1129 07:54:58.218246 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"009203feedf65c73daba0a5101d58c1b56cac0f52269c93ddbd63d0cb333ca5c\": container with ID starting with 009203feedf65c73daba0a5101d58c1b56cac0f52269c93ddbd63d0cb333ca5c not found: ID does not exist" containerID="009203feedf65c73daba0a5101d58c1b56cac0f52269c93ddbd63d0cb333ca5c" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.218298 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"009203feedf65c73daba0a5101d58c1b56cac0f52269c93ddbd63d0cb333ca5c"} err="failed to get container status \"009203feedf65c73daba0a5101d58c1b56cac0f52269c93ddbd63d0cb333ca5c\": rpc error: code = NotFound desc = could not find container \"009203feedf65c73daba0a5101d58c1b56cac0f52269c93ddbd63d0cb333ca5c\": container with ID starting with 009203feedf65c73daba0a5101d58c1b56cac0f52269c93ddbd63d0cb333ca5c not found: ID does not exist" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.218329 4795 scope.go:117] "RemoveContainer" containerID="1ff4a374ea28d212df7d4db9a33c9e2ed217efa911f9d80bc0eb9b7fb0decd32" Nov 29 07:54:58 crc kubenswrapper[4795]: E1129 07:54:58.219742 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"1ff4a374ea28d212df7d4db9a33c9e2ed217efa911f9d80bc0eb9b7fb0decd32\": container with ID starting with 1ff4a374ea28d212df7d4db9a33c9e2ed217efa911f9d80bc0eb9b7fb0decd32 not found: ID does not exist" containerID="1ff4a374ea28d212df7d4db9a33c9e2ed217efa911f9d80bc0eb9b7fb0decd32" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.221255 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ff4a374ea28d212df7d4db9a33c9e2ed217efa911f9d80bc0eb9b7fb0decd32"} err="failed to get container status \"1ff4a374ea28d212df7d4db9a33c9e2ed217efa911f9d80bc0eb9b7fb0decd32\": rpc error: code = NotFound desc = could not find container \"1ff4a374ea28d212df7d4db9a33c9e2ed217efa911f9d80bc0eb9b7fb0decd32\": container with ID starting with 1ff4a374ea28d212df7d4db9a33c9e2ed217efa911f9d80bc0eb9b7fb0decd32 not found: ID does not exist" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.221375 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/73ef818d-4038-418e-87e6-a16224e788c5-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-g2h7b\" (UID: \"73ef818d-4038-418e-87e6-a16224e788c5\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-g2h7b" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.231388 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-575bf4587d-cslrg"] Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.231518 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-575bf4587d-fvfcs" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.231857 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73ef818d-4038-418e-87e6-a16224e788c5-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-g2h7b\" (UID: \"73ef818d-4038-418e-87e6-a16224e788c5\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-g2h7b" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.233615 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-l6qhl" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.236062 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-575bf4587d-fvfcs"] Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.249584 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thf4l\" (UniqueName: \"kubernetes.io/projected/73ef818d-4038-418e-87e6-a16224e788c5-kube-api-access-thf4l\") pod \"logging-loki-query-frontend-84558f7c9f-g2h7b\" (UID: \"73ef818d-4038-418e-87e6-a16224e788c5\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-g2h7b" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.287915 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-g2h7b" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.291476 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a39c8bcf-1902-430f-bc47-ebb206116279" path="/var/lib/kubelet/pods/a39c8bcf-1902-430f-bc47-ebb206116279/volumes" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.319338 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04a66bf0-d1a8-4bf7-85e4-8974cc247cd0-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-575bf4587d-cslrg\" (UID: \"04a66bf0-d1a8-4bf7-85e4-8974cc247cd0\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-cslrg" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.319395 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/c5d428b2-eb39-4936-819d-08321d96d015-lokistack-gateway\") pod \"logging-loki-gateway-575bf4587d-fvfcs\" (UID: \"c5d428b2-eb39-4936-819d-08321d96d015\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-fvfcs" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.319527 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5d428b2-eb39-4936-819d-08321d96d015-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-575bf4587d-fvfcs\" (UID: \"c5d428b2-eb39-4936-819d-08321d96d015\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-fvfcs" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.319662 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: 
\"kubernetes.io/secret/04a66bf0-d1a8-4bf7-85e4-8974cc247cd0-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-575bf4587d-cslrg\" (UID: \"04a66bf0-d1a8-4bf7-85e4-8974cc247cd0\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-cslrg" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.319744 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/c5d428b2-eb39-4936-819d-08321d96d015-tenants\") pod \"logging-loki-gateway-575bf4587d-fvfcs\" (UID: \"c5d428b2-eb39-4936-819d-08321d96d015\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-fvfcs" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.319811 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/c5d428b2-eb39-4936-819d-08321d96d015-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-575bf4587d-fvfcs\" (UID: \"c5d428b2-eb39-4936-819d-08321d96d015\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-fvfcs" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.319874 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/04a66bf0-d1a8-4bf7-85e4-8974cc247cd0-tenants\") pod \"logging-loki-gateway-575bf4587d-cslrg\" (UID: \"04a66bf0-d1a8-4bf7-85e4-8974cc247cd0\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-cslrg" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.319936 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5d428b2-eb39-4936-819d-08321d96d015-logging-loki-ca-bundle\") pod \"logging-loki-gateway-575bf4587d-fvfcs\" (UID: \"c5d428b2-eb39-4936-819d-08321d96d015\") " 
pod="openshift-logging/logging-loki-gateway-575bf4587d-fvfcs" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.319974 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp4w4\" (UniqueName: \"kubernetes.io/projected/c5d428b2-eb39-4936-819d-08321d96d015-kube-api-access-zp4w4\") pod \"logging-loki-gateway-575bf4587d-fvfcs\" (UID: \"c5d428b2-eb39-4936-819d-08321d96d015\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-fvfcs" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.319997 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/04a66bf0-d1a8-4bf7-85e4-8974cc247cd0-lokistack-gateway\") pod \"logging-loki-gateway-575bf4587d-cslrg\" (UID: \"04a66bf0-d1a8-4bf7-85e4-8974cc247cd0\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-cslrg" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.320017 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/04a66bf0-d1a8-4bf7-85e4-8974cc247cd0-rbac\") pod \"logging-loki-gateway-575bf4587d-cslrg\" (UID: \"04a66bf0-d1a8-4bf7-85e4-8974cc247cd0\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-cslrg" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.320036 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/c5d428b2-eb39-4936-819d-08321d96d015-tls-secret\") pod \"logging-loki-gateway-575bf4587d-fvfcs\" (UID: \"c5d428b2-eb39-4936-819d-08321d96d015\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-fvfcs" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.320080 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: 
\"kubernetes.io/secret/04a66bf0-d1a8-4bf7-85e4-8974cc247cd0-tls-secret\") pod \"logging-loki-gateway-575bf4587d-cslrg\" (UID: \"04a66bf0-d1a8-4bf7-85e4-8974cc247cd0\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-cslrg" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.320110 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dhst\" (UniqueName: \"kubernetes.io/projected/04a66bf0-d1a8-4bf7-85e4-8974cc247cd0-kube-api-access-7dhst\") pod \"logging-loki-gateway-575bf4587d-cslrg\" (UID: \"04a66bf0-d1a8-4bf7-85e4-8974cc247cd0\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-cslrg" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.320139 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04a66bf0-d1a8-4bf7-85e4-8974cc247cd0-logging-loki-ca-bundle\") pod \"logging-loki-gateway-575bf4587d-cslrg\" (UID: \"04a66bf0-d1a8-4bf7-85e4-8974cc247cd0\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-cslrg" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.320203 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/c5d428b2-eb39-4936-819d-08321d96d015-rbac\") pod \"logging-loki-gateway-575bf4587d-fvfcs\" (UID: \"c5d428b2-eb39-4936-819d-08321d96d015\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-fvfcs" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.421709 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5d428b2-eb39-4936-819d-08321d96d015-logging-loki-ca-bundle\") pod \"logging-loki-gateway-575bf4587d-fvfcs\" (UID: \"c5d428b2-eb39-4936-819d-08321d96d015\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-fvfcs" Nov 29 07:54:58 crc 
kubenswrapper[4795]: I1129 07:54:58.421883 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp4w4\" (UniqueName: \"kubernetes.io/projected/c5d428b2-eb39-4936-819d-08321d96d015-kube-api-access-zp4w4\") pod \"logging-loki-gateway-575bf4587d-fvfcs\" (UID: \"c5d428b2-eb39-4936-819d-08321d96d015\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-fvfcs" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.421918 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/04a66bf0-d1a8-4bf7-85e4-8974cc247cd0-lokistack-gateway\") pod \"logging-loki-gateway-575bf4587d-cslrg\" (UID: \"04a66bf0-d1a8-4bf7-85e4-8974cc247cd0\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-cslrg" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.421952 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/04a66bf0-d1a8-4bf7-85e4-8974cc247cd0-rbac\") pod \"logging-loki-gateway-575bf4587d-cslrg\" (UID: \"04a66bf0-d1a8-4bf7-85e4-8974cc247cd0\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-cslrg" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.421975 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/c5d428b2-eb39-4936-819d-08321d96d015-tls-secret\") pod \"logging-loki-gateway-575bf4587d-fvfcs\" (UID: \"c5d428b2-eb39-4936-819d-08321d96d015\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-fvfcs" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.421998 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/04a66bf0-d1a8-4bf7-85e4-8974cc247cd0-tls-secret\") pod \"logging-loki-gateway-575bf4587d-cslrg\" (UID: \"04a66bf0-d1a8-4bf7-85e4-8974cc247cd0\") " 
pod="openshift-logging/logging-loki-gateway-575bf4587d-cslrg" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.422012 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dhst\" (UniqueName: \"kubernetes.io/projected/04a66bf0-d1a8-4bf7-85e4-8974cc247cd0-kube-api-access-7dhst\") pod \"logging-loki-gateway-575bf4587d-cslrg\" (UID: \"04a66bf0-d1a8-4bf7-85e4-8974cc247cd0\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-cslrg" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.422043 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04a66bf0-d1a8-4bf7-85e4-8974cc247cd0-logging-loki-ca-bundle\") pod \"logging-loki-gateway-575bf4587d-cslrg\" (UID: \"04a66bf0-d1a8-4bf7-85e4-8974cc247cd0\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-cslrg" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.422094 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/c5d428b2-eb39-4936-819d-08321d96d015-rbac\") pod \"logging-loki-gateway-575bf4587d-fvfcs\" (UID: \"c5d428b2-eb39-4936-819d-08321d96d015\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-fvfcs" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.422159 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04a66bf0-d1a8-4bf7-85e4-8974cc247cd0-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-575bf4587d-cslrg\" (UID: \"04a66bf0-d1a8-4bf7-85e4-8974cc247cd0\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-cslrg" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.422182 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: 
\"kubernetes.io/configmap/c5d428b2-eb39-4936-819d-08321d96d015-lokistack-gateway\") pod \"logging-loki-gateway-575bf4587d-fvfcs\" (UID: \"c5d428b2-eb39-4936-819d-08321d96d015\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-fvfcs" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.422215 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5d428b2-eb39-4936-819d-08321d96d015-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-575bf4587d-fvfcs\" (UID: \"c5d428b2-eb39-4936-819d-08321d96d015\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-fvfcs" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.422244 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/04a66bf0-d1a8-4bf7-85e4-8974cc247cd0-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-575bf4587d-cslrg\" (UID: \"04a66bf0-d1a8-4bf7-85e4-8974cc247cd0\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-cslrg" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.422277 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/c5d428b2-eb39-4936-819d-08321d96d015-tenants\") pod \"logging-loki-gateway-575bf4587d-fvfcs\" (UID: \"c5d428b2-eb39-4936-819d-08321d96d015\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-fvfcs" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.422304 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/c5d428b2-eb39-4936-819d-08321d96d015-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-575bf4587d-fvfcs\" (UID: \"c5d428b2-eb39-4936-819d-08321d96d015\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-fvfcs" Nov 29 07:54:58 
crc kubenswrapper[4795]: I1129 07:54:58.422368 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/04a66bf0-d1a8-4bf7-85e4-8974cc247cd0-tenants\") pod \"logging-loki-gateway-575bf4587d-cslrg\" (UID: \"04a66bf0-d1a8-4bf7-85e4-8974cc247cd0\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-cslrg" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.422390 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5d428b2-eb39-4936-819d-08321d96d015-logging-loki-ca-bundle\") pod \"logging-loki-gateway-575bf4587d-fvfcs\" (UID: \"c5d428b2-eb39-4936-819d-08321d96d015\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-fvfcs" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.423420 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04a66bf0-d1a8-4bf7-85e4-8974cc247cd0-logging-loki-ca-bundle\") pod \"logging-loki-gateway-575bf4587d-cslrg\" (UID: \"04a66bf0-d1a8-4bf7-85e4-8974cc247cd0\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-cslrg" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.424510 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/04a66bf0-d1a8-4bf7-85e4-8974cc247cd0-rbac\") pod \"logging-loki-gateway-575bf4587d-cslrg\" (UID: \"04a66bf0-d1a8-4bf7-85e4-8974cc247cd0\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-cslrg" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.424832 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04a66bf0-d1a8-4bf7-85e4-8974cc247cd0-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-575bf4587d-cslrg\" (UID: \"04a66bf0-d1a8-4bf7-85e4-8974cc247cd0\") " 
pod="openshift-logging/logging-loki-gateway-575bf4587d-cslrg" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.425532 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/04a66bf0-d1a8-4bf7-85e4-8974cc247cd0-lokistack-gateway\") pod \"logging-loki-gateway-575bf4587d-cslrg\" (UID: \"04a66bf0-d1a8-4bf7-85e4-8974cc247cd0\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-cslrg" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.427851 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/c5d428b2-eb39-4936-819d-08321d96d015-tenants\") pod \"logging-loki-gateway-575bf4587d-fvfcs\" (UID: \"c5d428b2-eb39-4936-819d-08321d96d015\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-fvfcs" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.428608 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5d428b2-eb39-4936-819d-08321d96d015-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-575bf4587d-fvfcs\" (UID: \"c5d428b2-eb39-4936-819d-08321d96d015\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-fvfcs" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.428716 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/c5d428b2-eb39-4936-819d-08321d96d015-tls-secret\") pod \"logging-loki-gateway-575bf4587d-fvfcs\" (UID: \"c5d428b2-eb39-4936-819d-08321d96d015\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-fvfcs" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.428881 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/c5d428b2-eb39-4936-819d-08321d96d015-lokistack-gateway\") pod 
\"logging-loki-gateway-575bf4587d-fvfcs\" (UID: \"c5d428b2-eb39-4936-819d-08321d96d015\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-fvfcs" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.429485 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/c5d428b2-eb39-4936-819d-08321d96d015-rbac\") pod \"logging-loki-gateway-575bf4587d-fvfcs\" (UID: \"c5d428b2-eb39-4936-819d-08321d96d015\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-fvfcs" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.434984 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/04a66bf0-d1a8-4bf7-85e4-8974cc247cd0-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-575bf4587d-cslrg\" (UID: \"04a66bf0-d1a8-4bf7-85e4-8974cc247cd0\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-cslrg" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.435540 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/c5d428b2-eb39-4936-819d-08321d96d015-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-575bf4587d-fvfcs\" (UID: \"c5d428b2-eb39-4936-819d-08321d96d015\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-fvfcs" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.443236 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/04a66bf0-d1a8-4bf7-85e4-8974cc247cd0-tenants\") pod \"logging-loki-gateway-575bf4587d-cslrg\" (UID: \"04a66bf0-d1a8-4bf7-85e4-8974cc247cd0\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-cslrg" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.445401 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dhst\" (UniqueName: 
\"kubernetes.io/projected/04a66bf0-d1a8-4bf7-85e4-8974cc247cd0-kube-api-access-7dhst\") pod \"logging-loki-gateway-575bf4587d-cslrg\" (UID: \"04a66bf0-d1a8-4bf7-85e4-8974cc247cd0\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-cslrg" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.446381 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp4w4\" (UniqueName: \"kubernetes.io/projected/c5d428b2-eb39-4936-819d-08321d96d015-kube-api-access-zp4w4\") pod \"logging-loki-gateway-575bf4587d-fvfcs\" (UID: \"c5d428b2-eb39-4936-819d-08321d96d015\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-fvfcs" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.449065 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/04a66bf0-d1a8-4bf7-85e4-8974cc247cd0-tls-secret\") pod \"logging-loki-gateway-575bf4587d-cslrg\" (UID: \"04a66bf0-d1a8-4bf7-85e4-8974cc247cd0\") " pod="openshift-logging/logging-loki-gateway-575bf4587d-cslrg" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.501011 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-575bf4587d-cslrg" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.552215 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-jqcx2"] Nov 29 07:54:58 crc kubenswrapper[4795]: W1129 07:54:58.559956 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod996554b0_3876_4c69_be10_a5f2c4a5c2e4.slice/crio-bb84fc3eece32ec10767251ff2852a716f8a97db56f0aa3b50aeac41059ff716 WatchSource:0}: Error finding container bb84fc3eece32ec10767251ff2852a716f8a97db56f0aa3b50aeac41059ff716: Status 404 returned error can't find the container with id bb84fc3eece32ec10767251ff2852a716f8a97db56f0aa3b50aeac41059ff716 Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.656076 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-2j6fx"] Nov 29 07:54:58 crc kubenswrapper[4795]: W1129 07:54:58.658332 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fd41086_3cec_46c6_a4ed_82885461095c.slice/crio-47f412623e29d806aa98f3c03efef85c649490126667d8c37ec2dfeaa43a70b0 WatchSource:0}: Error finding container 47f412623e29d806aa98f3c03efef85c649490126667d8c37ec2dfeaa43a70b0: Status 404 returned error can't find the container with id 47f412623e29d806aa98f3c03efef85c649490126667d8c37ec2dfeaa43a70b0 Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.681010 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-575bf4587d-fvfcs" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.861135 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-g2h7b"] Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.885198 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.886047 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.890666 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.895914 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.903983 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.956564 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.957836 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.960766 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.960961 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.981040 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Nov 29 07:54:58 crc kubenswrapper[4795]: I1129 07:54:58.986579 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-575bf4587d-cslrg"] Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.037157 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/65b29f76-cf84-4166-b1b1-17927cbfd032-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"65b29f76-cf84-4166-b1b1-17927cbfd032\") " pod="openshift-logging/logging-loki-compactor-0" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.037198 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/ec05fb5e-2fc9-424a-a305-4ac1734df8d5-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"ec05fb5e-2fc9-424a-a305-4ac1734df8d5\") " pod="openshift-logging/logging-loki-ingester-0" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.037226 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/65b29f76-cf84-4166-b1b1-17927cbfd032-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"65b29f76-cf84-4166-b1b1-17927cbfd032\") " 
pod="openshift-logging/logging-loki-compactor-0" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.037325 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f93a47f9-6634-4edd-bcd1-77704c77c023\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f93a47f9-6634-4edd-bcd1-77704c77c023\") pod \"logging-loki-compactor-0\" (UID: \"65b29f76-cf84-4166-b1b1-17927cbfd032\") " pod="openshift-logging/logging-loki-compactor-0" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.037395 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7xgj\" (UniqueName: \"kubernetes.io/projected/ec05fb5e-2fc9-424a-a305-4ac1734df8d5-kube-api-access-f7xgj\") pod \"logging-loki-ingester-0\" (UID: \"ec05fb5e-2fc9-424a-a305-4ac1734df8d5\") " pod="openshift-logging/logging-loki-ingester-0" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.037419 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec05fb5e-2fc9-424a-a305-4ac1734df8d5-config\") pod \"logging-loki-ingester-0\" (UID: \"ec05fb5e-2fc9-424a-a305-4ac1734df8d5\") " pod="openshift-logging/logging-loki-ingester-0" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.037434 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec05fb5e-2fc9-424a-a305-4ac1734df8d5-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"ec05fb5e-2fc9-424a-a305-4ac1734df8d5\") " pod="openshift-logging/logging-loki-ingester-0" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.037451 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: 
\"kubernetes.io/secret/65b29f76-cf84-4166-b1b1-17927cbfd032-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"65b29f76-cf84-4166-b1b1-17927cbfd032\") " pod="openshift-logging/logging-loki-compactor-0" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.037541 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-795pr\" (UniqueName: \"kubernetes.io/projected/65b29f76-cf84-4166-b1b1-17927cbfd032-kube-api-access-795pr\") pod \"logging-loki-compactor-0\" (UID: \"65b29f76-cf84-4166-b1b1-17927cbfd032\") " pod="openshift-logging/logging-loki-compactor-0" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.037604 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5102cd9c-4ff5-43c4-ad13-810e2efc5f05\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5102cd9c-4ff5-43c4-ad13-810e2efc5f05\") pod \"logging-loki-ingester-0\" (UID: \"ec05fb5e-2fc9-424a-a305-4ac1734df8d5\") " pod="openshift-logging/logging-loki-ingester-0" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.037654 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/ec05fb5e-2fc9-424a-a305-4ac1734df8d5-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"ec05fb5e-2fc9-424a-a305-4ac1734df8d5\") " pod="openshift-logging/logging-loki-ingester-0" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.037720 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d7d0a980-2c47-49bb-8be1-3cb395ea40cd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7d0a980-2c47-49bb-8be1-3cb395ea40cd\") pod \"logging-loki-ingester-0\" (UID: \"ec05fb5e-2fc9-424a-a305-4ac1734df8d5\") " pod="openshift-logging/logging-loki-ingester-0" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 
07:54:59.037757 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65b29f76-cf84-4166-b1b1-17927cbfd032-config\") pod \"logging-loki-compactor-0\" (UID: \"65b29f76-cf84-4166-b1b1-17927cbfd032\") " pod="openshift-logging/logging-loki-compactor-0" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.037805 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65b29f76-cf84-4166-b1b1-17927cbfd032-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"65b29f76-cf84-4166-b1b1-17927cbfd032\") " pod="openshift-logging/logging-loki-compactor-0" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.037826 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/ec05fb5e-2fc9-424a-a305-4ac1734df8d5-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"ec05fb5e-2fc9-424a-a305-4ac1734df8d5\") " pod="openshift-logging/logging-loki-ingester-0" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.039980 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.041174 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.045547 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.045859 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.050408 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.050574 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-575bf4587d-cslrg" event={"ID":"04a66bf0-d1a8-4bf7-85e4-8974cc247cd0","Type":"ContainerStarted","Data":"c70b67950a9006743e149537f0dcf4507690b8d942c724a935e9d512c762b12d"} Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.051567 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-76cc67bf56-2j6fx" event={"ID":"2fd41086-3cec-46c6-a4ed-82885461095c","Type":"ContainerStarted","Data":"47f412623e29d806aa98f3c03efef85c649490126667d8c37ec2dfeaa43a70b0"} Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.052446 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-g2h7b" event={"ID":"73ef818d-4038-418e-87e6-a16224e788c5","Type":"ContainerStarted","Data":"0a497ec78e99551c97928abd738daea8fd864bd653c79c759d9d8643f4bcaa58"} Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.054876 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-5895d59bb8-jqcx2" event={"ID":"996554b0-3876-4c69-be10-a5f2c4a5c2e4","Type":"ContainerStarted","Data":"bb84fc3eece32ec10767251ff2852a716f8a97db56f0aa3b50aeac41059ff716"} Nov 29 07:54:59 crc kubenswrapper[4795]: 
I1129 07:54:59.139769 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/419b2242-f6e9-429b-89fe-a8e499b5952b-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"419b2242-f6e9-429b-89fe-a8e499b5952b\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.139826 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65b29f76-cf84-4166-b1b1-17927cbfd032-config\") pod \"logging-loki-compactor-0\" (UID: \"65b29f76-cf84-4166-b1b1-17927cbfd032\") " pod="openshift-logging/logging-loki-compactor-0" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.139876 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zslb\" (UniqueName: \"kubernetes.io/projected/419b2242-f6e9-429b-89fe-a8e499b5952b-kube-api-access-7zslb\") pod \"logging-loki-index-gateway-0\" (UID: \"419b2242-f6e9-429b-89fe-a8e499b5952b\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.139910 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/65b29f76-cf84-4166-b1b1-17927cbfd032-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"65b29f76-cf84-4166-b1b1-17927cbfd032\") " pod="openshift-logging/logging-loki-compactor-0" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.139946 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f93a47f9-6634-4edd-bcd1-77704c77c023\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f93a47f9-6634-4edd-bcd1-77704c77c023\") pod \"logging-loki-compactor-0\" (UID: 
\"65b29f76-cf84-4166-b1b1-17927cbfd032\") " pod="openshift-logging/logging-loki-compactor-0" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.139975 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/419b2242-f6e9-429b-89fe-a8e499b5952b-config\") pod \"logging-loki-index-gateway-0\" (UID: \"419b2242-f6e9-429b-89fe-a8e499b5952b\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.139997 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/419b2242-f6e9-429b-89fe-a8e499b5952b-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"419b2242-f6e9-429b-89fe-a8e499b5952b\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.140027 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec05fb5e-2fc9-424a-a305-4ac1734df8d5-config\") pod \"logging-loki-ingester-0\" (UID: \"ec05fb5e-2fc9-424a-a305-4ac1734df8d5\") " pod="openshift-logging/logging-loki-ingester-0" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.140048 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/65b29f76-cf84-4166-b1b1-17927cbfd032-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"65b29f76-cf84-4166-b1b1-17927cbfd032\") " pod="openshift-logging/logging-loki-compactor-0" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.140078 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: 
\"kubernetes.io/secret/419b2242-f6e9-429b-89fe-a8e499b5952b-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"419b2242-f6e9-429b-89fe-a8e499b5952b\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.140111 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-795pr\" (UniqueName: \"kubernetes.io/projected/65b29f76-cf84-4166-b1b1-17927cbfd032-kube-api-access-795pr\") pod \"logging-loki-compactor-0\" (UID: \"65b29f76-cf84-4166-b1b1-17927cbfd032\") " pod="openshift-logging/logging-loki-compactor-0" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.140143 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5102cd9c-4ff5-43c4-ad13-810e2efc5f05\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5102cd9c-4ff5-43c4-ad13-810e2efc5f05\") pod \"logging-loki-ingester-0\" (UID: \"ec05fb5e-2fc9-424a-a305-4ac1734df8d5\") " pod="openshift-logging/logging-loki-ingester-0" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.140181 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d7d0a980-2c47-49bb-8be1-3cb395ea40cd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7d0a980-2c47-49bb-8be1-3cb395ea40cd\") pod \"logging-loki-ingester-0\" (UID: \"ec05fb5e-2fc9-424a-a305-4ac1734df8d5\") " pod="openshift-logging/logging-loki-ingester-0" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.140222 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65b29f76-cf84-4166-b1b1-17927cbfd032-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"65b29f76-cf84-4166-b1b1-17927cbfd032\") " pod="openshift-logging/logging-loki-compactor-0" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.140245 4795 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/ec05fb5e-2fc9-424a-a305-4ac1734df8d5-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"ec05fb5e-2fc9-424a-a305-4ac1734df8d5\") " pod="openshift-logging/logging-loki-ingester-0" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.140276 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/ec05fb5e-2fc9-424a-a305-4ac1734df8d5-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"ec05fb5e-2fc9-424a-a305-4ac1734df8d5\") " pod="openshift-logging/logging-loki-ingester-0" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.140307 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/65b29f76-cf84-4166-b1b1-17927cbfd032-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"65b29f76-cf84-4166-b1b1-17927cbfd032\") " pod="openshift-logging/logging-loki-compactor-0" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.140340 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7xgj\" (UniqueName: \"kubernetes.io/projected/ec05fb5e-2fc9-424a-a305-4ac1734df8d5-kube-api-access-f7xgj\") pod \"logging-loki-ingester-0\" (UID: \"ec05fb5e-2fc9-424a-a305-4ac1734df8d5\") " pod="openshift-logging/logging-loki-ingester-0" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.140361 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec05fb5e-2fc9-424a-a305-4ac1734df8d5-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"ec05fb5e-2fc9-424a-a305-4ac1734df8d5\") " pod="openshift-logging/logging-loki-ingester-0" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.140397 4795 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0056cd54-075e-4b8c-ac29-91a6ff1654a9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0056cd54-075e-4b8c-ac29-91a6ff1654a9\") pod \"logging-loki-index-gateway-0\" (UID: \"419b2242-f6e9-429b-89fe-a8e499b5952b\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.140421 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/419b2242-f6e9-429b-89fe-a8e499b5952b-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"419b2242-f6e9-429b-89fe-a8e499b5952b\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.140455 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/ec05fb5e-2fc9-424a-a305-4ac1734df8d5-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"ec05fb5e-2fc9-424a-a305-4ac1734df8d5\") " pod="openshift-logging/logging-loki-ingester-0" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.141164 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65b29f76-cf84-4166-b1b1-17927cbfd032-config\") pod \"logging-loki-compactor-0\" (UID: \"65b29f76-cf84-4166-b1b1-17927cbfd032\") " pod="openshift-logging/logging-loki-compactor-0" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.142524 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec05fb5e-2fc9-424a-a305-4ac1734df8d5-config\") pod \"logging-loki-ingester-0\" (UID: \"ec05fb5e-2fc9-424a-a305-4ac1734df8d5\") " pod="openshift-logging/logging-loki-ingester-0" Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.143721 4795 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec05fb5e-2fc9-424a-a305-4ac1734df8d5-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"ec05fb5e-2fc9-424a-a305-4ac1734df8d5\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.144116 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65b29f76-cf84-4166-b1b1-17927cbfd032-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"65b29f76-cf84-4166-b1b1-17927cbfd032\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.145751 4795 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.145785 4795 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d7d0a980-2c47-49bb-8be1-3cb395ea40cd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7d0a980-2c47-49bb-8be1-3cb395ea40cd\") pod \"logging-loki-ingester-0\" (UID: \"ec05fb5e-2fc9-424a-a305-4ac1734df8d5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ccfa51de5c2fd7cddbf17f2a7006be97303426db2ff3a813485cfffa1deadbcd/globalmount\"" pod="openshift-logging/logging-loki-ingester-0"
Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.148005 4795 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.148039 4795 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5102cd9c-4ff5-43c4-ad13-810e2efc5f05\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5102cd9c-4ff5-43c4-ad13-810e2efc5f05\") pod \"logging-loki-ingester-0\" (UID: \"ec05fb5e-2fc9-424a-a305-4ac1734df8d5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/25d93288bca825a741a0cce36a8c737ce31042f7686b9402092052af3fc2ec31/globalmount\"" pod="openshift-logging/logging-loki-ingester-0"
Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.148100 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/ec05fb5e-2fc9-424a-a305-4ac1734df8d5-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"ec05fb5e-2fc9-424a-a305-4ac1734df8d5\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.148298 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/ec05fb5e-2fc9-424a-a305-4ac1734df8d5-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"ec05fb5e-2fc9-424a-a305-4ac1734df8d5\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.148713 4795 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.148738 4795 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f93a47f9-6634-4edd-bcd1-77704c77c023\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f93a47f9-6634-4edd-bcd1-77704c77c023\") pod \"logging-loki-compactor-0\" (UID: \"65b29f76-cf84-4166-b1b1-17927cbfd032\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c188ce955e2c59e5973ea1b45f5918722ccccdef70cf7b10d4e675fae1a12ce7/globalmount\"" pod="openshift-logging/logging-loki-compactor-0"
Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.148766 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/65b29f76-cf84-4166-b1b1-17927cbfd032-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"65b29f76-cf84-4166-b1b1-17927cbfd032\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.149092 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/65b29f76-cf84-4166-b1b1-17927cbfd032-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"65b29f76-cf84-4166-b1b1-17927cbfd032\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.149267 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/ec05fb5e-2fc9-424a-a305-4ac1734df8d5-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"ec05fb5e-2fc9-424a-a305-4ac1734df8d5\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.153143 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/65b29f76-cf84-4166-b1b1-17927cbfd032-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"65b29f76-cf84-4166-b1b1-17927cbfd032\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.159262 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7xgj\" (UniqueName: \"kubernetes.io/projected/ec05fb5e-2fc9-424a-a305-4ac1734df8d5-kube-api-access-f7xgj\") pod \"logging-loki-ingester-0\" (UID: \"ec05fb5e-2fc9-424a-a305-4ac1734df8d5\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.170689 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-795pr\" (UniqueName: \"kubernetes.io/projected/65b29f76-cf84-4166-b1b1-17927cbfd032-kube-api-access-795pr\") pod \"logging-loki-compactor-0\" (UID: \"65b29f76-cf84-4166-b1b1-17927cbfd032\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.176504 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d7d0a980-2c47-49bb-8be1-3cb395ea40cd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7d0a980-2c47-49bb-8be1-3cb395ea40cd\") pod \"logging-loki-ingester-0\" (UID: \"ec05fb5e-2fc9-424a-a305-4ac1734df8d5\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.177784 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5102cd9c-4ff5-43c4-ad13-810e2efc5f05\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5102cd9c-4ff5-43c4-ad13-810e2efc5f05\") pod \"logging-loki-ingester-0\" (UID: \"ec05fb5e-2fc9-424a-a305-4ac1734df8d5\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.182483 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f93a47f9-6634-4edd-bcd1-77704c77c023\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f93a47f9-6634-4edd-bcd1-77704c77c023\") pod \"logging-loki-compactor-0\" (UID: \"65b29f76-cf84-4166-b1b1-17927cbfd032\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.215288 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0"
Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.243827 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/419b2242-f6e9-429b-89fe-a8e499b5952b-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"419b2242-f6e9-429b-89fe-a8e499b5952b\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.243883 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zslb\" (UniqueName: \"kubernetes.io/projected/419b2242-f6e9-429b-89fe-a8e499b5952b-kube-api-access-7zslb\") pod \"logging-loki-index-gateway-0\" (UID: \"419b2242-f6e9-429b-89fe-a8e499b5952b\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.243916 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/419b2242-f6e9-429b-89fe-a8e499b5952b-config\") pod \"logging-loki-index-gateway-0\" (UID: \"419b2242-f6e9-429b-89fe-a8e499b5952b\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.243931 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/419b2242-f6e9-429b-89fe-a8e499b5952b-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"419b2242-f6e9-429b-89fe-a8e499b5952b\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.243953 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/419b2242-f6e9-429b-89fe-a8e499b5952b-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"419b2242-f6e9-429b-89fe-a8e499b5952b\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.244026 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0056cd54-075e-4b8c-ac29-91a6ff1654a9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0056cd54-075e-4b8c-ac29-91a6ff1654a9\") pod \"logging-loki-index-gateway-0\" (UID: \"419b2242-f6e9-429b-89fe-a8e499b5952b\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.244042 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/419b2242-f6e9-429b-89fe-a8e499b5952b-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"419b2242-f6e9-429b-89fe-a8e499b5952b\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.245478 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/419b2242-f6e9-429b-89fe-a8e499b5952b-config\") pod \"logging-loki-index-gateway-0\" (UID: \"419b2242-f6e9-429b-89fe-a8e499b5952b\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.247954 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/419b2242-f6e9-429b-89fe-a8e499b5952b-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"419b2242-f6e9-429b-89fe-a8e499b5952b\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.248575 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/419b2242-f6e9-429b-89fe-a8e499b5952b-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"419b2242-f6e9-429b-89fe-a8e499b5952b\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.249115 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/419b2242-f6e9-429b-89fe-a8e499b5952b-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"419b2242-f6e9-429b-89fe-a8e499b5952b\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.252490 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/419b2242-f6e9-429b-89fe-a8e499b5952b-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"419b2242-f6e9-429b-89fe-a8e499b5952b\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.253670 4795 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.253694 4795 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0056cd54-075e-4b8c-ac29-91a6ff1654a9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0056cd54-075e-4b8c-ac29-91a6ff1654a9\") pod \"logging-loki-index-gateway-0\" (UID: \"419b2242-f6e9-429b-89fe-a8e499b5952b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3632fa605cfbedee5f94752e41e38e423e99632489a94692551aca0019835728/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0"
Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.265112 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zslb\" (UniqueName: \"kubernetes.io/projected/419b2242-f6e9-429b-89fe-a8e499b5952b-kube-api-access-7zslb\") pod \"logging-loki-index-gateway-0\" (UID: \"419b2242-f6e9-429b-89fe-a8e499b5952b\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.280984 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0056cd54-075e-4b8c-ac29-91a6ff1654a9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0056cd54-075e-4b8c-ac29-91a6ff1654a9\") pod \"logging-loki-index-gateway-0\" (UID: \"419b2242-f6e9-429b-89fe-a8e499b5952b\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.287553 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0"
Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.347949 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-575bf4587d-fvfcs"]
Nov 29 07:54:59 crc kubenswrapper[4795]: W1129 07:54:59.356568 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5d428b2_eb39_4936_819d_08321d96d015.slice/crio-4eeeb6a5b0aacb9f80f8247adaa45d36f0feece820dffe12a91ee1a63b73cc0f WatchSource:0}: Error finding container 4eeeb6a5b0aacb9f80f8247adaa45d36f0feece820dffe12a91ee1a63b73cc0f: Status 404 returned error can't find the container with id 4eeeb6a5b0aacb9f80f8247adaa45d36f0feece820dffe12a91ee1a63b73cc0f
Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.364311 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0"
Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.658842 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"]
Nov 29 07:54:59 crc kubenswrapper[4795]: W1129 07:54:59.666861 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec05fb5e_2fc9_424a_a305_4ac1734df8d5.slice/crio-8245fbf4dcc1386a11448a1fe56df3b4b59b4d2353309086776cdc268afa8d10 WatchSource:0}: Error finding container 8245fbf4dcc1386a11448a1fe56df3b4b59b4d2353309086776cdc268afa8d10: Status 404 returned error can't find the container with id 8245fbf4dcc1386a11448a1fe56df3b4b59b4d2353309086776cdc268afa8d10
Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.761035 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"]
Nov 29 07:54:59 crc kubenswrapper[4795]: W1129 07:54:59.774284 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65b29f76_cf84_4166_b1b1_17927cbfd032.slice/crio-b0acc8dedbf790076cfce1708a64f6a2edd79652b45b8a9253ba367ab2f55780 WatchSource:0}: Error finding container b0acc8dedbf790076cfce1708a64f6a2edd79652b45b8a9253ba367ab2f55780: Status 404 returned error can't find the container with id b0acc8dedbf790076cfce1708a64f6a2edd79652b45b8a9253ba367ab2f55780
Nov 29 07:54:59 crc kubenswrapper[4795]: I1129 07:54:59.887820 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"]
Nov 29 07:55:00 crc kubenswrapper[4795]: I1129 07:55:00.065292 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"ec05fb5e-2fc9-424a-a305-4ac1734df8d5","Type":"ContainerStarted","Data":"8245fbf4dcc1386a11448a1fe56df3b4b59b4d2353309086776cdc268afa8d10"}
Nov 29 07:55:00 crc kubenswrapper[4795]: I1129 07:55:00.067936 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"65b29f76-cf84-4166-b1b1-17927cbfd032","Type":"ContainerStarted","Data":"b0acc8dedbf790076cfce1708a64f6a2edd79652b45b8a9253ba367ab2f55780"}
Nov 29 07:55:00 crc kubenswrapper[4795]: I1129 07:55:00.070471 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"419b2242-f6e9-429b-89fe-a8e499b5952b","Type":"ContainerStarted","Data":"da5da901968b1b33459e56647798ede31e26e27911aa282819d49566b0c984ee"}
Nov 29 07:55:00 crc kubenswrapper[4795]: I1129 07:55:00.072164 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-575bf4587d-fvfcs" event={"ID":"c5d428b2-eb39-4936-819d-08321d96d015","Type":"ContainerStarted","Data":"4eeeb6a5b0aacb9f80f8247adaa45d36f0feece820dffe12a91ee1a63b73cc0f"}
Nov 29 07:55:04 crc kubenswrapper[4795]: I1129 07:55:04.152580 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-575bf4587d-fvfcs" event={"ID":"c5d428b2-eb39-4936-819d-08321d96d015","Type":"ContainerStarted","Data":"7303c1bf4dc0ac0a2ac7ab809a381941d66387db317a4c089d92c32c15e3e9cf"}
Nov 29 07:55:04 crc kubenswrapper[4795]: I1129 07:55:04.157384 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-575bf4587d-cslrg" event={"ID":"04a66bf0-d1a8-4bf7-85e4-8974cc247cd0","Type":"ContainerStarted","Data":"8f20877e0bd06f591c033bf231601b56d82b9bd5cf762b83d51025404b7e00cd"}
Nov 29 07:55:04 crc kubenswrapper[4795]: I1129 07:55:04.158440 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-76cc67bf56-2j6fx" event={"ID":"2fd41086-3cec-46c6-a4ed-82885461095c","Type":"ContainerStarted","Data":"0f15ce61f8e5f88745f5bb8bf5da59355e6878efe4518b7564828d48ccbf00d7"}
Nov 29 07:55:04 crc kubenswrapper[4795]: I1129 07:55:04.159713 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-76cc67bf56-2j6fx"
Nov 29 07:55:04 crc kubenswrapper[4795]: I1129 07:55:04.162565 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-5895d59bb8-jqcx2" event={"ID":"996554b0-3876-4c69-be10-a5f2c4a5c2e4","Type":"ContainerStarted","Data":"03de705abf2f35cc1c5ba2a3e126143326d23e92efe57449bf53c2e58b1128ab"}
Nov 29 07:55:04 crc kubenswrapper[4795]: I1129 07:55:04.163432 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-5895d59bb8-jqcx2"
Nov 29 07:55:04 crc kubenswrapper[4795]: I1129 07:55:04.168380 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"ec05fb5e-2fc9-424a-a305-4ac1734df8d5","Type":"ContainerStarted","Data":"fe0e33f53eb4d1042c0864de5681f6e0212681a7763472979a03333dd83cd7a2"}
Nov 29 07:55:04 crc kubenswrapper[4795]: I1129 07:55:04.168972 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0"
Nov 29 07:55:04 crc kubenswrapper[4795]: I1129 07:55:04.170196 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-g2h7b" event={"ID":"73ef818d-4038-418e-87e6-a16224e788c5","Type":"ContainerStarted","Data":"4b0ff0518e887c3a5d3b62ddd4971824c2b11c638e47a5b390eabab021f52909"}
Nov 29 07:55:04 crc kubenswrapper[4795]: I1129 07:55:04.170709 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-g2h7b"
Nov 29 07:55:04 crc kubenswrapper[4795]: I1129 07:55:04.188981 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"65b29f76-cf84-4166-b1b1-17927cbfd032","Type":"ContainerStarted","Data":"58d472bf42583131656d9689a21fb9036df562324b9a686d0f24482a7b816c7e"}
Nov 29 07:55:04 crc kubenswrapper[4795]: I1129 07:55:04.189930 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0"
Nov 29 07:55:04 crc kubenswrapper[4795]: I1129 07:55:04.191624 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"419b2242-f6e9-429b-89fe-a8e499b5952b","Type":"ContainerStarted","Data":"4d6e419e4e68d91d5ccefe0dcc8f60312dbab9f3345366dd01d66458e522242a"}
Nov 29 07:55:04 crc kubenswrapper[4795]: I1129 07:55:04.192359 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0"
Nov 29 07:55:04 crc kubenswrapper[4795]: I1129 07:55:04.205421 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-76cc67bf56-2j6fx" podStartSLOduration=2.222669484 podStartE2EDuration="7.205403014s" podCreationTimestamp="2025-11-29 07:54:57 +0000 UTC" firstStartedPulling="2025-11-29 07:54:58.663270266 +0000 UTC m=+944.638846056" lastFinishedPulling="2025-11-29 07:55:03.646003796 +0000 UTC m=+949.621579586" observedRunningTime="2025-11-29 07:55:04.184346804 +0000 UTC m=+950.159922594" watchObservedRunningTime="2025-11-29 07:55:04.205403014 +0000 UTC m=+950.180978804"
Nov 29 07:55:04 crc kubenswrapper[4795]: I1129 07:55:04.216756 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=3.233423461 podStartE2EDuration="7.216732628s" podCreationTimestamp="2025-11-29 07:54:57 +0000 UTC" firstStartedPulling="2025-11-29 07:54:59.669339418 +0000 UTC m=+945.644915208" lastFinishedPulling="2025-11-29 07:55:03.652648585 +0000 UTC m=+949.628224375" observedRunningTime="2025-11-29 07:55:04.205486257 +0000 UTC m=+950.181062057" watchObservedRunningTime="2025-11-29 07:55:04.216732628 +0000 UTC m=+950.192308418"
Nov 29 07:55:04 crc kubenswrapper[4795]: I1129 07:55:04.225878 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-5895d59bb8-jqcx2" podStartSLOduration=2.235718587 podStartE2EDuration="7.225856868s" podCreationTimestamp="2025-11-29 07:54:57 +0000 UTC" firstStartedPulling="2025-11-29 07:54:58.563576882 +0000 UTC m=+944.539152672" lastFinishedPulling="2025-11-29 07:55:03.553715163 +0000 UTC m=+949.529290953" observedRunningTime="2025-11-29 07:55:04.223630474 +0000 UTC m=+950.199206264" watchObservedRunningTime="2025-11-29 07:55:04.225856868 +0000 UTC m=+950.201432658"
Nov 29 07:55:04 crc kubenswrapper[4795]: I1129 07:55:04.247208 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-g2h7b" podStartSLOduration=2.486277625 podStartE2EDuration="7.247181606s" podCreationTimestamp="2025-11-29 07:54:57 +0000 UTC" firstStartedPulling="2025-11-29 07:54:58.902771909 +0000 UTC m=+944.878347699" lastFinishedPulling="2025-11-29 07:55:03.66367589 +0000 UTC m=+949.639251680" observedRunningTime="2025-11-29 07:55:04.240880306 +0000 UTC m=+950.216456096" watchObservedRunningTime="2025-11-29 07:55:04.247181606 +0000 UTC m=+950.222757406"
Nov 29 07:55:04 crc kubenswrapper[4795]: I1129 07:55:04.286217 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.432136459 podStartE2EDuration="7.286191839s" podCreationTimestamp="2025-11-29 07:54:57 +0000 UTC" firstStartedPulling="2025-11-29 07:54:59.776376221 +0000 UTC m=+945.751952011" lastFinishedPulling="2025-11-29 07:55:03.630431601 +0000 UTC m=+949.606007391" observedRunningTime="2025-11-29 07:55:04.264813629 +0000 UTC m=+950.240389419" watchObservedRunningTime="2025-11-29 07:55:04.286191839 +0000 UTC m=+950.261767629"
Nov 29 07:55:04 crc kubenswrapper[4795]: I1129 07:55:04.288107 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=2.537211538 podStartE2EDuration="6.288097704s" podCreationTimestamp="2025-11-29 07:54:58 +0000 UTC" firstStartedPulling="2025-11-29 07:54:59.892408312 +0000 UTC m=+945.867984102" lastFinishedPulling="2025-11-29 07:55:03.643294468 +0000 UTC m=+949.618870268" observedRunningTime="2025-11-29 07:55:04.285736076 +0000 UTC m=+950.261311876" watchObservedRunningTime="2025-11-29 07:55:04.288097704 +0000 UTC m=+950.263673494"
Nov 29 07:55:07 crc kubenswrapper[4795]: I1129 07:55:07.318521 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-575bf4587d-fvfcs" event={"ID":"c5d428b2-eb39-4936-819d-08321d96d015","Type":"ContainerStarted","Data":"f5d979a704d8ad0d22b04ffb986babbbdbd9f096384eb8a6821fe842f8bf7061"}
Nov 29 07:55:07 crc kubenswrapper[4795]: I1129 07:55:07.319157 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-575bf4587d-fvfcs"
Nov 29 07:55:07 crc kubenswrapper[4795]: I1129 07:55:07.322042 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-575bf4587d-cslrg" event={"ID":"04a66bf0-d1a8-4bf7-85e4-8974cc247cd0","Type":"ContainerStarted","Data":"435b9c3cad70dfc16cc24fbb5c77aabb323b6af3375c2d08d19b81fc7429bc9b"}
Nov 29 07:55:07 crc kubenswrapper[4795]: I1129 07:55:07.322697 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-575bf4587d-cslrg"
Nov 29 07:55:07 crc kubenswrapper[4795]: I1129 07:55:07.322719 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-575bf4587d-cslrg"
Nov 29 07:55:07 crc kubenswrapper[4795]: I1129 07:55:07.328321 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-575bf4587d-cslrg"
Nov 29 07:55:07 crc kubenswrapper[4795]: I1129 07:55:07.330871 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-575bf4587d-fvfcs"
Nov 29 07:55:07 crc kubenswrapper[4795]: I1129 07:55:07.337914 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-575bf4587d-cslrg"
Nov 29 07:55:07 crc kubenswrapper[4795]: I1129 07:55:07.455000 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-575bf4587d-fvfcs" podStartSLOduration=2.379889199 podStartE2EDuration="9.45497611s" podCreationTimestamp="2025-11-29 07:54:58 +0000 UTC" firstStartedPulling="2025-11-29 07:54:59.358679045 +0000 UTC m=+945.334254835" lastFinishedPulling="2025-11-29 07:55:06.433765956 +0000 UTC m=+952.409341746" observedRunningTime="2025-11-29 07:55:07.442428982 +0000 UTC m=+953.418004782" watchObservedRunningTime="2025-11-29 07:55:07.45497611 +0000 UTC m=+953.430551910"
Nov 29 07:55:07 crc kubenswrapper[4795]: I1129 07:55:07.483183 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-575bf4587d-cslrg" podStartSLOduration=2.066859429 podStartE2EDuration="9.483149264s" podCreationTimestamp="2025-11-29 07:54:58 +0000 UTC" firstStartedPulling="2025-11-29 07:54:58.997718037 +0000 UTC m=+944.973293827" lastFinishedPulling="2025-11-29 07:55:06.414007872 +0000 UTC m=+952.389583662" observedRunningTime="2025-11-29 07:55:07.473871349 +0000 UTC m=+953.449447149" watchObservedRunningTime="2025-11-29 07:55:07.483149264 +0000 UTC m=+953.458725094"
Nov 29 07:55:08 crc kubenswrapper[4795]: I1129 07:55:08.329192 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-575bf4587d-fvfcs"
Nov 29 07:55:08 crc kubenswrapper[4795]: I1129 07:55:08.340438 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-575bf4587d-fvfcs"
Nov 29 07:55:18 crc kubenswrapper[4795]: I1129 07:55:18.211866 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-5895d59bb8-jqcx2"
Nov 29 07:55:19 crc kubenswrapper[4795]: I1129 07:55:19.222241 4795 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens
Nov 29 07:55:19 crc kubenswrapper[4795]: I1129 07:55:19.222696 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="ec05fb5e-2fc9-424a-a305-4ac1734df8d5" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Nov 29 07:55:19 crc kubenswrapper[4795]: I1129 07:55:19.295037 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0"
Nov 29 07:55:19 crc kubenswrapper[4795]: I1129 07:55:19.373982 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0"
Nov 29 07:55:28 crc kubenswrapper[4795]: I1129 07:55:28.085578 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-76cc67bf56-2j6fx"
Nov 29 07:55:28 crc kubenswrapper[4795]: I1129 07:55:28.298835 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-g2h7b"
Nov 29 07:55:29 crc kubenswrapper[4795]: I1129 07:55:29.221474 4795 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens
Nov 29 07:55:29 crc kubenswrapper[4795]: I1129 07:55:29.221566 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="ec05fb5e-2fc9-424a-a305-4ac1734df8d5" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Nov 29 07:55:39 crc kubenswrapper[4795]: I1129 07:55:39.220724 4795 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready
Nov 29 07:55:39 crc kubenswrapper[4795]: I1129 07:55:39.221228 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="ec05fb5e-2fc9-424a-a305-4ac1734df8d5" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Nov 29 07:55:49 crc kubenswrapper[4795]: I1129 07:55:49.219280 4795 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready
Nov 29 07:55:49 crc kubenswrapper[4795]: I1129 07:55:49.219903 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="ec05fb5e-2fc9-424a-a305-4ac1734df8d5" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Nov 29 07:55:59 crc kubenswrapper[4795]: I1129 07:55:59.220203 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0"
Nov 29 07:56:11 crc kubenswrapper[4795]: I1129 07:56:11.941250 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 29 07:56:11 crc kubenswrapper[4795]: I1129 07:56:11.941908 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 29 07:56:18 crc kubenswrapper[4795]: I1129 07:56:18.788221 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-s6lpk"]
Nov 29 07:56:18 crc kubenswrapper[4795]: I1129 07:56:18.790122 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-s6lpk"
Nov 29 07:56:18 crc kubenswrapper[4795]: I1129 07:56:18.794620 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver"
Nov 29 07:56:18 crc kubenswrapper[4795]: I1129 07:56:18.794819 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics"
Nov 29 07:56:18 crc kubenswrapper[4795]: I1129 07:56:18.795525 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-hlxfk"
Nov 29 07:56:18 crc kubenswrapper[4795]: I1129 07:56:18.795681 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token"
Nov 29 07:56:18 crc kubenswrapper[4795]: I1129 07:56:18.795942 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config"
Nov 29 07:56:18 crc kubenswrapper[4795]: I1129 07:56:18.804903 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle"
Nov 29 07:56:18 crc kubenswrapper[4795]: I1129 07:56:18.812323 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-s6lpk"]
Nov 29 07:56:18 crc kubenswrapper[4795]: I1129 07:56:18.901072 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/2a89a070-4385-43b8-a1a1-7d271d375228-collector-token\") pod \"collector-s6lpk\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " pod="openshift-logging/collector-s6lpk"
Nov 29 07:56:18 crc kubenswrapper[4795]: I1129 07:56:18.901123 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/2a89a070-4385-43b8-a1a1-7d271d375228-sa-token\") pod \"collector-s6lpk\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " pod="openshift-logging/collector-s6lpk"
Nov 29 07:56:18 crc kubenswrapper[4795]: I1129 07:56:18.901159 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/2a89a070-4385-43b8-a1a1-7d271d375228-datadir\") pod \"collector-s6lpk\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " pod="openshift-logging/collector-s6lpk"
Nov 29 07:56:18 crc kubenswrapper[4795]: I1129 07:56:18.901189 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/2a89a070-4385-43b8-a1a1-7d271d375228-config-openshift-service-cacrt\") pod \"collector-s6lpk\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " pod="openshift-logging/collector-s6lpk"
Nov 29 07:56:18 crc kubenswrapper[4795]: I1129 07:56:18.901218 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2a89a070-4385-43b8-a1a1-7d271d375228-tmp\") pod \"collector-s6lpk\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " pod="openshift-logging/collector-s6lpk"
Nov 29 07:56:18 crc kubenswrapper[4795]: I1129 07:56:18.901232 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a89a070-4385-43b8-a1a1-7d271d375228-config\") pod \"collector-s6lpk\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " pod="openshift-logging/collector-s6lpk"
Nov 29 07:56:18 crc kubenswrapper[4795]: I1129 07:56:18.901247 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2a89a070-4385-43b8-a1a1-7d271d375228-trusted-ca\") pod \"collector-s6lpk\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " pod="openshift-logging/collector-s6lpk"
Nov 29 07:56:18 crc kubenswrapper[4795]: I1129 07:56:18.901265 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/2a89a070-4385-43b8-a1a1-7d271d375228-entrypoint\") pod \"collector-s6lpk\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " pod="openshift-logging/collector-s6lpk"
Nov 29 07:56:18 crc kubenswrapper[4795]: I1129 07:56:18.901288 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/2a89a070-4385-43b8-a1a1-7d271d375228-metrics\") pod \"collector-s6lpk\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " pod="openshift-logging/collector-s6lpk"
Nov 29 07:56:18 crc kubenswrapper[4795]: I1129 07:56:18.901307 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tx8j\" (UniqueName: \"kubernetes.io/projected/2a89a070-4385-43b8-a1a1-7d271d375228-kube-api-access-4tx8j\") pod \"collector-s6lpk\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " pod="openshift-logging/collector-s6lpk"
Nov 29 07:56:18 crc kubenswrapper[4795]: I1129 07:56:18.901340 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/2a89a070-4385-43b8-a1a1-7d271d375228-collector-syslog-receiver\") pod \"collector-s6lpk\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " pod="openshift-logging/collector-s6lpk"
Nov 29 07:56:18 crc kubenswrapper[4795]: I1129 07:56:18.954912 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-s6lpk"]
Nov 29 07:56:18 crc kubenswrapper[4795]: E1129 07:56:18.955432 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-4tx8j metrics sa-token tmp
trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-s6lpk" podUID="2a89a070-4385-43b8-a1a1-7d271d375228" Nov 29 07:56:19 crc kubenswrapper[4795]: I1129 07:56:19.009528 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/2a89a070-4385-43b8-a1a1-7d271d375228-metrics\") pod \"collector-s6lpk\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " pod="openshift-logging/collector-s6lpk" Nov 29 07:56:19 crc kubenswrapper[4795]: I1129 07:56:19.009576 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tx8j\" (UniqueName: \"kubernetes.io/projected/2a89a070-4385-43b8-a1a1-7d271d375228-kube-api-access-4tx8j\") pod \"collector-s6lpk\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " pod="openshift-logging/collector-s6lpk" Nov 29 07:56:19 crc kubenswrapper[4795]: I1129 07:56:19.009634 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/2a89a070-4385-43b8-a1a1-7d271d375228-collector-syslog-receiver\") pod \"collector-s6lpk\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " pod="openshift-logging/collector-s6lpk" Nov 29 07:56:19 crc kubenswrapper[4795]: I1129 07:56:19.009689 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/2a89a070-4385-43b8-a1a1-7d271d375228-collector-token\") pod \"collector-s6lpk\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " pod="openshift-logging/collector-s6lpk" Nov 29 07:56:19 crc kubenswrapper[4795]: I1129 07:56:19.009708 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/2a89a070-4385-43b8-a1a1-7d271d375228-sa-token\") pod \"collector-s6lpk\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " 
pod="openshift-logging/collector-s6lpk" Nov 29 07:56:19 crc kubenswrapper[4795]: I1129 07:56:19.009740 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/2a89a070-4385-43b8-a1a1-7d271d375228-datadir\") pod \"collector-s6lpk\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " pod="openshift-logging/collector-s6lpk" Nov 29 07:56:19 crc kubenswrapper[4795]: I1129 07:56:19.009783 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/2a89a070-4385-43b8-a1a1-7d271d375228-config-openshift-service-cacrt\") pod \"collector-s6lpk\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " pod="openshift-logging/collector-s6lpk" Nov 29 07:56:19 crc kubenswrapper[4795]: I1129 07:56:19.009812 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2a89a070-4385-43b8-a1a1-7d271d375228-tmp\") pod \"collector-s6lpk\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " pod="openshift-logging/collector-s6lpk" Nov 29 07:56:19 crc kubenswrapper[4795]: I1129 07:56:19.009855 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a89a070-4385-43b8-a1a1-7d271d375228-config\") pod \"collector-s6lpk\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " pod="openshift-logging/collector-s6lpk" Nov 29 07:56:19 crc kubenswrapper[4795]: I1129 07:56:19.009872 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2a89a070-4385-43b8-a1a1-7d271d375228-trusted-ca\") pod \"collector-s6lpk\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " pod="openshift-logging/collector-s6lpk" Nov 29 07:56:19 crc kubenswrapper[4795]: I1129 07:56:19.009889 4795 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/2a89a070-4385-43b8-a1a1-7d271d375228-entrypoint\") pod \"collector-s6lpk\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " pod="openshift-logging/collector-s6lpk" Nov 29 07:56:19 crc kubenswrapper[4795]: I1129 07:56:19.010499 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/2a89a070-4385-43b8-a1a1-7d271d375228-datadir\") pod \"collector-s6lpk\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " pod="openshift-logging/collector-s6lpk" Nov 29 07:56:19 crc kubenswrapper[4795]: E1129 07:56:19.010755 4795 secret.go:188] Couldn't get secret openshift-logging/collector-syslog-receiver: secret "collector-syslog-receiver" not found Nov 29 07:56:19 crc kubenswrapper[4795]: E1129 07:56:19.010925 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2a89a070-4385-43b8-a1a1-7d271d375228-collector-syslog-receiver podName:2a89a070-4385-43b8-a1a1-7d271d375228 nodeName:}" failed. No retries permitted until 2025-11-29 07:56:19.510897099 +0000 UTC m=+1025.486472939 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "collector-syslog-receiver" (UniqueName: "kubernetes.io/secret/2a89a070-4385-43b8-a1a1-7d271d375228-collector-syslog-receiver") pod "collector-s6lpk" (UID: "2a89a070-4385-43b8-a1a1-7d271d375228") : secret "collector-syslog-receiver" not found Nov 29 07:56:19 crc kubenswrapper[4795]: I1129 07:56:19.011148 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/2a89a070-4385-43b8-a1a1-7d271d375228-config-openshift-service-cacrt\") pod \"collector-s6lpk\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " pod="openshift-logging/collector-s6lpk" Nov 29 07:56:19 crc kubenswrapper[4795]: I1129 07:56:19.011384 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a89a070-4385-43b8-a1a1-7d271d375228-config\") pod \"collector-s6lpk\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " pod="openshift-logging/collector-s6lpk" Nov 29 07:56:19 crc kubenswrapper[4795]: I1129 07:56:19.011557 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/2a89a070-4385-43b8-a1a1-7d271d375228-entrypoint\") pod \"collector-s6lpk\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " pod="openshift-logging/collector-s6lpk" Nov 29 07:56:19 crc kubenswrapper[4795]: I1129 07:56:19.012274 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2a89a070-4385-43b8-a1a1-7d271d375228-trusted-ca\") pod \"collector-s6lpk\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " pod="openshift-logging/collector-s6lpk" Nov 29 07:56:19 crc kubenswrapper[4795]: I1129 07:56:19.016408 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/2a89a070-4385-43b8-a1a1-7d271d375228-metrics\") pod \"collector-s6lpk\" (UID: 
\"2a89a070-4385-43b8-a1a1-7d271d375228\") " pod="openshift-logging/collector-s6lpk" Nov 29 07:56:19 crc kubenswrapper[4795]: I1129 07:56:19.028342 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tx8j\" (UniqueName: \"kubernetes.io/projected/2a89a070-4385-43b8-a1a1-7d271d375228-kube-api-access-4tx8j\") pod \"collector-s6lpk\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " pod="openshift-logging/collector-s6lpk" Nov 29 07:56:19 crc kubenswrapper[4795]: I1129 07:56:19.028997 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2a89a070-4385-43b8-a1a1-7d271d375228-tmp\") pod \"collector-s6lpk\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " pod="openshift-logging/collector-s6lpk" Nov 29 07:56:19 crc kubenswrapper[4795]: I1129 07:56:19.029448 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/2a89a070-4385-43b8-a1a1-7d271d375228-collector-token\") pod \"collector-s6lpk\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " pod="openshift-logging/collector-s6lpk" Nov 29 07:56:19 crc kubenswrapper[4795]: I1129 07:56:19.034048 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/2a89a070-4385-43b8-a1a1-7d271d375228-sa-token\") pod \"collector-s6lpk\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " pod="openshift-logging/collector-s6lpk" Nov 29 07:56:19 crc kubenswrapper[4795]: I1129 07:56:19.517285 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/2a89a070-4385-43b8-a1a1-7d271d375228-collector-syslog-receiver\") pod \"collector-s6lpk\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " pod="openshift-logging/collector-s6lpk" Nov 29 07:56:19 crc kubenswrapper[4795]: I1129 07:56:19.521191 4795 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/2a89a070-4385-43b8-a1a1-7d271d375228-collector-syslog-receiver\") pod \"collector-s6lpk\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " pod="openshift-logging/collector-s6lpk" Nov 29 07:56:19 crc kubenswrapper[4795]: I1129 07:56:19.861111 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-s6lpk" Nov 29 07:56:19 crc kubenswrapper[4795]: I1129 07:56:19.874797 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-s6lpk" Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.023933 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/2a89a070-4385-43b8-a1a1-7d271d375228-config-openshift-service-cacrt\") pod \"2a89a070-4385-43b8-a1a1-7d271d375228\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.023994 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2a89a070-4385-43b8-a1a1-7d271d375228-tmp\") pod \"2a89a070-4385-43b8-a1a1-7d271d375228\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.024055 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/2a89a070-4385-43b8-a1a1-7d271d375228-collector-token\") pod \"2a89a070-4385-43b8-a1a1-7d271d375228\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.024111 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/2a89a070-4385-43b8-a1a1-7d271d375228-entrypoint\") pod 
\"2a89a070-4385-43b8-a1a1-7d271d375228\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.024177 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/2a89a070-4385-43b8-a1a1-7d271d375228-collector-syslog-receiver\") pod \"2a89a070-4385-43b8-a1a1-7d271d375228\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.024202 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/2a89a070-4385-43b8-a1a1-7d271d375228-sa-token\") pod \"2a89a070-4385-43b8-a1a1-7d271d375228\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.024232 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2a89a070-4385-43b8-a1a1-7d271d375228-trusted-ca\") pod \"2a89a070-4385-43b8-a1a1-7d271d375228\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.024257 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/2a89a070-4385-43b8-a1a1-7d271d375228-metrics\") pod \"2a89a070-4385-43b8-a1a1-7d271d375228\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.024290 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/2a89a070-4385-43b8-a1a1-7d271d375228-datadir\") pod \"2a89a070-4385-43b8-a1a1-7d271d375228\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.024339 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2a89a070-4385-43b8-a1a1-7d271d375228-config\") pod \"2a89a070-4385-43b8-a1a1-7d271d375228\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.024368 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tx8j\" (UniqueName: \"kubernetes.io/projected/2a89a070-4385-43b8-a1a1-7d271d375228-kube-api-access-4tx8j\") pod \"2a89a070-4385-43b8-a1a1-7d271d375228\" (UID: \"2a89a070-4385-43b8-a1a1-7d271d375228\") " Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.024408 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a89a070-4385-43b8-a1a1-7d271d375228-datadir" (OuterVolumeSpecName: "datadir") pod "2a89a070-4385-43b8-a1a1-7d271d375228" (UID: "2a89a070-4385-43b8-a1a1-7d271d375228"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.024586 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a89a070-4385-43b8-a1a1-7d271d375228-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "2a89a070-4385-43b8-a1a1-7d271d375228" (UID: "2a89a070-4385-43b8-a1a1-7d271d375228"). InnerVolumeSpecName "config-openshift-service-cacrt". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.024855 4795 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/2a89a070-4385-43b8-a1a1-7d271d375228-datadir\") on node \"crc\" DevicePath \"\"" Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.024871 4795 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/2a89a070-4385-43b8-a1a1-7d271d375228-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.026538 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a89a070-4385-43b8-a1a1-7d271d375228-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2a89a070-4385-43b8-a1a1-7d271d375228" (UID: "2a89a070-4385-43b8-a1a1-7d271d375228"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.026732 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a89a070-4385-43b8-a1a1-7d271d375228-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "2a89a070-4385-43b8-a1a1-7d271d375228" (UID: "2a89a070-4385-43b8-a1a1-7d271d375228"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.027124 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a89a070-4385-43b8-a1a1-7d271d375228-config" (OuterVolumeSpecName: "config") pod "2a89a070-4385-43b8-a1a1-7d271d375228" (UID: "2a89a070-4385-43b8-a1a1-7d271d375228"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.027405 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a89a070-4385-43b8-a1a1-7d271d375228-tmp" (OuterVolumeSpecName: "tmp") pod "2a89a070-4385-43b8-a1a1-7d271d375228" (UID: "2a89a070-4385-43b8-a1a1-7d271d375228"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.028533 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a89a070-4385-43b8-a1a1-7d271d375228-metrics" (OuterVolumeSpecName: "metrics") pod "2a89a070-4385-43b8-a1a1-7d271d375228" (UID: "2a89a070-4385-43b8-a1a1-7d271d375228"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.028613 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a89a070-4385-43b8-a1a1-7d271d375228-sa-token" (OuterVolumeSpecName: "sa-token") pod "2a89a070-4385-43b8-a1a1-7d271d375228" (UID: "2a89a070-4385-43b8-a1a1-7d271d375228"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.029519 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a89a070-4385-43b8-a1a1-7d271d375228-kube-api-access-4tx8j" (OuterVolumeSpecName: "kube-api-access-4tx8j") pod "2a89a070-4385-43b8-a1a1-7d271d375228" (UID: "2a89a070-4385-43b8-a1a1-7d271d375228"). InnerVolumeSpecName "kube-api-access-4tx8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.030920 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a89a070-4385-43b8-a1a1-7d271d375228-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "2a89a070-4385-43b8-a1a1-7d271d375228" (UID: "2a89a070-4385-43b8-a1a1-7d271d375228"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.031394 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a89a070-4385-43b8-a1a1-7d271d375228-collector-token" (OuterVolumeSpecName: "collector-token") pod "2a89a070-4385-43b8-a1a1-7d271d375228" (UID: "2a89a070-4385-43b8-a1a1-7d271d375228"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.126176 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tx8j\" (UniqueName: \"kubernetes.io/projected/2a89a070-4385-43b8-a1a1-7d271d375228-kube-api-access-4tx8j\") on node \"crc\" DevicePath \"\"" Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.126218 4795 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2a89a070-4385-43b8-a1a1-7d271d375228-tmp\") on node \"crc\" DevicePath \"\"" Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.126232 4795 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/2a89a070-4385-43b8-a1a1-7d271d375228-collector-token\") on node \"crc\" DevicePath \"\"" Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.126244 4795 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/2a89a070-4385-43b8-a1a1-7d271d375228-entrypoint\") on node 
\"crc\" DevicePath \"\"" Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.126256 4795 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/2a89a070-4385-43b8-a1a1-7d271d375228-sa-token\") on node \"crc\" DevicePath \"\"" Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.126269 4795 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/2a89a070-4385-43b8-a1a1-7d271d375228-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.126280 4795 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2a89a070-4385-43b8-a1a1-7d271d375228-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.126291 4795 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/2a89a070-4385-43b8-a1a1-7d271d375228-metrics\") on node \"crc\" DevicePath \"\"" Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.126302 4795 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a89a070-4385-43b8-a1a1-7d271d375228-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.867380 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-s6lpk" Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.904391 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-s6lpk"] Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.911943 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-s6lpk"] Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.924184 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-plv7b"] Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.926466 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-plv7b" Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.929825 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.930068 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-hlxfk" Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.930536 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.930734 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.931786 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.937396 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-plv7b"] Nov 29 07:56:20 crc kubenswrapper[4795]: I1129 07:56:20.948706 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Nov 29 07:56:21 crc kubenswrapper[4795]: I1129 07:56:21.039558 4795 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6a9c50b-4559-45f6-a382-a236c88aa72e-config\") pod \"collector-plv7b\" (UID: \"a6a9c50b-4559-45f6-a382-a236c88aa72e\") " pod="openshift-logging/collector-plv7b" Nov 29 07:56:21 crc kubenswrapper[4795]: I1129 07:56:21.039647 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a6a9c50b-4559-45f6-a382-a236c88aa72e-tmp\") pod \"collector-plv7b\" (UID: \"a6a9c50b-4559-45f6-a382-a236c88aa72e\") " pod="openshift-logging/collector-plv7b" Nov 29 07:56:21 crc kubenswrapper[4795]: I1129 07:56:21.039665 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a6a9c50b-4559-45f6-a382-a236c88aa72e-trusted-ca\") pod \"collector-plv7b\" (UID: \"a6a9c50b-4559-45f6-a382-a236c88aa72e\") " pod="openshift-logging/collector-plv7b" Nov 29 07:56:21 crc kubenswrapper[4795]: I1129 07:56:21.039704 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/a6a9c50b-4559-45f6-a382-a236c88aa72e-sa-token\") pod \"collector-plv7b\" (UID: \"a6a9c50b-4559-45f6-a382-a236c88aa72e\") " pod="openshift-logging/collector-plv7b" Nov 29 07:56:21 crc kubenswrapper[4795]: I1129 07:56:21.039736 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/a6a9c50b-4559-45f6-a382-a236c88aa72e-config-openshift-service-cacrt\") pod \"collector-plv7b\" (UID: \"a6a9c50b-4559-45f6-a382-a236c88aa72e\") " pod="openshift-logging/collector-plv7b" Nov 29 07:56:21 crc kubenswrapper[4795]: I1129 07:56:21.039890 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/a6a9c50b-4559-45f6-a382-a236c88aa72e-datadir\") pod \"collector-plv7b\" (UID: \"a6a9c50b-4559-45f6-a382-a236c88aa72e\") " pod="openshift-logging/collector-plv7b" Nov 29 07:56:21 crc kubenswrapper[4795]: I1129 07:56:21.039958 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/a6a9c50b-4559-45f6-a382-a236c88aa72e-entrypoint\") pod \"collector-plv7b\" (UID: \"a6a9c50b-4559-45f6-a382-a236c88aa72e\") " pod="openshift-logging/collector-plv7b" Nov 29 07:56:21 crc kubenswrapper[4795]: I1129 07:56:21.040039 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/a6a9c50b-4559-45f6-a382-a236c88aa72e-collector-syslog-receiver\") pod \"collector-plv7b\" (UID: \"a6a9c50b-4559-45f6-a382-a236c88aa72e\") " pod="openshift-logging/collector-plv7b" Nov 29 07:56:21 crc kubenswrapper[4795]: I1129 07:56:21.040062 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rj7n\" (UniqueName: \"kubernetes.io/projected/a6a9c50b-4559-45f6-a382-a236c88aa72e-kube-api-access-5rj7n\") pod \"collector-plv7b\" (UID: \"a6a9c50b-4559-45f6-a382-a236c88aa72e\") " pod="openshift-logging/collector-plv7b" Nov 29 07:56:21 crc kubenswrapper[4795]: I1129 07:56:21.040083 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/a6a9c50b-4559-45f6-a382-a236c88aa72e-collector-token\") pod \"collector-plv7b\" (UID: \"a6a9c50b-4559-45f6-a382-a236c88aa72e\") " pod="openshift-logging/collector-plv7b" Nov 29 07:56:21 crc kubenswrapper[4795]: I1129 07:56:21.040137 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" 
(UniqueName: \"kubernetes.io/secret/a6a9c50b-4559-45f6-a382-a236c88aa72e-metrics\") pod \"collector-plv7b\" (UID: \"a6a9c50b-4559-45f6-a382-a236c88aa72e\") " pod="openshift-logging/collector-plv7b" Nov 29 07:56:21 crc kubenswrapper[4795]: I1129 07:56:21.141963 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/a6a9c50b-4559-45f6-a382-a236c88aa72e-datadir\") pod \"collector-plv7b\" (UID: \"a6a9c50b-4559-45f6-a382-a236c88aa72e\") " pod="openshift-logging/collector-plv7b" Nov 29 07:56:21 crc kubenswrapper[4795]: I1129 07:56:21.142264 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/a6a9c50b-4559-45f6-a382-a236c88aa72e-entrypoint\") pod \"collector-plv7b\" (UID: \"a6a9c50b-4559-45f6-a382-a236c88aa72e\") " pod="openshift-logging/collector-plv7b" Nov 29 07:56:21 crc kubenswrapper[4795]: I1129 07:56:21.142333 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/a6a9c50b-4559-45f6-a382-a236c88aa72e-collector-syslog-receiver\") pod \"collector-plv7b\" (UID: \"a6a9c50b-4559-45f6-a382-a236c88aa72e\") " pod="openshift-logging/collector-plv7b" Nov 29 07:56:21 crc kubenswrapper[4795]: I1129 07:56:21.142361 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rj7n\" (UniqueName: \"kubernetes.io/projected/a6a9c50b-4559-45f6-a382-a236c88aa72e-kube-api-access-5rj7n\") pod \"collector-plv7b\" (UID: \"a6a9c50b-4559-45f6-a382-a236c88aa72e\") " pod="openshift-logging/collector-plv7b" Nov 29 07:56:21 crc kubenswrapper[4795]: I1129 07:56:21.142384 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/a6a9c50b-4559-45f6-a382-a236c88aa72e-collector-token\") pod \"collector-plv7b\" (UID: 
\"a6a9c50b-4559-45f6-a382-a236c88aa72e\") " pod="openshift-logging/collector-plv7b" Nov 29 07:56:21 crc kubenswrapper[4795]: I1129 07:56:21.142416 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/a6a9c50b-4559-45f6-a382-a236c88aa72e-metrics\") pod \"collector-plv7b\" (UID: \"a6a9c50b-4559-45f6-a382-a236c88aa72e\") " pod="openshift-logging/collector-plv7b" Nov 29 07:56:21 crc kubenswrapper[4795]: I1129 07:56:21.142437 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6a9c50b-4559-45f6-a382-a236c88aa72e-config\") pod \"collector-plv7b\" (UID: \"a6a9c50b-4559-45f6-a382-a236c88aa72e\") " pod="openshift-logging/collector-plv7b" Nov 29 07:56:21 crc kubenswrapper[4795]: I1129 07:56:21.142465 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a6a9c50b-4559-45f6-a382-a236c88aa72e-tmp\") pod \"collector-plv7b\" (UID: \"a6a9c50b-4559-45f6-a382-a236c88aa72e\") " pod="openshift-logging/collector-plv7b" Nov 29 07:56:21 crc kubenswrapper[4795]: I1129 07:56:21.142484 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a6a9c50b-4559-45f6-a382-a236c88aa72e-trusted-ca\") pod \"collector-plv7b\" (UID: \"a6a9c50b-4559-45f6-a382-a236c88aa72e\") " pod="openshift-logging/collector-plv7b" Nov 29 07:56:21 crc kubenswrapper[4795]: I1129 07:56:21.142526 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/a6a9c50b-4559-45f6-a382-a236c88aa72e-sa-token\") pod \"collector-plv7b\" (UID: \"a6a9c50b-4559-45f6-a382-a236c88aa72e\") " pod="openshift-logging/collector-plv7b" Nov 29 07:56:21 crc kubenswrapper[4795]: I1129 07:56:21.142564 4795 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/a6a9c50b-4559-45f6-a382-a236c88aa72e-config-openshift-service-cacrt\") pod \"collector-plv7b\" (UID: \"a6a9c50b-4559-45f6-a382-a236c88aa72e\") " pod="openshift-logging/collector-plv7b" Nov 29 07:56:21 crc kubenswrapper[4795]: I1129 07:56:21.143346 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/a6a9c50b-4559-45f6-a382-a236c88aa72e-config-openshift-service-cacrt\") pod \"collector-plv7b\" (UID: \"a6a9c50b-4559-45f6-a382-a236c88aa72e\") " pod="openshift-logging/collector-plv7b" Nov 29 07:56:21 crc kubenswrapper[4795]: I1129 07:56:21.142097 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/a6a9c50b-4559-45f6-a382-a236c88aa72e-datadir\") pod \"collector-plv7b\" (UID: \"a6a9c50b-4559-45f6-a382-a236c88aa72e\") " pod="openshift-logging/collector-plv7b" Nov 29 07:56:21 crc kubenswrapper[4795]: I1129 07:56:21.144684 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6a9c50b-4559-45f6-a382-a236c88aa72e-config\") pod \"collector-plv7b\" (UID: \"a6a9c50b-4559-45f6-a382-a236c88aa72e\") " pod="openshift-logging/collector-plv7b" Nov 29 07:56:21 crc kubenswrapper[4795]: I1129 07:56:21.145142 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a6a9c50b-4559-45f6-a382-a236c88aa72e-trusted-ca\") pod \"collector-plv7b\" (UID: \"a6a9c50b-4559-45f6-a382-a236c88aa72e\") " pod="openshift-logging/collector-plv7b" Nov 29 07:56:21 crc kubenswrapper[4795]: I1129 07:56:21.145865 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/a6a9c50b-4559-45f6-a382-a236c88aa72e-entrypoint\") pod \"collector-plv7b\" (UID: 
\"a6a9c50b-4559-45f6-a382-a236c88aa72e\") " pod="openshift-logging/collector-plv7b" Nov 29 07:56:21 crc kubenswrapper[4795]: I1129 07:56:21.147182 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/a6a9c50b-4559-45f6-a382-a236c88aa72e-metrics\") pod \"collector-plv7b\" (UID: \"a6a9c50b-4559-45f6-a382-a236c88aa72e\") " pod="openshift-logging/collector-plv7b" Nov 29 07:56:21 crc kubenswrapper[4795]: I1129 07:56:21.148573 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/a6a9c50b-4559-45f6-a382-a236c88aa72e-collector-token\") pod \"collector-plv7b\" (UID: \"a6a9c50b-4559-45f6-a382-a236c88aa72e\") " pod="openshift-logging/collector-plv7b" Nov 29 07:56:21 crc kubenswrapper[4795]: I1129 07:56:21.150123 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a6a9c50b-4559-45f6-a382-a236c88aa72e-tmp\") pod \"collector-plv7b\" (UID: \"a6a9c50b-4559-45f6-a382-a236c88aa72e\") " pod="openshift-logging/collector-plv7b" Nov 29 07:56:21 crc kubenswrapper[4795]: I1129 07:56:21.150804 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/a6a9c50b-4559-45f6-a382-a236c88aa72e-collector-syslog-receiver\") pod \"collector-plv7b\" (UID: \"a6a9c50b-4559-45f6-a382-a236c88aa72e\") " pod="openshift-logging/collector-plv7b" Nov 29 07:56:21 crc kubenswrapper[4795]: I1129 07:56:21.160650 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rj7n\" (UniqueName: \"kubernetes.io/projected/a6a9c50b-4559-45f6-a382-a236c88aa72e-kube-api-access-5rj7n\") pod \"collector-plv7b\" (UID: \"a6a9c50b-4559-45f6-a382-a236c88aa72e\") " pod="openshift-logging/collector-plv7b" Nov 29 07:56:21 crc kubenswrapper[4795]: I1129 07:56:21.162952 4795 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/a6a9c50b-4559-45f6-a382-a236c88aa72e-sa-token\") pod \"collector-plv7b\" (UID: \"a6a9c50b-4559-45f6-a382-a236c88aa72e\") " pod="openshift-logging/collector-plv7b" Nov 29 07:56:21 crc kubenswrapper[4795]: I1129 07:56:21.262331 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-plv7b" Nov 29 07:56:21 crc kubenswrapper[4795]: I1129 07:56:21.654442 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-plv7b"] Nov 29 07:56:21 crc kubenswrapper[4795]: W1129 07:56:21.661843 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6a9c50b_4559_45f6_a382_a236c88aa72e.slice/crio-ec770ca5c7b38fcb3b7651aae44112a936d57ae0b025b351cd351ccfa3d3dc22 WatchSource:0}: Error finding container ec770ca5c7b38fcb3b7651aae44112a936d57ae0b025b351cd351ccfa3d3dc22: Status 404 returned error can't find the container with id ec770ca5c7b38fcb3b7651aae44112a936d57ae0b025b351cd351ccfa3d3dc22 Nov 29 07:56:21 crc kubenswrapper[4795]: I1129 07:56:21.874138 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-plv7b" event={"ID":"a6a9c50b-4559-45f6-a382-a236c88aa72e","Type":"ContainerStarted","Data":"ec770ca5c7b38fcb3b7651aae44112a936d57ae0b025b351cd351ccfa3d3dc22"} Nov 29 07:56:22 crc kubenswrapper[4795]: I1129 07:56:22.288259 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a89a070-4385-43b8-a1a1-7d271d375228" path="/var/lib/kubelet/pods/2a89a070-4385-43b8-a1a1-7d271d375228/volumes" Nov 29 07:56:30 crc kubenswrapper[4795]: I1129 07:56:29.656377 4795 patch_prober.go:28] interesting pod/thanos-querier-6b77f7dd4f-cv6b5 container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.75:9091/-/ready\": net/http: request canceled 
while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 29 07:56:30 crc kubenswrapper[4795]: I1129 07:56:29.657144 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-6b77f7dd4f-cv6b5" podUID="537c19bc-51da-4b65-9baa-176ac3225a2d" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.75:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 29 07:56:30 crc kubenswrapper[4795]: I1129 07:56:29.682580 4795 patch_prober.go:28] interesting pod/logging-loki-gateway-575bf4587d-fvfcs container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 29 07:56:30 crc kubenswrapper[4795]: I1129 07:56:29.685656 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-575bf4587d-fvfcs" podUID="c5d428b2-eb39-4936-819d-08321d96d015" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 29 07:56:30 crc kubenswrapper[4795]: I1129 07:56:30.713069 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-plv7b" event={"ID":"a6a9c50b-4559-45f6-a382-a236c88aa72e","Type":"ContainerStarted","Data":"1aa011f9847231d9c05a87a1063b9b613289218b4dbbe91cb4a7dc58598d3630"} Nov 29 07:56:30 crc kubenswrapper[4795]: I1129 07:56:30.740230 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-plv7b" podStartSLOduration=2.068638144 podStartE2EDuration="10.740212638s" podCreationTimestamp="2025-11-29 07:56:20 +0000 UTC" firstStartedPulling="2025-11-29 07:56:21.663365007 
+0000 UTC m=+1027.638940797" lastFinishedPulling="2025-11-29 07:56:30.334939491 +0000 UTC m=+1036.310515291" observedRunningTime="2025-11-29 07:56:30.736979936 +0000 UTC m=+1036.712555746" watchObservedRunningTime="2025-11-29 07:56:30.740212638 +0000 UTC m=+1036.715788428" Nov 29 07:56:41 crc kubenswrapper[4795]: I1129 07:56:41.941140 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:56:41 crc kubenswrapper[4795]: I1129 07:56:41.941758 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:56:56 crc kubenswrapper[4795]: I1129 07:56:56.792504 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88"] Nov 29 07:56:56 crc kubenswrapper[4795]: I1129 07:56:56.794392 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88" Nov 29 07:56:56 crc kubenswrapper[4795]: I1129 07:56:56.797078 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 29 07:56:56 crc kubenswrapper[4795]: I1129 07:56:56.809914 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88"] Nov 29 07:56:56 crc kubenswrapper[4795]: I1129 07:56:56.923971 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0830d1da-b370-4d0d-8418-7dca445d0ca5-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88\" (UID: \"0830d1da-b370-4d0d-8418-7dca445d0ca5\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88" Nov 29 07:56:56 crc kubenswrapper[4795]: I1129 07:56:56.924039 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc7rf\" (UniqueName: \"kubernetes.io/projected/0830d1da-b370-4d0d-8418-7dca445d0ca5-kube-api-access-qc7rf\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88\" (UID: \"0830d1da-b370-4d0d-8418-7dca445d0ca5\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88" Nov 29 07:56:56 crc kubenswrapper[4795]: I1129 07:56:56.924299 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0830d1da-b370-4d0d-8418-7dca445d0ca5-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88\" (UID: \"0830d1da-b370-4d0d-8418-7dca445d0ca5\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88" Nov 29 07:56:57 crc kubenswrapper[4795]: 
I1129 07:56:57.026007 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0830d1da-b370-4d0d-8418-7dca445d0ca5-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88\" (UID: \"0830d1da-b370-4d0d-8418-7dca445d0ca5\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88" Nov 29 07:56:57 crc kubenswrapper[4795]: I1129 07:56:57.026117 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0830d1da-b370-4d0d-8418-7dca445d0ca5-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88\" (UID: \"0830d1da-b370-4d0d-8418-7dca445d0ca5\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88" Nov 29 07:56:57 crc kubenswrapper[4795]: I1129 07:56:57.026154 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc7rf\" (UniqueName: \"kubernetes.io/projected/0830d1da-b370-4d0d-8418-7dca445d0ca5-kube-api-access-qc7rf\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88\" (UID: \"0830d1da-b370-4d0d-8418-7dca445d0ca5\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88" Nov 29 07:56:57 crc kubenswrapper[4795]: I1129 07:56:57.026558 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0830d1da-b370-4d0d-8418-7dca445d0ca5-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88\" (UID: \"0830d1da-b370-4d0d-8418-7dca445d0ca5\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88" Nov 29 07:56:57 crc kubenswrapper[4795]: I1129 07:56:57.026582 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/0830d1da-b370-4d0d-8418-7dca445d0ca5-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88\" (UID: \"0830d1da-b370-4d0d-8418-7dca445d0ca5\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88" Nov 29 07:56:57 crc kubenswrapper[4795]: I1129 07:56:57.045941 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc7rf\" (UniqueName: \"kubernetes.io/projected/0830d1da-b370-4d0d-8418-7dca445d0ca5-kube-api-access-qc7rf\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88\" (UID: \"0830d1da-b370-4d0d-8418-7dca445d0ca5\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88" Nov 29 07:56:57 crc kubenswrapper[4795]: I1129 07:56:57.112899 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88" Nov 29 07:56:57 crc kubenswrapper[4795]: I1129 07:56:57.637614 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88"] Nov 29 07:56:57 crc kubenswrapper[4795]: I1129 07:56:57.912819 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88" event={"ID":"0830d1da-b370-4d0d-8418-7dca445d0ca5","Type":"ContainerStarted","Data":"766b2a83fce9c374610e38cb7620e2009b9a85a16cad94a224260d97d8b500b5"} Nov 29 07:56:57 crc kubenswrapper[4795]: I1129 07:56:57.913239 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88" event={"ID":"0830d1da-b370-4d0d-8418-7dca445d0ca5","Type":"ContainerStarted","Data":"0a0969041dbb8602ae635344926fa4fde2fd39e57603864929fd10343accdfc7"} Nov 29 07:56:58 crc kubenswrapper[4795]: I1129 07:56:58.921877 4795 
generic.go:334] "Generic (PLEG): container finished" podID="0830d1da-b370-4d0d-8418-7dca445d0ca5" containerID="766b2a83fce9c374610e38cb7620e2009b9a85a16cad94a224260d97d8b500b5" exitCode=0 Nov 29 07:56:58 crc kubenswrapper[4795]: I1129 07:56:58.922094 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88" event={"ID":"0830d1da-b370-4d0d-8418-7dca445d0ca5","Type":"ContainerDied","Data":"766b2a83fce9c374610e38cb7620e2009b9a85a16cad94a224260d97d8b500b5"} Nov 29 07:57:00 crc kubenswrapper[4795]: I1129 07:57:00.936928 4795 generic.go:334] "Generic (PLEG): container finished" podID="0830d1da-b370-4d0d-8418-7dca445d0ca5" containerID="106a53742b169dbef1cf1013d9881563ac3ab8a199a77d33641f7b7831b2ad7f" exitCode=0 Nov 29 07:57:00 crc kubenswrapper[4795]: I1129 07:57:00.937026 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88" event={"ID":"0830d1da-b370-4d0d-8418-7dca445d0ca5","Type":"ContainerDied","Data":"106a53742b169dbef1cf1013d9881563ac3ab8a199a77d33641f7b7831b2ad7f"} Nov 29 07:57:01 crc kubenswrapper[4795]: I1129 07:57:01.945693 4795 generic.go:334] "Generic (PLEG): container finished" podID="0830d1da-b370-4d0d-8418-7dca445d0ca5" containerID="cb76e950b5e2eee765c2aa4c54bd80dfe5d643b8c9d6a338872a46e13dd254c5" exitCode=0 Nov 29 07:57:01 crc kubenswrapper[4795]: I1129 07:57:01.945812 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88" event={"ID":"0830d1da-b370-4d0d-8418-7dca445d0ca5","Type":"ContainerDied","Data":"cb76e950b5e2eee765c2aa4c54bd80dfe5d643b8c9d6a338872a46e13dd254c5"} Nov 29 07:57:03 crc kubenswrapper[4795]: I1129 07:57:03.261114 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88" Nov 29 07:57:03 crc kubenswrapper[4795]: I1129 07:57:03.432019 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0830d1da-b370-4d0d-8418-7dca445d0ca5-util\") pod \"0830d1da-b370-4d0d-8418-7dca445d0ca5\" (UID: \"0830d1da-b370-4d0d-8418-7dca445d0ca5\") " Nov 29 07:57:03 crc kubenswrapper[4795]: I1129 07:57:03.432115 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qc7rf\" (UniqueName: \"kubernetes.io/projected/0830d1da-b370-4d0d-8418-7dca445d0ca5-kube-api-access-qc7rf\") pod \"0830d1da-b370-4d0d-8418-7dca445d0ca5\" (UID: \"0830d1da-b370-4d0d-8418-7dca445d0ca5\") " Nov 29 07:57:03 crc kubenswrapper[4795]: I1129 07:57:03.432245 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0830d1da-b370-4d0d-8418-7dca445d0ca5-bundle\") pod \"0830d1da-b370-4d0d-8418-7dca445d0ca5\" (UID: \"0830d1da-b370-4d0d-8418-7dca445d0ca5\") " Nov 29 07:57:03 crc kubenswrapper[4795]: I1129 07:57:03.432857 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0830d1da-b370-4d0d-8418-7dca445d0ca5-bundle" (OuterVolumeSpecName: "bundle") pod "0830d1da-b370-4d0d-8418-7dca445d0ca5" (UID: "0830d1da-b370-4d0d-8418-7dca445d0ca5"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:57:03 crc kubenswrapper[4795]: I1129 07:57:03.433092 4795 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0830d1da-b370-4d0d-8418-7dca445d0ca5-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:57:03 crc kubenswrapper[4795]: I1129 07:57:03.447773 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0830d1da-b370-4d0d-8418-7dca445d0ca5-kube-api-access-qc7rf" (OuterVolumeSpecName: "kube-api-access-qc7rf") pod "0830d1da-b370-4d0d-8418-7dca445d0ca5" (UID: "0830d1da-b370-4d0d-8418-7dca445d0ca5"). InnerVolumeSpecName "kube-api-access-qc7rf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:57:03 crc kubenswrapper[4795]: I1129 07:57:03.534423 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qc7rf\" (UniqueName: \"kubernetes.io/projected/0830d1da-b370-4d0d-8418-7dca445d0ca5-kube-api-access-qc7rf\") on node \"crc\" DevicePath \"\"" Nov 29 07:57:03 crc kubenswrapper[4795]: I1129 07:57:03.605856 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0830d1da-b370-4d0d-8418-7dca445d0ca5-util" (OuterVolumeSpecName: "util") pod "0830d1da-b370-4d0d-8418-7dca445d0ca5" (UID: "0830d1da-b370-4d0d-8418-7dca445d0ca5"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:57:03 crc kubenswrapper[4795]: I1129 07:57:03.635449 4795 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0830d1da-b370-4d0d-8418-7dca445d0ca5-util\") on node \"crc\" DevicePath \"\"" Nov 29 07:57:03 crc kubenswrapper[4795]: I1129 07:57:03.967372 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88" event={"ID":"0830d1da-b370-4d0d-8418-7dca445d0ca5","Type":"ContainerDied","Data":"0a0969041dbb8602ae635344926fa4fde2fd39e57603864929fd10343accdfc7"} Nov 29 07:57:03 crc kubenswrapper[4795]: I1129 07:57:03.967411 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88" Nov 29 07:57:03 crc kubenswrapper[4795]: I1129 07:57:03.967417 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a0969041dbb8602ae635344926fa4fde2fd39e57603864929fd10343accdfc7" Nov 29 07:57:06 crc kubenswrapper[4795]: I1129 07:57:06.917138 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-7c5mp"] Nov 29 07:57:06 crc kubenswrapper[4795]: E1129 07:57:06.917701 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0830d1da-b370-4d0d-8418-7dca445d0ca5" containerName="pull" Nov 29 07:57:06 crc kubenswrapper[4795]: I1129 07:57:06.917713 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="0830d1da-b370-4d0d-8418-7dca445d0ca5" containerName="pull" Nov 29 07:57:06 crc kubenswrapper[4795]: E1129 07:57:06.917727 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0830d1da-b370-4d0d-8418-7dca445d0ca5" containerName="extract" Nov 29 07:57:06 crc kubenswrapper[4795]: I1129 07:57:06.917733 4795 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0830d1da-b370-4d0d-8418-7dca445d0ca5" containerName="extract" Nov 29 07:57:06 crc kubenswrapper[4795]: E1129 07:57:06.917750 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0830d1da-b370-4d0d-8418-7dca445d0ca5" containerName="util" Nov 29 07:57:06 crc kubenswrapper[4795]: I1129 07:57:06.917757 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="0830d1da-b370-4d0d-8418-7dca445d0ca5" containerName="util" Nov 29 07:57:06 crc kubenswrapper[4795]: I1129 07:57:06.917876 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="0830d1da-b370-4d0d-8418-7dca445d0ca5" containerName="extract" Nov 29 07:57:06 crc kubenswrapper[4795]: I1129 07:57:06.918362 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-7c5mp" Nov 29 07:57:06 crc kubenswrapper[4795]: I1129 07:57:06.920955 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 29 07:57:06 crc kubenswrapper[4795]: I1129 07:57:06.921208 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-xkwqk" Nov 29 07:57:06 crc kubenswrapper[4795]: I1129 07:57:06.926147 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 29 07:57:06 crc kubenswrapper[4795]: I1129 07:57:06.928878 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-7c5mp"] Nov 29 07:57:07 crc kubenswrapper[4795]: I1129 07:57:07.087159 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrvw6\" (UniqueName: \"kubernetes.io/projected/cadc9dc9-f67d-440d-9169-9f7816d26a56-kube-api-access-jrvw6\") pod \"nmstate-operator-5b5b58f5c8-7c5mp\" (UID: \"cadc9dc9-f67d-440d-9169-9f7816d26a56\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-7c5mp" Nov 29 
07:57:07 crc kubenswrapper[4795]: I1129 07:57:07.188559 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrvw6\" (UniqueName: \"kubernetes.io/projected/cadc9dc9-f67d-440d-9169-9f7816d26a56-kube-api-access-jrvw6\") pod \"nmstate-operator-5b5b58f5c8-7c5mp\" (UID: \"cadc9dc9-f67d-440d-9169-9f7816d26a56\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-7c5mp" Nov 29 07:57:07 crc kubenswrapper[4795]: I1129 07:57:07.214226 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrvw6\" (UniqueName: \"kubernetes.io/projected/cadc9dc9-f67d-440d-9169-9f7816d26a56-kube-api-access-jrvw6\") pod \"nmstate-operator-5b5b58f5c8-7c5mp\" (UID: \"cadc9dc9-f67d-440d-9169-9f7816d26a56\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-7c5mp" Nov 29 07:57:07 crc kubenswrapper[4795]: I1129 07:57:07.234946 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-7c5mp" Nov 29 07:57:07 crc kubenswrapper[4795]: I1129 07:57:07.635336 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-7c5mp"] Nov 29 07:57:07 crc kubenswrapper[4795]: I1129 07:57:07.636958 4795 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 07:57:07 crc kubenswrapper[4795]: I1129 07:57:07.997139 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-7c5mp" event={"ID":"cadc9dc9-f67d-440d-9169-9f7816d26a56","Type":"ContainerStarted","Data":"66c1b1475aa4dbef5a59056e462a0ae30527b086f5178412eafa79bc7981d36d"} Nov 29 07:57:11 crc kubenswrapper[4795]: I1129 07:57:11.023682 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-7c5mp" 
event={"ID":"cadc9dc9-f67d-440d-9169-9f7816d26a56","Type":"ContainerStarted","Data":"5aa2eb953b302aa0aff2c40160e20af65247adf1165b6c13e379899cffff02e5"} Nov 29 07:57:11 crc kubenswrapper[4795]: I1129 07:57:11.049337 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-7c5mp" podStartSLOduration=2.175015608 podStartE2EDuration="5.049321511s" podCreationTimestamp="2025-11-29 07:57:06 +0000 UTC" firstStartedPulling="2025-11-29 07:57:07.636628736 +0000 UTC m=+1073.612204526" lastFinishedPulling="2025-11-29 07:57:10.510934639 +0000 UTC m=+1076.486510429" observedRunningTime="2025-11-29 07:57:11.048090086 +0000 UTC m=+1077.023665916" watchObservedRunningTime="2025-11-29 07:57:11.049321511 +0000 UTC m=+1077.024897301" Nov 29 07:57:11 crc kubenswrapper[4795]: I1129 07:57:11.941498 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:57:11 crc kubenswrapper[4795]: I1129 07:57:11.941562 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:57:11 crc kubenswrapper[4795]: I1129 07:57:11.941642 4795 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" Nov 29 07:57:11 crc kubenswrapper[4795]: I1129 07:57:11.942274 4795 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"ab335149a24428cc9c0c5e9165d87fbe177cd1f8c86a0c5a601208bae120d8f2"} pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:57:11 crc kubenswrapper[4795]: I1129 07:57:11.942325 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" containerID="cri-o://ab335149a24428cc9c0c5e9165d87fbe177cd1f8c86a0c5a601208bae120d8f2" gracePeriod=600 Nov 29 07:57:13 crc kubenswrapper[4795]: I1129 07:57:13.043433 4795 generic.go:334] "Generic (PLEG): container finished" podID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerID="ab335149a24428cc9c0c5e9165d87fbe177cd1f8c86a0c5a601208bae120d8f2" exitCode=0 Nov 29 07:57:13 crc kubenswrapper[4795]: I1129 07:57:13.043519 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerDied","Data":"ab335149a24428cc9c0c5e9165d87fbe177cd1f8c86a0c5a601208bae120d8f2"} Nov 29 07:57:13 crc kubenswrapper[4795]: I1129 07:57:13.044329 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerStarted","Data":"d9f96d395ac312701f390d1e390e800e858ea785b49bf9e565d383c5df5e5a12"} Nov 29 07:57:13 crc kubenswrapper[4795]: I1129 07:57:13.044362 4795 scope.go:117] "RemoveContainer" containerID="50234306d145892f3e442365375a5d5625f50018f93ebd19c811e44bda9ed58d" Nov 29 07:57:17 crc kubenswrapper[4795]: I1129 07:57:17.846361 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-jk8vq"] Nov 29 07:57:17 crc kubenswrapper[4795]: I1129 
07:57:17.848213 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-jk8vq" Nov 29 07:57:17 crc kubenswrapper[4795]: I1129 07:57:17.852077 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-xnxdx" Nov 29 07:57:17 crc kubenswrapper[4795]: I1129 07:57:17.852331 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 29 07:57:17 crc kubenswrapper[4795]: I1129 07:57:17.861645 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-f8cgx"] Nov 29 07:57:17 crc kubenswrapper[4795]: I1129 07:57:17.862822 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-f8cgx" Nov 29 07:57:17 crc kubenswrapper[4795]: I1129 07:57:17.866902 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-jk8vq"] Nov 29 07:57:17 crc kubenswrapper[4795]: I1129 07:57:17.885489 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-f8cgx"] Nov 29 07:57:17 crc kubenswrapper[4795]: I1129 07:57:17.894167 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-c5pwq"] Nov 29 07:57:17 crc kubenswrapper[4795]: I1129 07:57:17.895248 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-c5pwq" Nov 29 07:57:17 crc kubenswrapper[4795]: I1129 07:57:17.975921 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsnth\" (UniqueName: \"kubernetes.io/projected/dcd14fe7-954f-445a-bd8d-0a62399e71d5-kube-api-access-vsnth\") pod \"nmstate-webhook-5f6d4c5ccb-jk8vq\" (UID: \"dcd14fe7-954f-445a-bd8d-0a62399e71d5\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-jk8vq" Nov 29 07:57:17 crc kubenswrapper[4795]: I1129 07:57:17.976001 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgtbz\" (UniqueName: \"kubernetes.io/projected/471b40b9-dbc5-467e-abd1-18e64ea6a111-kube-api-access-pgtbz\") pod \"nmstate-metrics-7f946cbc9-f8cgx\" (UID: \"471b40b9-dbc5-467e-abd1-18e64ea6a111\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-f8cgx" Nov 29 07:57:17 crc kubenswrapper[4795]: I1129 07:57:17.976032 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/dcd14fe7-954f-445a-bd8d-0a62399e71d5-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-jk8vq\" (UID: \"dcd14fe7-954f-445a-bd8d-0a62399e71d5\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-jk8vq" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.010835 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-qxwvv"] Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.011803 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-qxwvv" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.015917 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.016065 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.016303 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-qxwvv"] Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.016457 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-fsgck" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.078539 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgtbz\" (UniqueName: \"kubernetes.io/projected/471b40b9-dbc5-467e-abd1-18e64ea6a111-kube-api-access-pgtbz\") pod \"nmstate-metrics-7f946cbc9-f8cgx\" (UID: \"471b40b9-dbc5-467e-abd1-18e64ea6a111\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-f8cgx" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.078607 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/dcd14fe7-954f-445a-bd8d-0a62399e71d5-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-jk8vq\" (UID: \"dcd14fe7-954f-445a-bd8d-0a62399e71d5\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-jk8vq" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.078637 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/b1c76aa0-5bd2-4df9-8555-83bb44cb23b7-dbus-socket\") pod \"nmstate-handler-c5pwq\" (UID: \"b1c76aa0-5bd2-4df9-8555-83bb44cb23b7\") " pod="openshift-nmstate/nmstate-handler-c5pwq" 
Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.078652 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/b1c76aa0-5bd2-4df9-8555-83bb44cb23b7-ovs-socket\") pod \"nmstate-handler-c5pwq\" (UID: \"b1c76aa0-5bd2-4df9-8555-83bb44cb23b7\") " pod="openshift-nmstate/nmstate-handler-c5pwq" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.078711 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/127f1845-59b8-4b9f-9702-2aae122b06e3-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-qxwvv\" (UID: \"127f1845-59b8-4b9f-9702-2aae122b06e3\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-qxwvv" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.078732 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89ttk\" (UniqueName: \"kubernetes.io/projected/127f1845-59b8-4b9f-9702-2aae122b06e3-kube-api-access-89ttk\") pod \"nmstate-console-plugin-7fbb5f6569-qxwvv\" (UID: \"127f1845-59b8-4b9f-9702-2aae122b06e3\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-qxwvv" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.078749 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/127f1845-59b8-4b9f-9702-2aae122b06e3-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-qxwvv\" (UID: \"127f1845-59b8-4b9f-9702-2aae122b06e3\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-qxwvv" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.078775 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/b1c76aa0-5bd2-4df9-8555-83bb44cb23b7-nmstate-lock\") pod 
\"nmstate-handler-c5pwq\" (UID: \"b1c76aa0-5bd2-4df9-8555-83bb44cb23b7\") " pod="openshift-nmstate/nmstate-handler-c5pwq" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.078800 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsnth\" (UniqueName: \"kubernetes.io/projected/dcd14fe7-954f-445a-bd8d-0a62399e71d5-kube-api-access-vsnth\") pod \"nmstate-webhook-5f6d4c5ccb-jk8vq\" (UID: \"dcd14fe7-954f-445a-bd8d-0a62399e71d5\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-jk8vq" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.078825 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bntht\" (UniqueName: \"kubernetes.io/projected/b1c76aa0-5bd2-4df9-8555-83bb44cb23b7-kube-api-access-bntht\") pod \"nmstate-handler-c5pwq\" (UID: \"b1c76aa0-5bd2-4df9-8555-83bb44cb23b7\") " pod="openshift-nmstate/nmstate-handler-c5pwq" Nov 29 07:57:18 crc kubenswrapper[4795]: E1129 07:57:18.079187 4795 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Nov 29 07:57:18 crc kubenswrapper[4795]: E1129 07:57:18.079230 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dcd14fe7-954f-445a-bd8d-0a62399e71d5-tls-key-pair podName:dcd14fe7-954f-445a-bd8d-0a62399e71d5 nodeName:}" failed. No retries permitted until 2025-11-29 07:57:18.579213391 +0000 UTC m=+1084.554789171 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/dcd14fe7-954f-445a-bd8d-0a62399e71d5-tls-key-pair") pod "nmstate-webhook-5f6d4c5ccb-jk8vq" (UID: "dcd14fe7-954f-445a-bd8d-0a62399e71d5") : secret "openshift-nmstate-webhook" not found Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.117554 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgtbz\" (UniqueName: \"kubernetes.io/projected/471b40b9-dbc5-467e-abd1-18e64ea6a111-kube-api-access-pgtbz\") pod \"nmstate-metrics-7f946cbc9-f8cgx\" (UID: \"471b40b9-dbc5-467e-abd1-18e64ea6a111\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-f8cgx" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.118118 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsnth\" (UniqueName: \"kubernetes.io/projected/dcd14fe7-954f-445a-bd8d-0a62399e71d5-kube-api-access-vsnth\") pod \"nmstate-webhook-5f6d4c5ccb-jk8vq\" (UID: \"dcd14fe7-954f-445a-bd8d-0a62399e71d5\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-jk8vq" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.180025 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/127f1845-59b8-4b9f-9702-2aae122b06e3-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-qxwvv\" (UID: \"127f1845-59b8-4b9f-9702-2aae122b06e3\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-qxwvv" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.180070 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89ttk\" (UniqueName: \"kubernetes.io/projected/127f1845-59b8-4b9f-9702-2aae122b06e3-kube-api-access-89ttk\") pod \"nmstate-console-plugin-7fbb5f6569-qxwvv\" (UID: \"127f1845-59b8-4b9f-9702-2aae122b06e3\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-qxwvv" Nov 29 07:57:18 crc kubenswrapper[4795]: 
I1129 07:57:18.180093 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/127f1845-59b8-4b9f-9702-2aae122b06e3-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-qxwvv\" (UID: \"127f1845-59b8-4b9f-9702-2aae122b06e3\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-qxwvv" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.180124 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/b1c76aa0-5bd2-4df9-8555-83bb44cb23b7-nmstate-lock\") pod \"nmstate-handler-c5pwq\" (UID: \"b1c76aa0-5bd2-4df9-8555-83bb44cb23b7\") " pod="openshift-nmstate/nmstate-handler-c5pwq" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.180154 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bntht\" (UniqueName: \"kubernetes.io/projected/b1c76aa0-5bd2-4df9-8555-83bb44cb23b7-kube-api-access-bntht\") pod \"nmstate-handler-c5pwq\" (UID: \"b1c76aa0-5bd2-4df9-8555-83bb44cb23b7\") " pod="openshift-nmstate/nmstate-handler-c5pwq" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.180218 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/b1c76aa0-5bd2-4df9-8555-83bb44cb23b7-dbus-socket\") pod \"nmstate-handler-c5pwq\" (UID: \"b1c76aa0-5bd2-4df9-8555-83bb44cb23b7\") " pod="openshift-nmstate/nmstate-handler-c5pwq" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.180233 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/b1c76aa0-5bd2-4df9-8555-83bb44cb23b7-ovs-socket\") pod \"nmstate-handler-c5pwq\" (UID: \"b1c76aa0-5bd2-4df9-8555-83bb44cb23b7\") " pod="openshift-nmstate/nmstate-handler-c5pwq" Nov 29 07:57:18 crc kubenswrapper[4795]: E1129 07:57:18.180239 4795 secret.go:188] Couldn't get 
secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Nov 29 07:57:18 crc kubenswrapper[4795]: E1129 07:57:18.180310 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/127f1845-59b8-4b9f-9702-2aae122b06e3-plugin-serving-cert podName:127f1845-59b8-4b9f-9702-2aae122b06e3 nodeName:}" failed. No retries permitted until 2025-11-29 07:57:18.680291804 +0000 UTC m=+1084.655867654 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/127f1845-59b8-4b9f-9702-2aae122b06e3-plugin-serving-cert") pod "nmstate-console-plugin-7fbb5f6569-qxwvv" (UID: "127f1845-59b8-4b9f-9702-2aae122b06e3") : secret "plugin-serving-cert" not found Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.180335 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/b1c76aa0-5bd2-4df9-8555-83bb44cb23b7-ovs-socket\") pod \"nmstate-handler-c5pwq\" (UID: \"b1c76aa0-5bd2-4df9-8555-83bb44cb23b7\") " pod="openshift-nmstate/nmstate-handler-c5pwq" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.180510 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/b1c76aa0-5bd2-4df9-8555-83bb44cb23b7-nmstate-lock\") pod \"nmstate-handler-c5pwq\" (UID: \"b1c76aa0-5bd2-4df9-8555-83bb44cb23b7\") " pod="openshift-nmstate/nmstate-handler-c5pwq" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.180934 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/b1c76aa0-5bd2-4df9-8555-83bb44cb23b7-dbus-socket\") pod \"nmstate-handler-c5pwq\" (UID: \"b1c76aa0-5bd2-4df9-8555-83bb44cb23b7\") " pod="openshift-nmstate/nmstate-handler-c5pwq" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.181376 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/127f1845-59b8-4b9f-9702-2aae122b06e3-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-qxwvv\" (UID: \"127f1845-59b8-4b9f-9702-2aae122b06e3\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-qxwvv" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.193500 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f5d67bfff-wl4rm"] Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.194958 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f5d67bfff-wl4rm" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.195824 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-f8cgx" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.208185 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bntht\" (UniqueName: \"kubernetes.io/projected/b1c76aa0-5bd2-4df9-8555-83bb44cb23b7-kube-api-access-bntht\") pod \"nmstate-handler-c5pwq\" (UID: \"b1c76aa0-5bd2-4df9-8555-83bb44cb23b7\") " pod="openshift-nmstate/nmstate-handler-c5pwq" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.212734 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-c5pwq" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.219009 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89ttk\" (UniqueName: \"kubernetes.io/projected/127f1845-59b8-4b9f-9702-2aae122b06e3-kube-api-access-89ttk\") pod \"nmstate-console-plugin-7fbb5f6569-qxwvv\" (UID: \"127f1845-59b8-4b9f-9702-2aae122b06e3\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-qxwvv" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.223846 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f5d67bfff-wl4rm"] Nov 29 07:57:18 crc kubenswrapper[4795]: W1129 07:57:18.261023 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1c76aa0_5bd2_4df9_8555_83bb44cb23b7.slice/crio-45ccbaea0a98fe0914f2d36b703646ec5eacbb8d0396a5844b763ca1dd57a79c WatchSource:0}: Error finding container 45ccbaea0a98fe0914f2d36b703646ec5eacbb8d0396a5844b763ca1dd57a79c: Status 404 returned error can't find the container with id 45ccbaea0a98fe0914f2d36b703646ec5eacbb8d0396a5844b763ca1dd57a79c Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.281072 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8385f150-5088-461e-b9a7-05eb8990b8ca-console-serving-cert\") pod \"console-f5d67bfff-wl4rm\" (UID: \"8385f150-5088-461e-b9a7-05eb8990b8ca\") " pod="openshift-console/console-f5d67bfff-wl4rm" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.281146 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8385f150-5088-461e-b9a7-05eb8990b8ca-trusted-ca-bundle\") pod \"console-f5d67bfff-wl4rm\" (UID: \"8385f150-5088-461e-b9a7-05eb8990b8ca\") " 
pod="openshift-console/console-f5d67bfff-wl4rm" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.281188 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8385f150-5088-461e-b9a7-05eb8990b8ca-service-ca\") pod \"console-f5d67bfff-wl4rm\" (UID: \"8385f150-5088-461e-b9a7-05eb8990b8ca\") " pod="openshift-console/console-f5d67bfff-wl4rm" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.281237 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8385f150-5088-461e-b9a7-05eb8990b8ca-oauth-serving-cert\") pod \"console-f5d67bfff-wl4rm\" (UID: \"8385f150-5088-461e-b9a7-05eb8990b8ca\") " pod="openshift-console/console-f5d67bfff-wl4rm" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.281264 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8385f150-5088-461e-b9a7-05eb8990b8ca-console-oauth-config\") pod \"console-f5d67bfff-wl4rm\" (UID: \"8385f150-5088-461e-b9a7-05eb8990b8ca\") " pod="openshift-console/console-f5d67bfff-wl4rm" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.281282 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqsq4\" (UniqueName: \"kubernetes.io/projected/8385f150-5088-461e-b9a7-05eb8990b8ca-kube-api-access-mqsq4\") pod \"console-f5d67bfff-wl4rm\" (UID: \"8385f150-5088-461e-b9a7-05eb8990b8ca\") " pod="openshift-console/console-f5d67bfff-wl4rm" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.281298 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8385f150-5088-461e-b9a7-05eb8990b8ca-console-config\") pod 
\"console-f5d67bfff-wl4rm\" (UID: \"8385f150-5088-461e-b9a7-05eb8990b8ca\") " pod="openshift-console/console-f5d67bfff-wl4rm" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.382494 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqsq4\" (UniqueName: \"kubernetes.io/projected/8385f150-5088-461e-b9a7-05eb8990b8ca-kube-api-access-mqsq4\") pod \"console-f5d67bfff-wl4rm\" (UID: \"8385f150-5088-461e-b9a7-05eb8990b8ca\") " pod="openshift-console/console-f5d67bfff-wl4rm" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.382535 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8385f150-5088-461e-b9a7-05eb8990b8ca-console-config\") pod \"console-f5d67bfff-wl4rm\" (UID: \"8385f150-5088-461e-b9a7-05eb8990b8ca\") " pod="openshift-console/console-f5d67bfff-wl4rm" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.382575 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8385f150-5088-461e-b9a7-05eb8990b8ca-console-serving-cert\") pod \"console-f5d67bfff-wl4rm\" (UID: \"8385f150-5088-461e-b9a7-05eb8990b8ca\") " pod="openshift-console/console-f5d67bfff-wl4rm" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.382638 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8385f150-5088-461e-b9a7-05eb8990b8ca-trusted-ca-bundle\") pod \"console-f5d67bfff-wl4rm\" (UID: \"8385f150-5088-461e-b9a7-05eb8990b8ca\") " pod="openshift-console/console-f5d67bfff-wl4rm" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.382676 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8385f150-5088-461e-b9a7-05eb8990b8ca-service-ca\") pod \"console-f5d67bfff-wl4rm\" (UID: 
\"8385f150-5088-461e-b9a7-05eb8990b8ca\") " pod="openshift-console/console-f5d67bfff-wl4rm" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.382717 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8385f150-5088-461e-b9a7-05eb8990b8ca-oauth-serving-cert\") pod \"console-f5d67bfff-wl4rm\" (UID: \"8385f150-5088-461e-b9a7-05eb8990b8ca\") " pod="openshift-console/console-f5d67bfff-wl4rm" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.382737 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8385f150-5088-461e-b9a7-05eb8990b8ca-console-oauth-config\") pod \"console-f5d67bfff-wl4rm\" (UID: \"8385f150-5088-461e-b9a7-05eb8990b8ca\") " pod="openshift-console/console-f5d67bfff-wl4rm" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.386088 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8385f150-5088-461e-b9a7-05eb8990b8ca-service-ca\") pod \"console-f5d67bfff-wl4rm\" (UID: \"8385f150-5088-461e-b9a7-05eb8990b8ca\") " pod="openshift-console/console-f5d67bfff-wl4rm" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.386156 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8385f150-5088-461e-b9a7-05eb8990b8ca-oauth-serving-cert\") pod \"console-f5d67bfff-wl4rm\" (UID: \"8385f150-5088-461e-b9a7-05eb8990b8ca\") " pod="openshift-console/console-f5d67bfff-wl4rm" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.386915 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8385f150-5088-461e-b9a7-05eb8990b8ca-console-config\") pod \"console-f5d67bfff-wl4rm\" (UID: \"8385f150-5088-461e-b9a7-05eb8990b8ca\") " 
pod="openshift-console/console-f5d67bfff-wl4rm" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.386965 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8385f150-5088-461e-b9a7-05eb8990b8ca-trusted-ca-bundle\") pod \"console-f5d67bfff-wl4rm\" (UID: \"8385f150-5088-461e-b9a7-05eb8990b8ca\") " pod="openshift-console/console-f5d67bfff-wl4rm" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.389579 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8385f150-5088-461e-b9a7-05eb8990b8ca-console-serving-cert\") pod \"console-f5d67bfff-wl4rm\" (UID: \"8385f150-5088-461e-b9a7-05eb8990b8ca\") " pod="openshift-console/console-f5d67bfff-wl4rm" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.390194 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8385f150-5088-461e-b9a7-05eb8990b8ca-console-oauth-config\") pod \"console-f5d67bfff-wl4rm\" (UID: \"8385f150-5088-461e-b9a7-05eb8990b8ca\") " pod="openshift-console/console-f5d67bfff-wl4rm" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.406460 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqsq4\" (UniqueName: \"kubernetes.io/projected/8385f150-5088-461e-b9a7-05eb8990b8ca-kube-api-access-mqsq4\") pod \"console-f5d67bfff-wl4rm\" (UID: \"8385f150-5088-461e-b9a7-05eb8990b8ca\") " pod="openshift-console/console-f5d67bfff-wl4rm" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.586517 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/dcd14fe7-954f-445a-bd8d-0a62399e71d5-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-jk8vq\" (UID: \"dcd14fe7-954f-445a-bd8d-0a62399e71d5\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-jk8vq" 
Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.591209 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/dcd14fe7-954f-445a-bd8d-0a62399e71d5-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-jk8vq\" (UID: \"dcd14fe7-954f-445a-bd8d-0a62399e71d5\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-jk8vq" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.607044 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f5d67bfff-wl4rm" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.687146 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/127f1845-59b8-4b9f-9702-2aae122b06e3-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-qxwvv\" (UID: \"127f1845-59b8-4b9f-9702-2aae122b06e3\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-qxwvv" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.690797 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/127f1845-59b8-4b9f-9702-2aae122b06e3-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-qxwvv\" (UID: \"127f1845-59b8-4b9f-9702-2aae122b06e3\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-qxwvv" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.748044 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-f8cgx"] Nov 29 07:57:18 crc kubenswrapper[4795]: W1129 07:57:18.754568 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod471b40b9_dbc5_467e_abd1_18e64ea6a111.slice/crio-a2dceaf81a6d44e20bee497e02d0d97bc969c90c3bea6ae2df5859053826e783 WatchSource:0}: Error finding container 
a2dceaf81a6d44e20bee497e02d0d97bc969c90c3bea6ae2df5859053826e783: Status 404 returned error can't find the container with id a2dceaf81a6d44e20bee497e02d0d97bc969c90c3bea6ae2df5859053826e783 Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.783390 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-jk8vq" Nov 29 07:57:18 crc kubenswrapper[4795]: I1129 07:57:18.927583 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-qxwvv" Nov 29 07:57:19 crc kubenswrapper[4795]: I1129 07:57:19.047405 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-jk8vq"] Nov 29 07:57:19 crc kubenswrapper[4795]: W1129 07:57:19.058423 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcd14fe7_954f_445a_bd8d_0a62399e71d5.slice/crio-69e16c4e9f973ab0ec5a3f03995f2ad9796530e9a0c7c7d26827670689c42066 WatchSource:0}: Error finding container 69e16c4e9f973ab0ec5a3f03995f2ad9796530e9a0c7c7d26827670689c42066: Status 404 returned error can't find the container with id 69e16c4e9f973ab0ec5a3f03995f2ad9796530e9a0c7c7d26827670689c42066 Nov 29 07:57:19 crc kubenswrapper[4795]: I1129 07:57:19.073192 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f5d67bfff-wl4rm"] Nov 29 07:57:19 crc kubenswrapper[4795]: W1129 07:57:19.073715 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8385f150_5088_461e_b9a7_05eb8990b8ca.slice/crio-a501e1dc4ab408658b15b94a0dc0be17b862193c5a680407dfe31b95e3f4f500 WatchSource:0}: Error finding container a501e1dc4ab408658b15b94a0dc0be17b862193c5a680407dfe31b95e3f4f500: Status 404 returned error can't find the container with id 
a501e1dc4ab408658b15b94a0dc0be17b862193c5a680407dfe31b95e3f4f500 Nov 29 07:57:19 crc kubenswrapper[4795]: I1129 07:57:19.112987 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-jk8vq" event={"ID":"dcd14fe7-954f-445a-bd8d-0a62399e71d5","Type":"ContainerStarted","Data":"69e16c4e9f973ab0ec5a3f03995f2ad9796530e9a0c7c7d26827670689c42066"} Nov 29 07:57:19 crc kubenswrapper[4795]: I1129 07:57:19.114824 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f5d67bfff-wl4rm" event={"ID":"8385f150-5088-461e-b9a7-05eb8990b8ca","Type":"ContainerStarted","Data":"a501e1dc4ab408658b15b94a0dc0be17b862193c5a680407dfe31b95e3f4f500"} Nov 29 07:57:19 crc kubenswrapper[4795]: I1129 07:57:19.129499 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-f8cgx" event={"ID":"471b40b9-dbc5-467e-abd1-18e64ea6a111","Type":"ContainerStarted","Data":"a2dceaf81a6d44e20bee497e02d0d97bc969c90c3bea6ae2df5859053826e783"} Nov 29 07:57:19 crc kubenswrapper[4795]: I1129 07:57:19.131268 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-c5pwq" event={"ID":"b1c76aa0-5bd2-4df9-8555-83bb44cb23b7","Type":"ContainerStarted","Data":"45ccbaea0a98fe0914f2d36b703646ec5eacbb8d0396a5844b763ca1dd57a79c"} Nov 29 07:57:19 crc kubenswrapper[4795]: I1129 07:57:19.365092 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-qxwvv"] Nov 29 07:57:19 crc kubenswrapper[4795]: W1129 07:57:19.374694 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod127f1845_59b8_4b9f_9702_2aae122b06e3.slice/crio-1a5ec4f55ccd168ad339fa5942339fb65492928d3524222bed3b99f03f35dffe WatchSource:0}: Error finding container 1a5ec4f55ccd168ad339fa5942339fb65492928d3524222bed3b99f03f35dffe: Status 404 returned error can't find the 
container with id 1a5ec4f55ccd168ad339fa5942339fb65492928d3524222bed3b99f03f35dffe Nov 29 07:57:20 crc kubenswrapper[4795]: I1129 07:57:20.138956 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-qxwvv" event={"ID":"127f1845-59b8-4b9f-9702-2aae122b06e3","Type":"ContainerStarted","Data":"1a5ec4f55ccd168ad339fa5942339fb65492928d3524222bed3b99f03f35dffe"} Nov 29 07:57:20 crc kubenswrapper[4795]: I1129 07:57:20.140828 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f5d67bfff-wl4rm" event={"ID":"8385f150-5088-461e-b9a7-05eb8990b8ca","Type":"ContainerStarted","Data":"ffcaa5f34d5d0231400dd2ee0d11e99540dddc52064abbbcfb8afbd420993e01"} Nov 29 07:57:20 crc kubenswrapper[4795]: I1129 07:57:20.165614 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f5d67bfff-wl4rm" podStartSLOduration=2.165573115 podStartE2EDuration="2.165573115s" podCreationTimestamp="2025-11-29 07:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:57:20.158499524 +0000 UTC m=+1086.134075314" watchObservedRunningTime="2025-11-29 07:57:20.165573115 +0000 UTC m=+1086.141148895" Nov 29 07:57:23 crc kubenswrapper[4795]: I1129 07:57:23.164628 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-c5pwq" event={"ID":"b1c76aa0-5bd2-4df9-8555-83bb44cb23b7","Type":"ContainerStarted","Data":"261d385534343bf0ab9ed4204974b48cd2c4b931151c920db0dafd264b61de2b"} Nov 29 07:57:23 crc kubenswrapper[4795]: I1129 07:57:23.165291 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-c5pwq" Nov 29 07:57:23 crc kubenswrapper[4795]: I1129 07:57:23.171776 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-qxwvv" 
event={"ID":"127f1845-59b8-4b9f-9702-2aae122b06e3","Type":"ContainerStarted","Data":"35aec08359c905e8ba3fd1e21826a85c0bf4ac01d72b6ec3c541ae8055f447b8"} Nov 29 07:57:23 crc kubenswrapper[4795]: I1129 07:57:23.208935 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-jk8vq" event={"ID":"dcd14fe7-954f-445a-bd8d-0a62399e71d5","Type":"ContainerStarted","Data":"6ab2090f75e19ca845b003086560127b0b8079361cccd492bb76e4be09e50367"} Nov 29 07:57:23 crc kubenswrapper[4795]: I1129 07:57:23.213289 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-f8cgx" event={"ID":"471b40b9-dbc5-467e-abd1-18e64ea6a111","Type":"ContainerStarted","Data":"a1bbe4b2d781a72e04f87974e1c2306c6a6011c1c3f52c33adf7188925545eb2"} Nov 29 07:57:23 crc kubenswrapper[4795]: I1129 07:57:23.220929 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-c5pwq" podStartSLOduration=2.2165891540000002 podStartE2EDuration="6.220905949s" podCreationTimestamp="2025-11-29 07:57:17 +0000 UTC" firstStartedPulling="2025-11-29 07:57:18.263388373 +0000 UTC m=+1084.238964163" lastFinishedPulling="2025-11-29 07:57:22.267705168 +0000 UTC m=+1088.243280958" observedRunningTime="2025-11-29 07:57:23.203307598 +0000 UTC m=+1089.178883388" watchObservedRunningTime="2025-11-29 07:57:23.220905949 +0000 UTC m=+1089.196481739" Nov 29 07:57:23 crc kubenswrapper[4795]: I1129 07:57:23.257720 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-qxwvv" podStartSLOduration=3.366559727 podStartE2EDuration="6.257704389s" podCreationTimestamp="2025-11-29 07:57:17 +0000 UTC" firstStartedPulling="2025-11-29 07:57:19.376546126 +0000 UTC m=+1085.352121916" lastFinishedPulling="2025-11-29 07:57:22.267690788 +0000 UTC m=+1088.243266578" observedRunningTime="2025-11-29 07:57:23.252042857 +0000 UTC m=+1089.227618657" 
watchObservedRunningTime="2025-11-29 07:57:23.257704389 +0000 UTC m=+1089.233280179" Nov 29 07:57:23 crc kubenswrapper[4795]: I1129 07:57:23.292461 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-jk8vq" podStartSLOduration=3.074815428 podStartE2EDuration="6.292439289s" podCreationTimestamp="2025-11-29 07:57:17 +0000 UTC" firstStartedPulling="2025-11-29 07:57:19.063277143 +0000 UTC m=+1085.038852933" lastFinishedPulling="2025-11-29 07:57:22.280901004 +0000 UTC m=+1088.256476794" observedRunningTime="2025-11-29 07:57:23.285879362 +0000 UTC m=+1089.261455152" watchObservedRunningTime="2025-11-29 07:57:23.292439289 +0000 UTC m=+1089.268015079" Nov 29 07:57:24 crc kubenswrapper[4795]: I1129 07:57:24.224211 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-jk8vq" Nov 29 07:57:26 crc kubenswrapper[4795]: I1129 07:57:26.241015 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-f8cgx" event={"ID":"471b40b9-dbc5-467e-abd1-18e64ea6a111","Type":"ContainerStarted","Data":"c79d1c9e077698681a12e118ec6426f0961dcfd8aa7e51addcdc8d2cd65dd19c"} Nov 29 07:57:26 crc kubenswrapper[4795]: I1129 07:57:26.257633 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-f8cgx" podStartSLOduration=2.793080454 podStartE2EDuration="9.257618902s" podCreationTimestamp="2025-11-29 07:57:17 +0000 UTC" firstStartedPulling="2025-11-29 07:57:18.759864881 +0000 UTC m=+1084.735440671" lastFinishedPulling="2025-11-29 07:57:25.224403329 +0000 UTC m=+1091.199979119" observedRunningTime="2025-11-29 07:57:26.254673458 +0000 UTC m=+1092.230249268" watchObservedRunningTime="2025-11-29 07:57:26.257618902 +0000 UTC m=+1092.233194692" Nov 29 07:57:28 crc kubenswrapper[4795]: I1129 07:57:28.238026 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-nmstate/nmstate-handler-c5pwq" Nov 29 07:57:28 crc kubenswrapper[4795]: I1129 07:57:28.607746 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f5d67bfff-wl4rm" Nov 29 07:57:28 crc kubenswrapper[4795]: I1129 07:57:28.608121 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f5d67bfff-wl4rm" Nov 29 07:57:28 crc kubenswrapper[4795]: I1129 07:57:28.714101 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f5d67bfff-wl4rm" Nov 29 07:57:29 crc kubenswrapper[4795]: I1129 07:57:29.266298 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f5d67bfff-wl4rm" Nov 29 07:57:29 crc kubenswrapper[4795]: I1129 07:57:29.316945 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6bc6b4bc97-zksdr"] Nov 29 07:57:38 crc kubenswrapper[4795]: I1129 07:57:38.789972 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-jk8vq" Nov 29 07:57:54 crc kubenswrapper[4795]: I1129 07:57:54.364257 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6bc6b4bc97-zksdr" podUID="ebfadff4-591c-4aec-af45-46b370c3a74d" containerName="console" containerID="cri-o://36678dd1ca76be15bac24c9741e6a175ea8b8bee999bfe79cab64a1435601e38" gracePeriod=15 Nov 29 07:57:54 crc kubenswrapper[4795]: I1129 07:57:54.754781 4795 patch_prober.go:28] interesting pod/console-6bc6b4bc97-zksdr container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.80:8443/health\": dial tcp 10.217.0.80:8443: connect: connection refused" start-of-body= Nov 29 07:57:54 crc kubenswrapper[4795]: I1129 07:57:54.755144 4795 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/console-6bc6b4bc97-zksdr" podUID="ebfadff4-591c-4aec-af45-46b370c3a74d" containerName="console" probeResult="failure" output="Get \"https://10.217.0.80:8443/health\": dial tcp 10.217.0.80:8443: connect: connection refused" Nov 29 07:57:55 crc kubenswrapper[4795]: I1129 07:57:55.307884 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6bc6b4bc97-zksdr_ebfadff4-591c-4aec-af45-46b370c3a74d/console/0.log" Nov 29 07:57:55 crc kubenswrapper[4795]: I1129 07:57:55.307994 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6bc6b4bc97-zksdr" Nov 29 07:57:55 crc kubenswrapper[4795]: I1129 07:57:55.447761 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6bc6b4bc97-zksdr_ebfadff4-591c-4aec-af45-46b370c3a74d/console/0.log" Nov 29 07:57:55 crc kubenswrapper[4795]: I1129 07:57:55.447802 4795 generic.go:334] "Generic (PLEG): container finished" podID="ebfadff4-591c-4aec-af45-46b370c3a74d" containerID="36678dd1ca76be15bac24c9741e6a175ea8b8bee999bfe79cab64a1435601e38" exitCode=2 Nov 29 07:57:55 crc kubenswrapper[4795]: I1129 07:57:55.447834 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6bc6b4bc97-zksdr" event={"ID":"ebfadff4-591c-4aec-af45-46b370c3a74d","Type":"ContainerDied","Data":"36678dd1ca76be15bac24c9741e6a175ea8b8bee999bfe79cab64a1435601e38"} Nov 29 07:57:55 crc kubenswrapper[4795]: I1129 07:57:55.447865 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6bc6b4bc97-zksdr" event={"ID":"ebfadff4-591c-4aec-af45-46b370c3a74d","Type":"ContainerDied","Data":"86b79c2bca8de22f2a6b44b5e6759cbc59ad24819fba06f5b30b7a870248085d"} Nov 29 07:57:55 crc kubenswrapper[4795]: I1129 07:57:55.447884 4795 scope.go:117] "RemoveContainer" containerID="36678dd1ca76be15bac24c9741e6a175ea8b8bee999bfe79cab64a1435601e38" Nov 29 07:57:55 crc kubenswrapper[4795]: I1129 
07:57:55.448006 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6bc6b4bc97-zksdr" Nov 29 07:57:55 crc kubenswrapper[4795]: I1129 07:57:55.476093 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zw2bm\" (UniqueName: \"kubernetes.io/projected/ebfadff4-591c-4aec-af45-46b370c3a74d-kube-api-access-zw2bm\") pod \"ebfadff4-591c-4aec-af45-46b370c3a74d\" (UID: \"ebfadff4-591c-4aec-af45-46b370c3a74d\") " Nov 29 07:57:55 crc kubenswrapper[4795]: I1129 07:57:55.476228 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ebfadff4-591c-4aec-af45-46b370c3a74d-oauth-serving-cert\") pod \"ebfadff4-591c-4aec-af45-46b370c3a74d\" (UID: \"ebfadff4-591c-4aec-af45-46b370c3a74d\") " Nov 29 07:57:55 crc kubenswrapper[4795]: I1129 07:57:55.476275 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ebfadff4-591c-4aec-af45-46b370c3a74d-console-oauth-config\") pod \"ebfadff4-591c-4aec-af45-46b370c3a74d\" (UID: \"ebfadff4-591c-4aec-af45-46b370c3a74d\") " Nov 29 07:57:55 crc kubenswrapper[4795]: I1129 07:57:55.476332 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ebfadff4-591c-4aec-af45-46b370c3a74d-service-ca\") pod \"ebfadff4-591c-4aec-af45-46b370c3a74d\" (UID: \"ebfadff4-591c-4aec-af45-46b370c3a74d\") " Nov 29 07:57:55 crc kubenswrapper[4795]: I1129 07:57:55.476348 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ebfadff4-591c-4aec-af45-46b370c3a74d-console-config\") pod \"ebfadff4-591c-4aec-af45-46b370c3a74d\" (UID: \"ebfadff4-591c-4aec-af45-46b370c3a74d\") " Nov 29 07:57:55 crc kubenswrapper[4795]: I1129 
07:57:55.476566 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ebfadff4-591c-4aec-af45-46b370c3a74d-console-serving-cert\") pod \"ebfadff4-591c-4aec-af45-46b370c3a74d\" (UID: \"ebfadff4-591c-4aec-af45-46b370c3a74d\") " Nov 29 07:57:55 crc kubenswrapper[4795]: I1129 07:57:55.476584 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebfadff4-591c-4aec-af45-46b370c3a74d-trusted-ca-bundle\") pod \"ebfadff4-591c-4aec-af45-46b370c3a74d\" (UID: \"ebfadff4-591c-4aec-af45-46b370c3a74d\") " Nov 29 07:57:55 crc kubenswrapper[4795]: I1129 07:57:55.478683 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebfadff4-591c-4aec-af45-46b370c3a74d-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ebfadff4-591c-4aec-af45-46b370c3a74d" (UID: "ebfadff4-591c-4aec-af45-46b370c3a74d"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:57:55 crc kubenswrapper[4795]: I1129 07:57:55.479124 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebfadff4-591c-4aec-af45-46b370c3a74d-service-ca" (OuterVolumeSpecName: "service-ca") pod "ebfadff4-591c-4aec-af45-46b370c3a74d" (UID: "ebfadff4-591c-4aec-af45-46b370c3a74d"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:57:55 crc kubenswrapper[4795]: I1129 07:57:55.479447 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebfadff4-591c-4aec-af45-46b370c3a74d-console-config" (OuterVolumeSpecName: "console-config") pod "ebfadff4-591c-4aec-af45-46b370c3a74d" (UID: "ebfadff4-591c-4aec-af45-46b370c3a74d"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:57:55 crc kubenswrapper[4795]: I1129 07:57:55.480510 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebfadff4-591c-4aec-af45-46b370c3a74d-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ebfadff4-591c-4aec-af45-46b370c3a74d" (UID: "ebfadff4-591c-4aec-af45-46b370c3a74d"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:57:55 crc kubenswrapper[4795]: I1129 07:57:55.494131 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebfadff4-591c-4aec-af45-46b370c3a74d-kube-api-access-zw2bm" (OuterVolumeSpecName: "kube-api-access-zw2bm") pod "ebfadff4-591c-4aec-af45-46b370c3a74d" (UID: "ebfadff4-591c-4aec-af45-46b370c3a74d"). InnerVolumeSpecName "kube-api-access-zw2bm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:57:55 crc kubenswrapper[4795]: I1129 07:57:55.501839 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebfadff4-591c-4aec-af45-46b370c3a74d-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ebfadff4-591c-4aec-af45-46b370c3a74d" (UID: "ebfadff4-591c-4aec-af45-46b370c3a74d"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:57:55 crc kubenswrapper[4795]: I1129 07:57:55.501966 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebfadff4-591c-4aec-af45-46b370c3a74d-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ebfadff4-591c-4aec-af45-46b370c3a74d" (UID: "ebfadff4-591c-4aec-af45-46b370c3a74d"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:57:55 crc kubenswrapper[4795]: I1129 07:57:55.532789 4795 scope.go:117] "RemoveContainer" containerID="36678dd1ca76be15bac24c9741e6a175ea8b8bee999bfe79cab64a1435601e38" Nov 29 07:57:55 crc kubenswrapper[4795]: E1129 07:57:55.533228 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36678dd1ca76be15bac24c9741e6a175ea8b8bee999bfe79cab64a1435601e38\": container with ID starting with 36678dd1ca76be15bac24c9741e6a175ea8b8bee999bfe79cab64a1435601e38 not found: ID does not exist" containerID="36678dd1ca76be15bac24c9741e6a175ea8b8bee999bfe79cab64a1435601e38" Nov 29 07:57:55 crc kubenswrapper[4795]: I1129 07:57:55.533263 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36678dd1ca76be15bac24c9741e6a175ea8b8bee999bfe79cab64a1435601e38"} err="failed to get container status \"36678dd1ca76be15bac24c9741e6a175ea8b8bee999bfe79cab64a1435601e38\": rpc error: code = NotFound desc = could not find container \"36678dd1ca76be15bac24c9741e6a175ea8b8bee999bfe79cab64a1435601e38\": container with ID starting with 36678dd1ca76be15bac24c9741e6a175ea8b8bee999bfe79cab64a1435601e38 not found: ID does not exist" Nov 29 07:57:55 crc kubenswrapper[4795]: I1129 07:57:55.577937 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zw2bm\" (UniqueName: \"kubernetes.io/projected/ebfadff4-591c-4aec-af45-46b370c3a74d-kube-api-access-zw2bm\") on node \"crc\" DevicePath \"\"" Nov 29 07:57:55 crc kubenswrapper[4795]: I1129 07:57:55.578258 4795 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ebfadff4-591c-4aec-af45-46b370c3a74d-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:57:55 crc kubenswrapper[4795]: I1129 07:57:55.578322 4795 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" 
(UniqueName: \"kubernetes.io/secret/ebfadff4-591c-4aec-af45-46b370c3a74d-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:57:55 crc kubenswrapper[4795]: I1129 07:57:55.578385 4795 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ebfadff4-591c-4aec-af45-46b370c3a74d-console-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:57:55 crc kubenswrapper[4795]: I1129 07:57:55.578439 4795 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ebfadff4-591c-4aec-af45-46b370c3a74d-service-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:57:55 crc kubenswrapper[4795]: I1129 07:57:55.578498 4795 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ebfadff4-591c-4aec-af45-46b370c3a74d-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:57:55 crc kubenswrapper[4795]: I1129 07:57:55.578556 4795 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebfadff4-591c-4aec-af45-46b370c3a74d-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:57:55 crc kubenswrapper[4795]: I1129 07:57:55.779438 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6bc6b4bc97-zksdr"] Nov 29 07:57:55 crc kubenswrapper[4795]: I1129 07:57:55.784955 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6bc6b4bc97-zksdr"] Nov 29 07:57:56 crc kubenswrapper[4795]: I1129 07:57:56.288000 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebfadff4-591c-4aec-af45-46b370c3a74d" path="/var/lib/kubelet/pods/ebfadff4-591c-4aec-af45-46b370c3a74d/volumes" Nov 29 07:57:57 crc kubenswrapper[4795]: I1129 07:57:57.235120 4795 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892"] Nov 29 07:57:57 crc kubenswrapper[4795]: E1129 07:57:57.235422 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebfadff4-591c-4aec-af45-46b370c3a74d" containerName="console" Nov 29 07:57:57 crc kubenswrapper[4795]: I1129 07:57:57.235432 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebfadff4-591c-4aec-af45-46b370c3a74d" containerName="console" Nov 29 07:57:57 crc kubenswrapper[4795]: I1129 07:57:57.235560 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebfadff4-591c-4aec-af45-46b370c3a74d" containerName="console" Nov 29 07:57:57 crc kubenswrapper[4795]: I1129 07:57:57.236549 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892" Nov 29 07:57:57 crc kubenswrapper[4795]: I1129 07:57:57.238935 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 29 07:57:57 crc kubenswrapper[4795]: I1129 07:57:57.302334 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892"] Nov 29 07:57:57 crc kubenswrapper[4795]: I1129 07:57:57.408197 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1be51a9d-eb05-4b33-85aa-2134496eb1b6-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892\" (UID: \"1be51a9d-eb05-4b33-85aa-2134496eb1b6\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892" Nov 29 07:57:57 crc kubenswrapper[4795]: I1129 07:57:57.408318 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m72dt\" (UniqueName: 
\"kubernetes.io/projected/1be51a9d-eb05-4b33-85aa-2134496eb1b6-kube-api-access-m72dt\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892\" (UID: \"1be51a9d-eb05-4b33-85aa-2134496eb1b6\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892" Nov 29 07:57:57 crc kubenswrapper[4795]: I1129 07:57:57.408352 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1be51a9d-eb05-4b33-85aa-2134496eb1b6-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892\" (UID: \"1be51a9d-eb05-4b33-85aa-2134496eb1b6\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892" Nov 29 07:57:57 crc kubenswrapper[4795]: I1129 07:57:57.509811 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1be51a9d-eb05-4b33-85aa-2134496eb1b6-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892\" (UID: \"1be51a9d-eb05-4b33-85aa-2134496eb1b6\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892" Nov 29 07:57:57 crc kubenswrapper[4795]: I1129 07:57:57.509937 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m72dt\" (UniqueName: \"kubernetes.io/projected/1be51a9d-eb05-4b33-85aa-2134496eb1b6-kube-api-access-m72dt\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892\" (UID: \"1be51a9d-eb05-4b33-85aa-2134496eb1b6\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892" Nov 29 07:57:57 crc kubenswrapper[4795]: I1129 07:57:57.509984 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1be51a9d-eb05-4b33-85aa-2134496eb1b6-util\") pod 
\"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892\" (UID: \"1be51a9d-eb05-4b33-85aa-2134496eb1b6\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892" Nov 29 07:57:57 crc kubenswrapper[4795]: I1129 07:57:57.510329 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1be51a9d-eb05-4b33-85aa-2134496eb1b6-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892\" (UID: \"1be51a9d-eb05-4b33-85aa-2134496eb1b6\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892" Nov 29 07:57:57 crc kubenswrapper[4795]: I1129 07:57:57.510401 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1be51a9d-eb05-4b33-85aa-2134496eb1b6-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892\" (UID: \"1be51a9d-eb05-4b33-85aa-2134496eb1b6\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892" Nov 29 07:57:57 crc kubenswrapper[4795]: I1129 07:57:57.529448 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m72dt\" (UniqueName: \"kubernetes.io/projected/1be51a9d-eb05-4b33-85aa-2134496eb1b6-kube-api-access-m72dt\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892\" (UID: \"1be51a9d-eb05-4b33-85aa-2134496eb1b6\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892" Nov 29 07:57:57 crc kubenswrapper[4795]: I1129 07:57:57.553883 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892" Nov 29 07:57:57 crc kubenswrapper[4795]: I1129 07:57:57.985946 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892"] Nov 29 07:57:58 crc kubenswrapper[4795]: I1129 07:57:58.470042 4795 generic.go:334] "Generic (PLEG): container finished" podID="1be51a9d-eb05-4b33-85aa-2134496eb1b6" containerID="dccc85d19fa3a697b1ec767597c7ff63bec8c7af8524252c4636ac226aab207e" exitCode=0 Nov 29 07:57:58 crc kubenswrapper[4795]: I1129 07:57:58.470086 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892" event={"ID":"1be51a9d-eb05-4b33-85aa-2134496eb1b6","Type":"ContainerDied","Data":"dccc85d19fa3a697b1ec767597c7ff63bec8c7af8524252c4636ac226aab207e"} Nov 29 07:57:58 crc kubenswrapper[4795]: I1129 07:57:58.470113 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892" event={"ID":"1be51a9d-eb05-4b33-85aa-2134496eb1b6","Type":"ContainerStarted","Data":"bf1810347bc0f8926b90bc4ce615d6cd0987785c980bfaeefaac5c112f453c92"} Nov 29 07:58:01 crc kubenswrapper[4795]: I1129 07:58:01.495069 4795 generic.go:334] "Generic (PLEG): container finished" podID="1be51a9d-eb05-4b33-85aa-2134496eb1b6" containerID="f1d243a35d60ef6bca294f7f4ae780f6894865f214e5ab1a042fbf4c10b9b384" exitCode=0 Nov 29 07:58:01 crc kubenswrapper[4795]: I1129 07:58:01.495162 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892" event={"ID":"1be51a9d-eb05-4b33-85aa-2134496eb1b6","Type":"ContainerDied","Data":"f1d243a35d60ef6bca294f7f4ae780f6894865f214e5ab1a042fbf4c10b9b384"} Nov 29 07:58:02 crc kubenswrapper[4795]: I1129 07:58:02.505913 4795 
generic.go:334] "Generic (PLEG): container finished" podID="1be51a9d-eb05-4b33-85aa-2134496eb1b6" containerID="587af04edb7994531a495b5131c22560d16f0be1dd40918f190cf3484d84ba5c" exitCode=0 Nov 29 07:58:02 crc kubenswrapper[4795]: I1129 07:58:02.506791 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892" event={"ID":"1be51a9d-eb05-4b33-85aa-2134496eb1b6","Type":"ContainerDied","Data":"587af04edb7994531a495b5131c22560d16f0be1dd40918f190cf3484d84ba5c"} Nov 29 07:58:03 crc kubenswrapper[4795]: I1129 07:58:03.793252 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892" Nov 29 07:58:03 crc kubenswrapper[4795]: I1129 07:58:03.820156 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m72dt\" (UniqueName: \"kubernetes.io/projected/1be51a9d-eb05-4b33-85aa-2134496eb1b6-kube-api-access-m72dt\") pod \"1be51a9d-eb05-4b33-85aa-2134496eb1b6\" (UID: \"1be51a9d-eb05-4b33-85aa-2134496eb1b6\") " Nov 29 07:58:03 crc kubenswrapper[4795]: I1129 07:58:03.820224 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1be51a9d-eb05-4b33-85aa-2134496eb1b6-util\") pod \"1be51a9d-eb05-4b33-85aa-2134496eb1b6\" (UID: \"1be51a9d-eb05-4b33-85aa-2134496eb1b6\") " Nov 29 07:58:03 crc kubenswrapper[4795]: I1129 07:58:03.820266 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1be51a9d-eb05-4b33-85aa-2134496eb1b6-bundle\") pod \"1be51a9d-eb05-4b33-85aa-2134496eb1b6\" (UID: \"1be51a9d-eb05-4b33-85aa-2134496eb1b6\") " Nov 29 07:58:03 crc kubenswrapper[4795]: I1129 07:58:03.821835 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/1be51a9d-eb05-4b33-85aa-2134496eb1b6-bundle" (OuterVolumeSpecName: "bundle") pod "1be51a9d-eb05-4b33-85aa-2134496eb1b6" (UID: "1be51a9d-eb05-4b33-85aa-2134496eb1b6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:58:03 crc kubenswrapper[4795]: I1129 07:58:03.825882 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1be51a9d-eb05-4b33-85aa-2134496eb1b6-kube-api-access-m72dt" (OuterVolumeSpecName: "kube-api-access-m72dt") pod "1be51a9d-eb05-4b33-85aa-2134496eb1b6" (UID: "1be51a9d-eb05-4b33-85aa-2134496eb1b6"). InnerVolumeSpecName "kube-api-access-m72dt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:58:03 crc kubenswrapper[4795]: I1129 07:58:03.830199 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1be51a9d-eb05-4b33-85aa-2134496eb1b6-util" (OuterVolumeSpecName: "util") pod "1be51a9d-eb05-4b33-85aa-2134496eb1b6" (UID: "1be51a9d-eb05-4b33-85aa-2134496eb1b6"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:58:03 crc kubenswrapper[4795]: I1129 07:58:03.922421 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m72dt\" (UniqueName: \"kubernetes.io/projected/1be51a9d-eb05-4b33-85aa-2134496eb1b6-kube-api-access-m72dt\") on node \"crc\" DevicePath \"\"" Nov 29 07:58:03 crc kubenswrapper[4795]: I1129 07:58:03.922458 4795 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1be51a9d-eb05-4b33-85aa-2134496eb1b6-util\") on node \"crc\" DevicePath \"\"" Nov 29 07:58:03 crc kubenswrapper[4795]: I1129 07:58:03.922468 4795 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1be51a9d-eb05-4b33-85aa-2134496eb1b6-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:58:04 crc kubenswrapper[4795]: I1129 07:58:04.559382 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892" event={"ID":"1be51a9d-eb05-4b33-85aa-2134496eb1b6","Type":"ContainerDied","Data":"bf1810347bc0f8926b90bc4ce615d6cd0987785c980bfaeefaac5c112f453c92"} Nov 29 07:58:04 crc kubenswrapper[4795]: I1129 07:58:04.560006 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf1810347bc0f8926b90bc4ce615d6cd0987785c980bfaeefaac5c112f453c92" Nov 29 07:58:04 crc kubenswrapper[4795]: I1129 07:58:04.559443 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892" Nov 29 07:58:15 crc kubenswrapper[4795]: I1129 07:58:15.889297 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-77967fb544-pfl5d"] Nov 29 07:58:15 crc kubenswrapper[4795]: E1129 07:58:15.890777 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1be51a9d-eb05-4b33-85aa-2134496eb1b6" containerName="util" Nov 29 07:58:15 crc kubenswrapper[4795]: I1129 07:58:15.890808 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="1be51a9d-eb05-4b33-85aa-2134496eb1b6" containerName="util" Nov 29 07:58:15 crc kubenswrapper[4795]: E1129 07:58:15.890828 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1be51a9d-eb05-4b33-85aa-2134496eb1b6" containerName="pull" Nov 29 07:58:15 crc kubenswrapper[4795]: I1129 07:58:15.890835 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="1be51a9d-eb05-4b33-85aa-2134496eb1b6" containerName="pull" Nov 29 07:58:15 crc kubenswrapper[4795]: E1129 07:58:15.890850 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1be51a9d-eb05-4b33-85aa-2134496eb1b6" containerName="extract" Nov 29 07:58:15 crc kubenswrapper[4795]: I1129 07:58:15.890856 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="1be51a9d-eb05-4b33-85aa-2134496eb1b6" containerName="extract" Nov 29 07:58:15 crc kubenswrapper[4795]: I1129 07:58:15.891172 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="1be51a9d-eb05-4b33-85aa-2134496eb1b6" containerName="extract" Nov 29 07:58:15 crc kubenswrapper[4795]: I1129 07:58:15.892006 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-77967fb544-pfl5d" Nov 29 07:58:15 crc kubenswrapper[4795]: I1129 07:58:15.900193 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 29 07:58:15 crc kubenswrapper[4795]: I1129 07:58:15.900476 4795 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 29 07:58:15 crc kubenswrapper[4795]: I1129 07:58:15.900795 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 29 07:58:15 crc kubenswrapper[4795]: I1129 07:58:15.900982 4795 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 29 07:58:15 crc kubenswrapper[4795]: I1129 07:58:15.901117 4795 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-27l7j" Nov 29 07:58:15 crc kubenswrapper[4795]: I1129 07:58:15.904215 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcq6m\" (UniqueName: \"kubernetes.io/projected/88516493-98c7-4365-9293-73456d8d0913-kube-api-access-fcq6m\") pod \"metallb-operator-controller-manager-77967fb544-pfl5d\" (UID: \"88516493-98c7-4365-9293-73456d8d0913\") " pod="metallb-system/metallb-operator-controller-manager-77967fb544-pfl5d" Nov 29 07:58:15 crc kubenswrapper[4795]: I1129 07:58:15.904272 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/88516493-98c7-4365-9293-73456d8d0913-webhook-cert\") pod \"metallb-operator-controller-manager-77967fb544-pfl5d\" (UID: \"88516493-98c7-4365-9293-73456d8d0913\") " pod="metallb-system/metallb-operator-controller-manager-77967fb544-pfl5d" Nov 29 07:58:15 crc kubenswrapper[4795]: I1129 
07:58:15.904311 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/88516493-98c7-4365-9293-73456d8d0913-apiservice-cert\") pod \"metallb-operator-controller-manager-77967fb544-pfl5d\" (UID: \"88516493-98c7-4365-9293-73456d8d0913\") " pod="metallb-system/metallb-operator-controller-manager-77967fb544-pfl5d" Nov 29 07:58:15 crc kubenswrapper[4795]: I1129 07:58:15.950833 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-77967fb544-pfl5d"] Nov 29 07:58:16 crc kubenswrapper[4795]: I1129 07:58:16.004991 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/88516493-98c7-4365-9293-73456d8d0913-apiservice-cert\") pod \"metallb-operator-controller-manager-77967fb544-pfl5d\" (UID: \"88516493-98c7-4365-9293-73456d8d0913\") " pod="metallb-system/metallb-operator-controller-manager-77967fb544-pfl5d" Nov 29 07:58:16 crc kubenswrapper[4795]: I1129 07:58:16.005334 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcq6m\" (UniqueName: \"kubernetes.io/projected/88516493-98c7-4365-9293-73456d8d0913-kube-api-access-fcq6m\") pod \"metallb-operator-controller-manager-77967fb544-pfl5d\" (UID: \"88516493-98c7-4365-9293-73456d8d0913\") " pod="metallb-system/metallb-operator-controller-manager-77967fb544-pfl5d" Nov 29 07:58:16 crc kubenswrapper[4795]: I1129 07:58:16.005480 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/88516493-98c7-4365-9293-73456d8d0913-webhook-cert\") pod \"metallb-operator-controller-manager-77967fb544-pfl5d\" (UID: \"88516493-98c7-4365-9293-73456d8d0913\") " pod="metallb-system/metallb-operator-controller-manager-77967fb544-pfl5d" Nov 29 07:58:16 crc kubenswrapper[4795]: I1129 
07:58:16.011417 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/88516493-98c7-4365-9293-73456d8d0913-webhook-cert\") pod \"metallb-operator-controller-manager-77967fb544-pfl5d\" (UID: \"88516493-98c7-4365-9293-73456d8d0913\") " pod="metallb-system/metallb-operator-controller-manager-77967fb544-pfl5d" Nov 29 07:58:16 crc kubenswrapper[4795]: I1129 07:58:16.030788 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcq6m\" (UniqueName: \"kubernetes.io/projected/88516493-98c7-4365-9293-73456d8d0913-kube-api-access-fcq6m\") pod \"metallb-operator-controller-manager-77967fb544-pfl5d\" (UID: \"88516493-98c7-4365-9293-73456d8d0913\") " pod="metallb-system/metallb-operator-controller-manager-77967fb544-pfl5d" Nov 29 07:58:16 crc kubenswrapper[4795]: I1129 07:58:16.033668 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/88516493-98c7-4365-9293-73456d8d0913-apiservice-cert\") pod \"metallb-operator-controller-manager-77967fb544-pfl5d\" (UID: \"88516493-98c7-4365-9293-73456d8d0913\") " pod="metallb-system/metallb-operator-controller-manager-77967fb544-pfl5d" Nov 29 07:58:16 crc kubenswrapper[4795]: I1129 07:58:16.145269 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-85bfb995d5-2snm7"] Nov 29 07:58:16 crc kubenswrapper[4795]: I1129 07:58:16.146480 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-85bfb995d5-2snm7" Nov 29 07:58:16 crc kubenswrapper[4795]: I1129 07:58:16.149541 4795 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 29 07:58:16 crc kubenswrapper[4795]: I1129 07:58:16.150199 4795 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 29 07:58:16 crc kubenswrapper[4795]: I1129 07:58:16.150266 4795 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-s2sqs" Nov 29 07:58:16 crc kubenswrapper[4795]: I1129 07:58:16.159608 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-85bfb995d5-2snm7"] Nov 29 07:58:16 crc kubenswrapper[4795]: I1129 07:58:16.231722 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-77967fb544-pfl5d" Nov 29 07:58:16 crc kubenswrapper[4795]: I1129 07:58:16.309344 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/00a1cd25-da5a-4ff5-b0d0-632a4ccdc0a3-webhook-cert\") pod \"metallb-operator-webhook-server-85bfb995d5-2snm7\" (UID: \"00a1cd25-da5a-4ff5-b0d0-632a4ccdc0a3\") " pod="metallb-system/metallb-operator-webhook-server-85bfb995d5-2snm7" Nov 29 07:58:16 crc kubenswrapper[4795]: I1129 07:58:16.309467 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnf7r\" (UniqueName: \"kubernetes.io/projected/00a1cd25-da5a-4ff5-b0d0-632a4ccdc0a3-kube-api-access-qnf7r\") pod \"metallb-operator-webhook-server-85bfb995d5-2snm7\" (UID: \"00a1cd25-da5a-4ff5-b0d0-632a4ccdc0a3\") " pod="metallb-system/metallb-operator-webhook-server-85bfb995d5-2snm7" Nov 29 07:58:16 crc kubenswrapper[4795]: I1129 
07:58:16.309508 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/00a1cd25-da5a-4ff5-b0d0-632a4ccdc0a3-apiservice-cert\") pod \"metallb-operator-webhook-server-85bfb995d5-2snm7\" (UID: \"00a1cd25-da5a-4ff5-b0d0-632a4ccdc0a3\") " pod="metallb-system/metallb-operator-webhook-server-85bfb995d5-2snm7" Nov 29 07:58:16 crc kubenswrapper[4795]: I1129 07:58:16.412637 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnf7r\" (UniqueName: \"kubernetes.io/projected/00a1cd25-da5a-4ff5-b0d0-632a4ccdc0a3-kube-api-access-qnf7r\") pod \"metallb-operator-webhook-server-85bfb995d5-2snm7\" (UID: \"00a1cd25-da5a-4ff5-b0d0-632a4ccdc0a3\") " pod="metallb-system/metallb-operator-webhook-server-85bfb995d5-2snm7" Nov 29 07:58:16 crc kubenswrapper[4795]: I1129 07:58:16.412692 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/00a1cd25-da5a-4ff5-b0d0-632a4ccdc0a3-apiservice-cert\") pod \"metallb-operator-webhook-server-85bfb995d5-2snm7\" (UID: \"00a1cd25-da5a-4ff5-b0d0-632a4ccdc0a3\") " pod="metallb-system/metallb-operator-webhook-server-85bfb995d5-2snm7" Nov 29 07:58:16 crc kubenswrapper[4795]: I1129 07:58:16.412772 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/00a1cd25-da5a-4ff5-b0d0-632a4ccdc0a3-webhook-cert\") pod \"metallb-operator-webhook-server-85bfb995d5-2snm7\" (UID: \"00a1cd25-da5a-4ff5-b0d0-632a4ccdc0a3\") " pod="metallb-system/metallb-operator-webhook-server-85bfb995d5-2snm7" Nov 29 07:58:16 crc kubenswrapper[4795]: I1129 07:58:16.419342 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/00a1cd25-da5a-4ff5-b0d0-632a4ccdc0a3-apiservice-cert\") pod 
\"metallb-operator-webhook-server-85bfb995d5-2snm7\" (UID: \"00a1cd25-da5a-4ff5-b0d0-632a4ccdc0a3\") " pod="metallb-system/metallb-operator-webhook-server-85bfb995d5-2snm7" Nov 29 07:58:16 crc kubenswrapper[4795]: I1129 07:58:16.437174 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/00a1cd25-da5a-4ff5-b0d0-632a4ccdc0a3-webhook-cert\") pod \"metallb-operator-webhook-server-85bfb995d5-2snm7\" (UID: \"00a1cd25-da5a-4ff5-b0d0-632a4ccdc0a3\") " pod="metallb-system/metallb-operator-webhook-server-85bfb995d5-2snm7" Nov 29 07:58:16 crc kubenswrapper[4795]: I1129 07:58:16.446167 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnf7r\" (UniqueName: \"kubernetes.io/projected/00a1cd25-da5a-4ff5-b0d0-632a4ccdc0a3-kube-api-access-qnf7r\") pod \"metallb-operator-webhook-server-85bfb995d5-2snm7\" (UID: \"00a1cd25-da5a-4ff5-b0d0-632a4ccdc0a3\") " pod="metallb-system/metallb-operator-webhook-server-85bfb995d5-2snm7" Nov 29 07:58:16 crc kubenswrapper[4795]: I1129 07:58:16.467273 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-85bfb995d5-2snm7" Nov 29 07:58:16 crc kubenswrapper[4795]: I1129 07:58:16.795259 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-77967fb544-pfl5d"] Nov 29 07:58:16 crc kubenswrapper[4795]: W1129 07:58:16.802691 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88516493_98c7_4365_9293_73456d8d0913.slice/crio-573fcecdb60bb1ce649643c62d195a658bf1e7e3eacfabe4e396141db95d013e WatchSource:0}: Error finding container 573fcecdb60bb1ce649643c62d195a658bf1e7e3eacfabe4e396141db95d013e: Status 404 returned error can't find the container with id 573fcecdb60bb1ce649643c62d195a658bf1e7e3eacfabe4e396141db95d013e Nov 29 07:58:16 crc kubenswrapper[4795]: I1129 07:58:16.916472 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-85bfb995d5-2snm7"] Nov 29 07:58:16 crc kubenswrapper[4795]: W1129 07:58:16.920277 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00a1cd25_da5a_4ff5_b0d0_632a4ccdc0a3.slice/crio-6b1062a417ef4e3698e17768307e48247b3bb779741c95322e8d55c042a3c41a WatchSource:0}: Error finding container 6b1062a417ef4e3698e17768307e48247b3bb779741c95322e8d55c042a3c41a: Status 404 returned error can't find the container with id 6b1062a417ef4e3698e17768307e48247b3bb779741c95322e8d55c042a3c41a Nov 29 07:58:17 crc kubenswrapper[4795]: I1129 07:58:17.645756 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-85bfb995d5-2snm7" event={"ID":"00a1cd25-da5a-4ff5-b0d0-632a4ccdc0a3","Type":"ContainerStarted","Data":"6b1062a417ef4e3698e17768307e48247b3bb779741c95322e8d55c042a3c41a"} Nov 29 07:58:17 crc kubenswrapper[4795]: I1129 07:58:17.647102 4795 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="metallb-system/metallb-operator-controller-manager-77967fb544-pfl5d" event={"ID":"88516493-98c7-4365-9293-73456d8d0913","Type":"ContainerStarted","Data":"573fcecdb60bb1ce649643c62d195a658bf1e7e3eacfabe4e396141db95d013e"} Nov 29 07:58:20 crc kubenswrapper[4795]: I1129 07:58:20.674983 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-77967fb544-pfl5d" event={"ID":"88516493-98c7-4365-9293-73456d8d0913","Type":"ContainerStarted","Data":"653220d5f6a372226f0c2a7e030bdaa6cfa6fa5b228ade1ae97f5375feff5d43"} Nov 29 07:58:20 crc kubenswrapper[4795]: I1129 07:58:20.675837 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-77967fb544-pfl5d" Nov 29 07:58:20 crc kubenswrapper[4795]: I1129 07:58:20.697898 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-77967fb544-pfl5d" podStartSLOduration=2.458716601 podStartE2EDuration="5.69787707s" podCreationTimestamp="2025-11-29 07:58:15 +0000 UTC" firstStartedPulling="2025-11-29 07:58:16.807120539 +0000 UTC m=+1142.782696329" lastFinishedPulling="2025-11-29 07:58:20.046281008 +0000 UTC m=+1146.021856798" observedRunningTime="2025-11-29 07:58:20.695562804 +0000 UTC m=+1146.671138594" watchObservedRunningTime="2025-11-29 07:58:20.69787707 +0000 UTC m=+1146.673452870" Nov 29 07:58:24 crc kubenswrapper[4795]: I1129 07:58:24.703950 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-85bfb995d5-2snm7" event={"ID":"00a1cd25-da5a-4ff5-b0d0-632a4ccdc0a3","Type":"ContainerStarted","Data":"cc00410d926ec4da0b45a01ab5d3c3360bd1696924a07ed8900ec8d7cdfdabdc"} Nov 29 07:58:24 crc kubenswrapper[4795]: I1129 07:58:24.704547 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-85bfb995d5-2snm7" Nov 29 
07:58:24 crc kubenswrapper[4795]: I1129 07:58:24.722842 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-85bfb995d5-2snm7" podStartSLOduration=1.848719971 podStartE2EDuration="8.722824327s" podCreationTimestamp="2025-11-29 07:58:16 +0000 UTC" firstStartedPulling="2025-11-29 07:58:16.923463756 +0000 UTC m=+1142.899039536" lastFinishedPulling="2025-11-29 07:58:23.797568102 +0000 UTC m=+1149.773143892" observedRunningTime="2025-11-29 07:58:24.718821783 +0000 UTC m=+1150.694397583" watchObservedRunningTime="2025-11-29 07:58:24.722824327 +0000 UTC m=+1150.698400117" Nov 29 07:58:36 crc kubenswrapper[4795]: I1129 07:58:36.473513 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-85bfb995d5-2snm7" Nov 29 07:58:56 crc kubenswrapper[4795]: I1129 07:58:56.234193 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-77967fb544-pfl5d" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.013102 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-tkql9"] Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.016645 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-tkql9" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.022180 4795 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.022180 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.022544 4795 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-pv64j" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.030540 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-gbxmx"] Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.032640 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-gbxmx" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.034374 4795 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.047220 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-gbxmx"] Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.123910 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/166b46fd-e087-476d-9491-d173847e5fb9-frr-conf\") pod \"frr-k8s-tkql9\" (UID: \"166b46fd-e087-476d-9491-d173847e5fb9\") " pod="metallb-system/frr-k8s-tkql9" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.124019 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/166b46fd-e087-476d-9491-d173847e5fb9-metrics-certs\") pod \"frr-k8s-tkql9\" (UID: 
\"166b46fd-e087-476d-9491-d173847e5fb9\") " pod="metallb-system/frr-k8s-tkql9" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.124067 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/166b46fd-e087-476d-9491-d173847e5fb9-frr-startup\") pod \"frr-k8s-tkql9\" (UID: \"166b46fd-e087-476d-9491-d173847e5fb9\") " pod="metallb-system/frr-k8s-tkql9" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.124098 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/166b46fd-e087-476d-9491-d173847e5fb9-frr-sockets\") pod \"frr-k8s-tkql9\" (UID: \"166b46fd-e087-476d-9491-d173847e5fb9\") " pod="metallb-system/frr-k8s-tkql9" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.124120 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skv4k\" (UniqueName: \"kubernetes.io/projected/166b46fd-e087-476d-9491-d173847e5fb9-kube-api-access-skv4k\") pod \"frr-k8s-tkql9\" (UID: \"166b46fd-e087-476d-9491-d173847e5fb9\") " pod="metallb-system/frr-k8s-tkql9" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.124140 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/166b46fd-e087-476d-9491-d173847e5fb9-metrics\") pod \"frr-k8s-tkql9\" (UID: \"166b46fd-e087-476d-9491-d173847e5fb9\") " pod="metallb-system/frr-k8s-tkql9" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.124167 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/166b46fd-e087-476d-9491-d173847e5fb9-reloader\") pod \"frr-k8s-tkql9\" (UID: \"166b46fd-e087-476d-9491-d173847e5fb9\") " pod="metallb-system/frr-k8s-tkql9" Nov 29 07:58:57 crc 
kubenswrapper[4795]: I1129 07:58:57.125486 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-9ckmx"] Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.126731 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-9ckmx" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.128819 4795 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.129313 4795 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.129366 4795 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-7bvlr" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.129367 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.134037 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-wc858"] Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.135206 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-wc858" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.136889 4795 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.160261 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-wc858"] Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.225569 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/166b46fd-e087-476d-9491-d173847e5fb9-reloader\") pod \"frr-k8s-tkql9\" (UID: \"166b46fd-e087-476d-9491-d173847e5fb9\") " pod="metallb-system/frr-k8s-tkql9" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.225634 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cc6bc09a-5187-429a-8f93-1f57bb5cd0d0-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-gbxmx\" (UID: \"cc6bc09a-5187-429a-8f93-1f57bb5cd0d0\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-gbxmx" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.225667 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr87j\" (UniqueName: \"kubernetes.io/projected/cc6bc09a-5187-429a-8f93-1f57bb5cd0d0-kube-api-access-pr87j\") pod \"frr-k8s-webhook-server-7fcb986d4-gbxmx\" (UID: \"cc6bc09a-5187-429a-8f93-1f57bb5cd0d0\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-gbxmx" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.225696 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/166b46fd-e087-476d-9491-d173847e5fb9-frr-conf\") pod \"frr-k8s-tkql9\" (UID: \"166b46fd-e087-476d-9491-d173847e5fb9\") " pod="metallb-system/frr-k8s-tkql9" Nov 29 07:58:57 crc 
kubenswrapper[4795]: I1129 07:58:57.226390 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/166b46fd-e087-476d-9491-d173847e5fb9-metrics-certs\") pod \"frr-k8s-tkql9\" (UID: \"166b46fd-e087-476d-9491-d173847e5fb9\") " pod="metallb-system/frr-k8s-tkql9" Nov 29 07:58:57 crc kubenswrapper[4795]: E1129 07:58:57.226505 4795 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.226424 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/166b46fd-e087-476d-9491-d173847e5fb9-frr-conf\") pod \"frr-k8s-tkql9\" (UID: \"166b46fd-e087-476d-9491-d173847e5fb9\") " pod="metallb-system/frr-k8s-tkql9" Nov 29 07:58:57 crc kubenswrapper[4795]: E1129 07:58:57.226566 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/166b46fd-e087-476d-9491-d173847e5fb9-metrics-certs podName:166b46fd-e087-476d-9491-d173847e5fb9 nodeName:}" failed. No retries permitted until 2025-11-29 07:58:57.726548201 +0000 UTC m=+1183.702124001 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/166b46fd-e087-476d-9491-d173847e5fb9-metrics-certs") pod "frr-k8s-tkql9" (UID: "166b46fd-e087-476d-9491-d173847e5fb9") : secret "frr-k8s-certs-secret" not found Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.226605 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/166b46fd-e087-476d-9491-d173847e5fb9-reloader\") pod \"frr-k8s-tkql9\" (UID: \"166b46fd-e087-476d-9491-d173847e5fb9\") " pod="metallb-system/frr-k8s-tkql9" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.226618 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/166b46fd-e087-476d-9491-d173847e5fb9-frr-startup\") pod \"frr-k8s-tkql9\" (UID: \"166b46fd-e087-476d-9491-d173847e5fb9\") " pod="metallb-system/frr-k8s-tkql9" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.226886 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/166b46fd-e087-476d-9491-d173847e5fb9-frr-sockets\") pod \"frr-k8s-tkql9\" (UID: \"166b46fd-e087-476d-9491-d173847e5fb9\") " pod="metallb-system/frr-k8s-tkql9" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.226965 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skv4k\" (UniqueName: \"kubernetes.io/projected/166b46fd-e087-476d-9491-d173847e5fb9-kube-api-access-skv4k\") pod \"frr-k8s-tkql9\" (UID: \"166b46fd-e087-476d-9491-d173847e5fb9\") " pod="metallb-system/frr-k8s-tkql9" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.227096 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/166b46fd-e087-476d-9491-d173847e5fb9-frr-sockets\") pod \"frr-k8s-tkql9\" (UID: 
\"166b46fd-e087-476d-9491-d173847e5fb9\") " pod="metallb-system/frr-k8s-tkql9" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.227571 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/166b46fd-e087-476d-9491-d173847e5fb9-metrics\") pod \"frr-k8s-tkql9\" (UID: \"166b46fd-e087-476d-9491-d173847e5fb9\") " pod="metallb-system/frr-k8s-tkql9" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.227718 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/166b46fd-e087-476d-9491-d173847e5fb9-frr-startup\") pod \"frr-k8s-tkql9\" (UID: \"166b46fd-e087-476d-9491-d173847e5fb9\") " pod="metallb-system/frr-k8s-tkql9" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.229799 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/166b46fd-e087-476d-9491-d173847e5fb9-metrics\") pod \"frr-k8s-tkql9\" (UID: \"166b46fd-e087-476d-9491-d173847e5fb9\") " pod="metallb-system/frr-k8s-tkql9" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.269666 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skv4k\" (UniqueName: \"kubernetes.io/projected/166b46fd-e087-476d-9491-d173847e5fb9-kube-api-access-skv4k\") pod \"frr-k8s-tkql9\" (UID: \"166b46fd-e087-476d-9491-d173847e5fb9\") " pod="metallb-system/frr-k8s-tkql9" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.332834 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c433634f-86e7-44a7-9dfa-e0d09a1f5747-metallb-excludel2\") pod \"speaker-9ckmx\" (UID: \"c433634f-86e7-44a7-9dfa-e0d09a1f5747\") " pod="metallb-system/speaker-9ckmx" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.333122 4795 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c433634f-86e7-44a7-9dfa-e0d09a1f5747-memberlist\") pod \"speaker-9ckmx\" (UID: \"c433634f-86e7-44a7-9dfa-e0d09a1f5747\") " pod="metallb-system/speaker-9ckmx" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.333234 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgttl\" (UniqueName: \"kubernetes.io/projected/402aa6f5-7950-4290-ab83-bd5bafa2a8d7-kube-api-access-jgttl\") pod \"controller-f8648f98b-wc858\" (UID: \"402aa6f5-7950-4290-ab83-bd5bafa2a8d7\") " pod="metallb-system/controller-f8648f98b-wc858" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.333366 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqt9q\" (UniqueName: \"kubernetes.io/projected/c433634f-86e7-44a7-9dfa-e0d09a1f5747-kube-api-access-lqt9q\") pod \"speaker-9ckmx\" (UID: \"c433634f-86e7-44a7-9dfa-e0d09a1f5747\") " pod="metallb-system/speaker-9ckmx" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.333483 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c433634f-86e7-44a7-9dfa-e0d09a1f5747-metrics-certs\") pod \"speaker-9ckmx\" (UID: \"c433634f-86e7-44a7-9dfa-e0d09a1f5747\") " pod="metallb-system/speaker-9ckmx" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.333699 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/402aa6f5-7950-4290-ab83-bd5bafa2a8d7-cert\") pod \"controller-f8648f98b-wc858\" (UID: \"402aa6f5-7950-4290-ab83-bd5bafa2a8d7\") " pod="metallb-system/controller-f8648f98b-wc858" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.333799 4795 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/402aa6f5-7950-4290-ab83-bd5bafa2a8d7-metrics-certs\") pod \"controller-f8648f98b-wc858\" (UID: \"402aa6f5-7950-4290-ab83-bd5bafa2a8d7\") " pod="metallb-system/controller-f8648f98b-wc858" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.333929 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cc6bc09a-5187-429a-8f93-1f57bb5cd0d0-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-gbxmx\" (UID: \"cc6bc09a-5187-429a-8f93-1f57bb5cd0d0\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-gbxmx" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.334027 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr87j\" (UniqueName: \"kubernetes.io/projected/cc6bc09a-5187-429a-8f93-1f57bb5cd0d0-kube-api-access-pr87j\") pod \"frr-k8s-webhook-server-7fcb986d4-gbxmx\" (UID: \"cc6bc09a-5187-429a-8f93-1f57bb5cd0d0\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-gbxmx" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.345293 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cc6bc09a-5187-429a-8f93-1f57bb5cd0d0-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-gbxmx\" (UID: \"cc6bc09a-5187-429a-8f93-1f57bb5cd0d0\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-gbxmx" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.353400 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr87j\" (UniqueName: \"kubernetes.io/projected/cc6bc09a-5187-429a-8f93-1f57bb5cd0d0-kube-api-access-pr87j\") pod \"frr-k8s-webhook-server-7fcb986d4-gbxmx\" (UID: \"cc6bc09a-5187-429a-8f93-1f57bb5cd0d0\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-gbxmx" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.357337 4795 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-gbxmx" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.435358 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c433634f-86e7-44a7-9dfa-e0d09a1f5747-metallb-excludel2\") pod \"speaker-9ckmx\" (UID: \"c433634f-86e7-44a7-9dfa-e0d09a1f5747\") " pod="metallb-system/speaker-9ckmx" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.435738 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c433634f-86e7-44a7-9dfa-e0d09a1f5747-memberlist\") pod \"speaker-9ckmx\" (UID: \"c433634f-86e7-44a7-9dfa-e0d09a1f5747\") " pod="metallb-system/speaker-9ckmx" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.435860 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgttl\" (UniqueName: \"kubernetes.io/projected/402aa6f5-7950-4290-ab83-bd5bafa2a8d7-kube-api-access-jgttl\") pod \"controller-f8648f98b-wc858\" (UID: \"402aa6f5-7950-4290-ab83-bd5bafa2a8d7\") " pod="metallb-system/controller-f8648f98b-wc858" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.435970 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqt9q\" (UniqueName: \"kubernetes.io/projected/c433634f-86e7-44a7-9dfa-e0d09a1f5747-kube-api-access-lqt9q\") pod \"speaker-9ckmx\" (UID: \"c433634f-86e7-44a7-9dfa-e0d09a1f5747\") " pod="metallb-system/speaker-9ckmx" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.436095 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c433634f-86e7-44a7-9dfa-e0d09a1f5747-metrics-certs\") pod \"speaker-9ckmx\" (UID: \"c433634f-86e7-44a7-9dfa-e0d09a1f5747\") " pod="metallb-system/speaker-9ckmx" Nov 
29 07:58:57 crc kubenswrapper[4795]: E1129 07:58:57.436116 4795 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.436027 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c433634f-86e7-44a7-9dfa-e0d09a1f5747-metallb-excludel2\") pod \"speaker-9ckmx\" (UID: \"c433634f-86e7-44a7-9dfa-e0d09a1f5747\") " pod="metallb-system/speaker-9ckmx" Nov 29 07:58:57 crc kubenswrapper[4795]: E1129 07:58:57.436177 4795 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.436233 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/402aa6f5-7950-4290-ab83-bd5bafa2a8d7-cert\") pod \"controller-f8648f98b-wc858\" (UID: \"402aa6f5-7950-4290-ab83-bd5bafa2a8d7\") " pod="metallb-system/controller-f8648f98b-wc858" Nov 29 07:58:57 crc kubenswrapper[4795]: E1129 07:58:57.436291 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c433634f-86e7-44a7-9dfa-e0d09a1f5747-memberlist podName:c433634f-86e7-44a7-9dfa-e0d09a1f5747 nodeName:}" failed. No retries permitted until 2025-11-29 07:58:57.936274242 +0000 UTC m=+1183.911850032 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c433634f-86e7-44a7-9dfa-e0d09a1f5747-memberlist") pod "speaker-9ckmx" (UID: "c433634f-86e7-44a7-9dfa-e0d09a1f5747") : secret "metallb-memberlist" not found Nov 29 07:58:57 crc kubenswrapper[4795]: E1129 07:58:57.436442 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c433634f-86e7-44a7-9dfa-e0d09a1f5747-metrics-certs podName:c433634f-86e7-44a7-9dfa-e0d09a1f5747 nodeName:}" failed. 
No retries permitted until 2025-11-29 07:58:57.936382365 +0000 UTC m=+1183.911958155 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c433634f-86e7-44a7-9dfa-e0d09a1f5747-metrics-certs") pod "speaker-9ckmx" (UID: "c433634f-86e7-44a7-9dfa-e0d09a1f5747") : secret "speaker-certs-secret" not found Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.436508 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/402aa6f5-7950-4290-ab83-bd5bafa2a8d7-metrics-certs\") pod \"controller-f8648f98b-wc858\" (UID: \"402aa6f5-7950-4290-ab83-bd5bafa2a8d7\") " pod="metallb-system/controller-f8648f98b-wc858" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.441003 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/402aa6f5-7950-4290-ab83-bd5bafa2a8d7-cert\") pod \"controller-f8648f98b-wc858\" (UID: \"402aa6f5-7950-4290-ab83-bd5bafa2a8d7\") " pod="metallb-system/controller-f8648f98b-wc858" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.443943 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/402aa6f5-7950-4290-ab83-bd5bafa2a8d7-metrics-certs\") pod \"controller-f8648f98b-wc858\" (UID: \"402aa6f5-7950-4290-ab83-bd5bafa2a8d7\") " pod="metallb-system/controller-f8648f98b-wc858" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.461184 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqt9q\" (UniqueName: \"kubernetes.io/projected/c433634f-86e7-44a7-9dfa-e0d09a1f5747-kube-api-access-lqt9q\") pod \"speaker-9ckmx\" (UID: \"c433634f-86e7-44a7-9dfa-e0d09a1f5747\") " pod="metallb-system/speaker-9ckmx" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.483386 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jgttl\" (UniqueName: \"kubernetes.io/projected/402aa6f5-7950-4290-ab83-bd5bafa2a8d7-kube-api-access-jgttl\") pod \"controller-f8648f98b-wc858\" (UID: \"402aa6f5-7950-4290-ab83-bd5bafa2a8d7\") " pod="metallb-system/controller-f8648f98b-wc858" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.744344 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/166b46fd-e087-476d-9491-d173847e5fb9-metrics-certs\") pod \"frr-k8s-tkql9\" (UID: \"166b46fd-e087-476d-9491-d173847e5fb9\") " pod="metallb-system/frr-k8s-tkql9" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.753128 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/166b46fd-e087-476d-9491-d173847e5fb9-metrics-certs\") pod \"frr-k8s-tkql9\" (UID: \"166b46fd-e087-476d-9491-d173847e5fb9\") " pod="metallb-system/frr-k8s-tkql9" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.754950 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-wc858" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.895191 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-gbxmx"] Nov 29 07:58:57 crc kubenswrapper[4795]: W1129 07:58:57.914263 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc6bc09a_5187_429a_8f93_1f57bb5cd0d0.slice/crio-2a6f081036e2b31a089c9ca0714a1a60fcbc5855b68a8847625c16e754110664 WatchSource:0}: Error finding container 2a6f081036e2b31a089c9ca0714a1a60fcbc5855b68a8847625c16e754110664: Status 404 returned error can't find the container with id 2a6f081036e2b31a089c9ca0714a1a60fcbc5855b68a8847625c16e754110664 Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.946118 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-tkql9" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.947548 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c433634f-86e7-44a7-9dfa-e0d09a1f5747-memberlist\") pod \"speaker-9ckmx\" (UID: \"c433634f-86e7-44a7-9dfa-e0d09a1f5747\") " pod="metallb-system/speaker-9ckmx" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.947630 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c433634f-86e7-44a7-9dfa-e0d09a1f5747-metrics-certs\") pod \"speaker-9ckmx\" (UID: \"c433634f-86e7-44a7-9dfa-e0d09a1f5747\") " pod="metallb-system/speaker-9ckmx" Nov 29 07:58:57 crc kubenswrapper[4795]: E1129 07:58:57.947729 4795 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 29 07:58:57 crc kubenswrapper[4795]: E1129 07:58:57.947771 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c433634f-86e7-44a7-9dfa-e0d09a1f5747-memberlist podName:c433634f-86e7-44a7-9dfa-e0d09a1f5747 nodeName:}" failed. No retries permitted until 2025-11-29 07:58:58.947756236 +0000 UTC m=+1184.923332026 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c433634f-86e7-44a7-9dfa-e0d09a1f5747-memberlist") pod "speaker-9ckmx" (UID: "c433634f-86e7-44a7-9dfa-e0d09a1f5747") : secret "metallb-memberlist" not found Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.952324 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c433634f-86e7-44a7-9dfa-e0d09a1f5747-metrics-certs\") pod \"speaker-9ckmx\" (UID: \"c433634f-86e7-44a7-9dfa-e0d09a1f5747\") " pod="metallb-system/speaker-9ckmx" Nov 29 07:58:57 crc kubenswrapper[4795]: I1129 07:58:57.962043 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-gbxmx" event={"ID":"cc6bc09a-5187-429a-8f93-1f57bb5cd0d0","Type":"ContainerStarted","Data":"2a6f081036e2b31a089c9ca0714a1a60fcbc5855b68a8847625c16e754110664"} Nov 29 07:58:58 crc kubenswrapper[4795]: I1129 07:58:58.208941 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-wc858"] Nov 29 07:58:58 crc kubenswrapper[4795]: W1129 07:58:58.213340 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod402aa6f5_7950_4290_ab83_bd5bafa2a8d7.slice/crio-2738aa66dee725aed9e4bffefecffbf37ac2bb42ad158e92dbab6dde46bfdc39 WatchSource:0}: Error finding container 2738aa66dee725aed9e4bffefecffbf37ac2bb42ad158e92dbab6dde46bfdc39: Status 404 returned error can't find the container with id 2738aa66dee725aed9e4bffefecffbf37ac2bb42ad158e92dbab6dde46bfdc39 Nov 29 07:58:58 crc kubenswrapper[4795]: I1129 07:58:58.970788 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c433634f-86e7-44a7-9dfa-e0d09a1f5747-memberlist\") pod \"speaker-9ckmx\" (UID: \"c433634f-86e7-44a7-9dfa-e0d09a1f5747\") " pod="metallb-system/speaker-9ckmx" Nov 29 
07:58:58 crc kubenswrapper[4795]: I1129 07:58:58.972055 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-wc858" event={"ID":"402aa6f5-7950-4290-ab83-bd5bafa2a8d7","Type":"ContainerStarted","Data":"fe57f0136c0ae570305d250890124ad54a73693fb91a18c5bf0a9a7234b565cb"} Nov 29 07:58:58 crc kubenswrapper[4795]: I1129 07:58:58.972298 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-wc858" Nov 29 07:58:58 crc kubenswrapper[4795]: I1129 07:58:58.972459 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-wc858" event={"ID":"402aa6f5-7950-4290-ab83-bd5bafa2a8d7","Type":"ContainerStarted","Data":"ca46f94b5adcc4ee2be0eebfd98f7d9b6f902e4eeb2d1edd5fa1a8397af26019"} Nov 29 07:58:58 crc kubenswrapper[4795]: I1129 07:58:58.972726 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-wc858" event={"ID":"402aa6f5-7950-4290-ab83-bd5bafa2a8d7","Type":"ContainerStarted","Data":"2738aa66dee725aed9e4bffefecffbf37ac2bb42ad158e92dbab6dde46bfdc39"} Nov 29 07:58:58 crc kubenswrapper[4795]: I1129 07:58:58.973234 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tkql9" event={"ID":"166b46fd-e087-476d-9491-d173847e5fb9","Type":"ContainerStarted","Data":"9c0e8c6b6acb08886df2c90e2ed322960362747205c7147518dbc87f2e89cc3c"} Nov 29 07:58:58 crc kubenswrapper[4795]: I1129 07:58:58.983652 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c433634f-86e7-44a7-9dfa-e0d09a1f5747-memberlist\") pod \"speaker-9ckmx\" (UID: \"c433634f-86e7-44a7-9dfa-e0d09a1f5747\") " pod="metallb-system/speaker-9ckmx" Nov 29 07:58:58 crc kubenswrapper[4795]: I1129 07:58:58.992092 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-wc858" podStartSLOduration=1.992064157 
podStartE2EDuration="1.992064157s" podCreationTimestamp="2025-11-29 07:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:58:58.987503197 +0000 UTC m=+1184.963079007" watchObservedRunningTime="2025-11-29 07:58:58.992064157 +0000 UTC m=+1184.967639947" Nov 29 07:58:59 crc kubenswrapper[4795]: I1129 07:58:59.244497 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-9ckmx" Nov 29 07:58:59 crc kubenswrapper[4795]: I1129 07:58:59.983558 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-9ckmx" event={"ID":"c433634f-86e7-44a7-9dfa-e0d09a1f5747","Type":"ContainerStarted","Data":"3261798852d8a9c9ed8c2a2c0831f070ccdd5ae2add219ddff664c1c04328a22"} Nov 29 07:58:59 crc kubenswrapper[4795]: I1129 07:58:59.983984 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-9ckmx" event={"ID":"c433634f-86e7-44a7-9dfa-e0d09a1f5747","Type":"ContainerStarted","Data":"60fa0ad32814f6b043d16f53e85d7c3033cfa10b95437365f4d9069b95b4c9b8"} Nov 29 07:59:00 crc kubenswrapper[4795]: I1129 07:59:00.997047 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-9ckmx" event={"ID":"c433634f-86e7-44a7-9dfa-e0d09a1f5747","Type":"ContainerStarted","Data":"7d606b08700aa505a11e6e32ae2c466cb728ef576fdf6f582e5d264312ab1583"} Nov 29 07:59:00 crc kubenswrapper[4795]: I1129 07:59:00.998005 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-9ckmx" Nov 29 07:59:01 crc kubenswrapper[4795]: I1129 07:59:01.019043 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-9ckmx" podStartSLOduration=4.019028719 podStartE2EDuration="4.019028719s" podCreationTimestamp="2025-11-29 07:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-11-29 07:59:01.016774985 +0000 UTC m=+1186.992350775" watchObservedRunningTime="2025-11-29 07:59:01.019028719 +0000 UTC m=+1186.994604509" Nov 29 07:59:06 crc kubenswrapper[4795]: I1129 07:59:06.031472 4795 generic.go:334] "Generic (PLEG): container finished" podID="166b46fd-e087-476d-9491-d173847e5fb9" containerID="387a6722b012f33c5a9696df5883bc9388a48b91fed9b02779decc0e30cfdc07" exitCode=0 Nov 29 07:59:06 crc kubenswrapper[4795]: I1129 07:59:06.031582 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tkql9" event={"ID":"166b46fd-e087-476d-9491-d173847e5fb9","Type":"ContainerDied","Data":"387a6722b012f33c5a9696df5883bc9388a48b91fed9b02779decc0e30cfdc07"} Nov 29 07:59:06 crc kubenswrapper[4795]: I1129 07:59:06.033985 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-gbxmx" event={"ID":"cc6bc09a-5187-429a-8f93-1f57bb5cd0d0","Type":"ContainerStarted","Data":"76a7e71045a4cdcec67d31dff7332fbd80f60b454a6ac03382627b347e832a43"} Nov 29 07:59:06 crc kubenswrapper[4795]: I1129 07:59:06.034140 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-gbxmx" Nov 29 07:59:06 crc kubenswrapper[4795]: I1129 07:59:06.067163 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-gbxmx" podStartSLOduration=2.257246342 podStartE2EDuration="10.067147063s" podCreationTimestamp="2025-11-29 07:58:56 +0000 UTC" firstStartedPulling="2025-11-29 07:58:57.922742523 +0000 UTC m=+1183.898318313" lastFinishedPulling="2025-11-29 07:59:05.732643244 +0000 UTC m=+1191.708219034" observedRunningTime="2025-11-29 07:59:06.066506325 +0000 UTC m=+1192.042082115" watchObservedRunningTime="2025-11-29 07:59:06.067147063 +0000 UTC m=+1192.042722843" Nov 29 07:59:07 crc kubenswrapper[4795]: I1129 07:59:07.042989 4795 generic.go:334] "Generic (PLEG): 
container finished" podID="166b46fd-e087-476d-9491-d173847e5fb9" containerID="53d9b33cb2d199a9dfe693cf96ec367f84a85c3d6bb756c9d15a5ba9cf5a5858" exitCode=0 Nov 29 07:59:07 crc kubenswrapper[4795]: I1129 07:59:07.043052 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tkql9" event={"ID":"166b46fd-e087-476d-9491-d173847e5fb9","Type":"ContainerDied","Data":"53d9b33cb2d199a9dfe693cf96ec367f84a85c3d6bb756c9d15a5ba9cf5a5858"} Nov 29 07:59:08 crc kubenswrapper[4795]: I1129 07:59:08.050686 4795 generic.go:334] "Generic (PLEG): container finished" podID="166b46fd-e087-476d-9491-d173847e5fb9" containerID="c77ccc1a0422ab73f02e2053ac73818d5213d4924d39fe129e64e071c5c0daf5" exitCode=0 Nov 29 07:59:08 crc kubenswrapper[4795]: I1129 07:59:08.050802 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tkql9" event={"ID":"166b46fd-e087-476d-9491-d173847e5fb9","Type":"ContainerDied","Data":"c77ccc1a0422ab73f02e2053ac73818d5213d4924d39fe129e64e071c5c0daf5"} Nov 29 07:59:09 crc kubenswrapper[4795]: I1129 07:59:09.062524 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tkql9" event={"ID":"166b46fd-e087-476d-9491-d173847e5fb9","Type":"ContainerStarted","Data":"caffb9f6940470db79c424af691c9302179f08a878096b0d4522b0a57cfa1c68"} Nov 29 07:59:09 crc kubenswrapper[4795]: I1129 07:59:09.062855 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tkql9" event={"ID":"166b46fd-e087-476d-9491-d173847e5fb9","Type":"ContainerStarted","Data":"6e3835023f02c0d91f3fc19a03c472c96e99d42af5e1c635827a672450f2d992"} Nov 29 07:59:09 crc kubenswrapper[4795]: I1129 07:59:09.062868 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tkql9" event={"ID":"166b46fd-e087-476d-9491-d173847e5fb9","Type":"ContainerStarted","Data":"0e8964c2e50bdf442d04ddbf5e972925177679d08fa09889a944368c154a0624"} Nov 29 07:59:09 crc kubenswrapper[4795]: I1129 07:59:09.062876 4795 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tkql9" event={"ID":"166b46fd-e087-476d-9491-d173847e5fb9","Type":"ContainerStarted","Data":"05b0b3cadfe8fb1630484fa0c5da087117bb87e477b346ddd7d050bf24682c2e"} Nov 29 07:59:09 crc kubenswrapper[4795]: I1129 07:59:09.062884 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tkql9" event={"ID":"166b46fd-e087-476d-9491-d173847e5fb9","Type":"ContainerStarted","Data":"4313fcdb8b8489a3768ad63fc9703fd3e5a819c995ffc3fd7e917760c61c35e0"} Nov 29 07:59:09 crc kubenswrapper[4795]: I1129 07:59:09.248225 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-9ckmx" Nov 29 07:59:10 crc kubenswrapper[4795]: I1129 07:59:10.074638 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tkql9" event={"ID":"166b46fd-e087-476d-9491-d173847e5fb9","Type":"ContainerStarted","Data":"78525d3aa283e4350acaad03ff0e50f87e0bb84317a107d3aae9a94553c49523"} Nov 29 07:59:10 crc kubenswrapper[4795]: I1129 07:59:10.075033 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-tkql9" Nov 29 07:59:10 crc kubenswrapper[4795]: I1129 07:59:10.100796 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-tkql9" podStartSLOduration=6.438277911 podStartE2EDuration="14.100776029s" podCreationTimestamp="2025-11-29 07:58:56 +0000 UTC" firstStartedPulling="2025-11-29 07:58:58.090979611 +0000 UTC m=+1184.066555401" lastFinishedPulling="2025-11-29 07:59:05.753477729 +0000 UTC m=+1191.729053519" observedRunningTime="2025-11-29 07:59:10.096542458 +0000 UTC m=+1196.072118258" watchObservedRunningTime="2025-11-29 07:59:10.100776029 +0000 UTC m=+1196.076351819" Nov 29 07:59:12 crc kubenswrapper[4795]: I1129 07:59:12.081788 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-hbqdt"] Nov 29 07:59:12 crc 
kubenswrapper[4795]: I1129 07:59:12.083413 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-hbqdt" Nov 29 07:59:12 crc kubenswrapper[4795]: I1129 07:59:12.085421 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 29 07:59:12 crc kubenswrapper[4795]: I1129 07:59:12.085476 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-rhb9h" Nov 29 07:59:12 crc kubenswrapper[4795]: I1129 07:59:12.085873 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 29 07:59:12 crc kubenswrapper[4795]: I1129 07:59:12.094186 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-hbqdt"] Nov 29 07:59:12 crc kubenswrapper[4795]: I1129 07:59:12.096605 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzp24\" (UniqueName: \"kubernetes.io/projected/e6eeb242-a7dd-46ed-bdc4-490eeb6e77ad-kube-api-access-zzp24\") pod \"openstack-operator-index-hbqdt\" (UID: \"e6eeb242-a7dd-46ed-bdc4-490eeb6e77ad\") " pod="openstack-operators/openstack-operator-index-hbqdt" Nov 29 07:59:12 crc kubenswrapper[4795]: I1129 07:59:12.198127 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzp24\" (UniqueName: \"kubernetes.io/projected/e6eeb242-a7dd-46ed-bdc4-490eeb6e77ad-kube-api-access-zzp24\") pod \"openstack-operator-index-hbqdt\" (UID: \"e6eeb242-a7dd-46ed-bdc4-490eeb6e77ad\") " pod="openstack-operators/openstack-operator-index-hbqdt" Nov 29 07:59:12 crc kubenswrapper[4795]: I1129 07:59:12.228649 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzp24\" (UniqueName: 
\"kubernetes.io/projected/e6eeb242-a7dd-46ed-bdc4-490eeb6e77ad-kube-api-access-zzp24\") pod \"openstack-operator-index-hbqdt\" (UID: \"e6eeb242-a7dd-46ed-bdc4-490eeb6e77ad\") " pod="openstack-operators/openstack-operator-index-hbqdt" Nov 29 07:59:12 crc kubenswrapper[4795]: I1129 07:59:12.414886 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-hbqdt" Nov 29 07:59:12 crc kubenswrapper[4795]: I1129 07:59:12.838118 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-hbqdt"] Nov 29 07:59:12 crc kubenswrapper[4795]: I1129 07:59:12.949173 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-tkql9" Nov 29 07:59:13 crc kubenswrapper[4795]: I1129 07:59:13.009665 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-tkql9" Nov 29 07:59:13 crc kubenswrapper[4795]: I1129 07:59:13.096478 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hbqdt" event={"ID":"e6eeb242-a7dd-46ed-bdc4-490eeb6e77ad","Type":"ContainerStarted","Data":"c2f902f9d1354ebe2d514e4c6ee459cc834557ef226975811e66b347963ce021"} Nov 29 07:59:15 crc kubenswrapper[4795]: I1129 07:59:15.260997 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-hbqdt"] Nov 29 07:59:15 crc kubenswrapper[4795]: I1129 07:59:15.868612 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-fc66n"] Nov 29 07:59:15 crc kubenswrapper[4795]: I1129 07:59:15.869737 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-fc66n" Nov 29 07:59:15 crc kubenswrapper[4795]: I1129 07:59:15.894003 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-fc66n"] Nov 29 07:59:15 crc kubenswrapper[4795]: I1129 07:59:15.973797 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9grqj\" (UniqueName: \"kubernetes.io/projected/cf43b8b5-a117-4ed8-853b-869086fd5197-kube-api-access-9grqj\") pod \"openstack-operator-index-fc66n\" (UID: \"cf43b8b5-a117-4ed8-853b-869086fd5197\") " pod="openstack-operators/openstack-operator-index-fc66n" Nov 29 07:59:16 crc kubenswrapper[4795]: I1129 07:59:16.075792 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9grqj\" (UniqueName: \"kubernetes.io/projected/cf43b8b5-a117-4ed8-853b-869086fd5197-kube-api-access-9grqj\") pod \"openstack-operator-index-fc66n\" (UID: \"cf43b8b5-a117-4ed8-853b-869086fd5197\") " pod="openstack-operators/openstack-operator-index-fc66n" Nov 29 07:59:16 crc kubenswrapper[4795]: I1129 07:59:16.094765 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9grqj\" (UniqueName: \"kubernetes.io/projected/cf43b8b5-a117-4ed8-853b-869086fd5197-kube-api-access-9grqj\") pod \"openstack-operator-index-fc66n\" (UID: \"cf43b8b5-a117-4ed8-853b-869086fd5197\") " pod="openstack-operators/openstack-operator-index-fc66n" Nov 29 07:59:16 crc kubenswrapper[4795]: I1129 07:59:16.222503 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-fc66n" Nov 29 07:59:16 crc kubenswrapper[4795]: I1129 07:59:16.609945 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-fc66n"] Nov 29 07:59:17 crc kubenswrapper[4795]: I1129 07:59:17.131251 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hbqdt" event={"ID":"e6eeb242-a7dd-46ed-bdc4-490eeb6e77ad","Type":"ContainerStarted","Data":"693bf8390419c7c70c2663b669c301fa462cf400d4df2183abe2bf4b835a7d9b"} Nov 29 07:59:17 crc kubenswrapper[4795]: I1129 07:59:17.131337 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-hbqdt" podUID="e6eeb242-a7dd-46ed-bdc4-490eeb6e77ad" containerName="registry-server" containerID="cri-o://693bf8390419c7c70c2663b669c301fa462cf400d4df2183abe2bf4b835a7d9b" gracePeriod=2 Nov 29 07:59:17 crc kubenswrapper[4795]: I1129 07:59:17.132717 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fc66n" event={"ID":"cf43b8b5-a117-4ed8-853b-869086fd5197","Type":"ContainerStarted","Data":"7ac54412fb58b00df08bfdcf494693e52b9336ad3221726fda4e188ddbd12120"} Nov 29 07:59:17 crc kubenswrapper[4795]: I1129 07:59:17.132740 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fc66n" event={"ID":"cf43b8b5-a117-4ed8-853b-869086fd5197","Type":"ContainerStarted","Data":"19832f20a6ce71b6a25850c8bb9d2bfc409f1a419fe13b3d5bd5403130185223"} Nov 29 07:59:17 crc kubenswrapper[4795]: I1129 07:59:17.150559 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-hbqdt" podStartSLOduration=1.998440638 podStartE2EDuration="5.150544185s" podCreationTimestamp="2025-11-29 07:59:12 +0000 UTC" firstStartedPulling="2025-11-29 07:59:12.861747162 +0000 UTC 
m=+1198.837322952" lastFinishedPulling="2025-11-29 07:59:16.013850699 +0000 UTC m=+1201.989426499" observedRunningTime="2025-11-29 07:59:17.147087936 +0000 UTC m=+1203.122663746" watchObservedRunningTime="2025-11-29 07:59:17.150544185 +0000 UTC m=+1203.126119975" Nov 29 07:59:17 crc kubenswrapper[4795]: I1129 07:59:17.163742 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-fc66n" podStartSLOduration=2.067050293 podStartE2EDuration="2.16372066s" podCreationTimestamp="2025-11-29 07:59:15 +0000 UTC" firstStartedPulling="2025-11-29 07:59:16.618228444 +0000 UTC m=+1202.593804234" lastFinishedPulling="2025-11-29 07:59:16.714898811 +0000 UTC m=+1202.690474601" observedRunningTime="2025-11-29 07:59:17.161870397 +0000 UTC m=+1203.137446197" watchObservedRunningTime="2025-11-29 07:59:17.16372066 +0000 UTC m=+1203.139296450" Nov 29 07:59:17 crc kubenswrapper[4795]: I1129 07:59:17.372544 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-gbxmx" Nov 29 07:59:17 crc kubenswrapper[4795]: I1129 07:59:17.558476 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-hbqdt" Nov 29 07:59:17 crc kubenswrapper[4795]: I1129 07:59:17.699735 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzp24\" (UniqueName: \"kubernetes.io/projected/e6eeb242-a7dd-46ed-bdc4-490eeb6e77ad-kube-api-access-zzp24\") pod \"e6eeb242-a7dd-46ed-bdc4-490eeb6e77ad\" (UID: \"e6eeb242-a7dd-46ed-bdc4-490eeb6e77ad\") " Nov 29 07:59:17 crc kubenswrapper[4795]: I1129 07:59:17.705312 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6eeb242-a7dd-46ed-bdc4-490eeb6e77ad-kube-api-access-zzp24" (OuterVolumeSpecName: "kube-api-access-zzp24") pod "e6eeb242-a7dd-46ed-bdc4-490eeb6e77ad" (UID: "e6eeb242-a7dd-46ed-bdc4-490eeb6e77ad"). InnerVolumeSpecName "kube-api-access-zzp24". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:59:17 crc kubenswrapper[4795]: I1129 07:59:17.758347 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-wc858" Nov 29 07:59:17 crc kubenswrapper[4795]: I1129 07:59:17.802606 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzp24\" (UniqueName: \"kubernetes.io/projected/e6eeb242-a7dd-46ed-bdc4-490eeb6e77ad-kube-api-access-zzp24\") on node \"crc\" DevicePath \"\"" Nov 29 07:59:18 crc kubenswrapper[4795]: I1129 07:59:18.158323 4795 generic.go:334] "Generic (PLEG): container finished" podID="e6eeb242-a7dd-46ed-bdc4-490eeb6e77ad" containerID="693bf8390419c7c70c2663b669c301fa462cf400d4df2183abe2bf4b835a7d9b" exitCode=0 Nov 29 07:59:18 crc kubenswrapper[4795]: I1129 07:59:18.158420 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-hbqdt" Nov 29 07:59:18 crc kubenswrapper[4795]: I1129 07:59:18.158420 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hbqdt" event={"ID":"e6eeb242-a7dd-46ed-bdc4-490eeb6e77ad","Type":"ContainerDied","Data":"693bf8390419c7c70c2663b669c301fa462cf400d4df2183abe2bf4b835a7d9b"} Nov 29 07:59:18 crc kubenswrapper[4795]: I1129 07:59:18.158507 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hbqdt" event={"ID":"e6eeb242-a7dd-46ed-bdc4-490eeb6e77ad","Type":"ContainerDied","Data":"c2f902f9d1354ebe2d514e4c6ee459cc834557ef226975811e66b347963ce021"} Nov 29 07:59:18 crc kubenswrapper[4795]: I1129 07:59:18.158544 4795 scope.go:117] "RemoveContainer" containerID="693bf8390419c7c70c2663b669c301fa462cf400d4df2183abe2bf4b835a7d9b" Nov 29 07:59:18 crc kubenswrapper[4795]: I1129 07:59:18.181992 4795 scope.go:117] "RemoveContainer" containerID="693bf8390419c7c70c2663b669c301fa462cf400d4df2183abe2bf4b835a7d9b" Nov 29 07:59:18 crc kubenswrapper[4795]: E1129 07:59:18.182451 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"693bf8390419c7c70c2663b669c301fa462cf400d4df2183abe2bf4b835a7d9b\": container with ID starting with 693bf8390419c7c70c2663b669c301fa462cf400d4df2183abe2bf4b835a7d9b not found: ID does not exist" containerID="693bf8390419c7c70c2663b669c301fa462cf400d4df2183abe2bf4b835a7d9b" Nov 29 07:59:18 crc kubenswrapper[4795]: I1129 07:59:18.182490 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"693bf8390419c7c70c2663b669c301fa462cf400d4df2183abe2bf4b835a7d9b"} err="failed to get container status \"693bf8390419c7c70c2663b669c301fa462cf400d4df2183abe2bf4b835a7d9b\": rpc error: code = NotFound desc = could not find container 
\"693bf8390419c7c70c2663b669c301fa462cf400d4df2183abe2bf4b835a7d9b\": container with ID starting with 693bf8390419c7c70c2663b669c301fa462cf400d4df2183abe2bf4b835a7d9b not found: ID does not exist" Nov 29 07:59:18 crc kubenswrapper[4795]: I1129 07:59:18.197345 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-hbqdt"] Nov 29 07:59:18 crc kubenswrapper[4795]: I1129 07:59:18.212424 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-hbqdt"] Nov 29 07:59:18 crc kubenswrapper[4795]: I1129 07:59:18.284742 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6eeb242-a7dd-46ed-bdc4-490eeb6e77ad" path="/var/lib/kubelet/pods/e6eeb242-a7dd-46ed-bdc4-490eeb6e77ad/volumes" Nov 29 07:59:26 crc kubenswrapper[4795]: I1129 07:59:26.223150 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-fc66n" Nov 29 07:59:26 crc kubenswrapper[4795]: I1129 07:59:26.223888 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-fc66n" Nov 29 07:59:26 crc kubenswrapper[4795]: I1129 07:59:26.273989 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-fc66n" Nov 29 07:59:27 crc kubenswrapper[4795]: I1129 07:59:27.269410 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-fc66n" Nov 29 07:59:27 crc kubenswrapper[4795]: I1129 07:59:27.949859 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-tkql9" Nov 29 07:59:40 crc kubenswrapper[4795]: I1129 07:59:40.503422 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m"] Nov 29 07:59:40 crc kubenswrapper[4795]: E1129 
07:59:40.504339 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6eeb242-a7dd-46ed-bdc4-490eeb6e77ad" containerName="registry-server" Nov 29 07:59:40 crc kubenswrapper[4795]: I1129 07:59:40.504357 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6eeb242-a7dd-46ed-bdc4-490eeb6e77ad" containerName="registry-server" Nov 29 07:59:40 crc kubenswrapper[4795]: I1129 07:59:40.504612 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6eeb242-a7dd-46ed-bdc4-490eeb6e77ad" containerName="registry-server" Nov 29 07:59:40 crc kubenswrapper[4795]: I1129 07:59:40.505704 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m" Nov 29 07:59:40 crc kubenswrapper[4795]: I1129 07:59:40.513225 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-h7c9h" Nov 29 07:59:40 crc kubenswrapper[4795]: I1129 07:59:40.528858 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m"] Nov 29 07:59:40 crc kubenswrapper[4795]: I1129 07:59:40.653101 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7xsd\" (UniqueName: \"kubernetes.io/projected/2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef-kube-api-access-h7xsd\") pod \"6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m\" (UID: \"2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef\") " pod="openstack-operators/6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m" Nov 29 07:59:40 crc kubenswrapper[4795]: I1129 07:59:40.653159 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef-util\") pod \"6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m\" 
(UID: \"2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef\") " pod="openstack-operators/6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m" Nov 29 07:59:40 crc kubenswrapper[4795]: I1129 07:59:40.653201 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef-bundle\") pod \"6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m\" (UID: \"2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef\") " pod="openstack-operators/6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m" Nov 29 07:59:40 crc kubenswrapper[4795]: I1129 07:59:40.755368 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7xsd\" (UniqueName: \"kubernetes.io/projected/2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef-kube-api-access-h7xsd\") pod \"6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m\" (UID: \"2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef\") " pod="openstack-operators/6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m" Nov 29 07:59:40 crc kubenswrapper[4795]: I1129 07:59:40.755453 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef-util\") pod \"6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m\" (UID: \"2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef\") " pod="openstack-operators/6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m" Nov 29 07:59:40 crc kubenswrapper[4795]: I1129 07:59:40.755499 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef-bundle\") pod \"6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m\" (UID: \"2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef\") " pod="openstack-operators/6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m" 
Nov 29 07:59:40 crc kubenswrapper[4795]: I1129 07:59:40.756118 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef-bundle\") pod \"6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m\" (UID: \"2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef\") " pod="openstack-operators/6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m" Nov 29 07:59:40 crc kubenswrapper[4795]: I1129 07:59:40.756154 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef-util\") pod \"6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m\" (UID: \"2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef\") " pod="openstack-operators/6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m" Nov 29 07:59:40 crc kubenswrapper[4795]: I1129 07:59:40.792572 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7xsd\" (UniqueName: \"kubernetes.io/projected/2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef-kube-api-access-h7xsd\") pod \"6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m\" (UID: \"2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef\") " pod="openstack-operators/6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m" Nov 29 07:59:40 crc kubenswrapper[4795]: I1129 07:59:40.825145 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m" Nov 29 07:59:41 crc kubenswrapper[4795]: I1129 07:59:41.253780 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m"] Nov 29 07:59:41 crc kubenswrapper[4795]: I1129 07:59:41.359081 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m" event={"ID":"2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef","Type":"ContainerStarted","Data":"ca6a24d3b208c08fd248c068a0bd7fdb8c59a9d8165eedf4d0d709342c3a5d1b"} Nov 29 07:59:41 crc kubenswrapper[4795]: I1129 07:59:41.941149 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:59:41 crc kubenswrapper[4795]: I1129 07:59:41.941514 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:59:42 crc kubenswrapper[4795]: I1129 07:59:42.368965 4795 generic.go:334] "Generic (PLEG): container finished" podID="2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef" containerID="c4d9f66a452a1b3f4eea39635c5950a2b89eacb98e27c35450245eb8abca4c22" exitCode=0 Nov 29 07:59:42 crc kubenswrapper[4795]: I1129 07:59:42.369017 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m" 
event={"ID":"2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef","Type":"ContainerDied","Data":"c4d9f66a452a1b3f4eea39635c5950a2b89eacb98e27c35450245eb8abca4c22"} Nov 29 07:59:43 crc kubenswrapper[4795]: I1129 07:59:43.381252 4795 generic.go:334] "Generic (PLEG): container finished" podID="2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef" containerID="3c851d456718cff44d4b92f80f2567dbd7a669fd10eceb1e57196dab0cb5b447" exitCode=0 Nov 29 07:59:43 crc kubenswrapper[4795]: I1129 07:59:43.381395 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m" event={"ID":"2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef","Type":"ContainerDied","Data":"3c851d456718cff44d4b92f80f2567dbd7a669fd10eceb1e57196dab0cb5b447"} Nov 29 07:59:44 crc kubenswrapper[4795]: I1129 07:59:44.390049 4795 generic.go:334] "Generic (PLEG): container finished" podID="2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef" containerID="05aa2a979fe5e6044dbf018693c250722de4d7e7183c337acf5e6c8087b3856a" exitCode=0 Nov 29 07:59:44 crc kubenswrapper[4795]: I1129 07:59:44.390140 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m" event={"ID":"2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef","Type":"ContainerDied","Data":"05aa2a979fe5e6044dbf018693c250722de4d7e7183c337acf5e6c8087b3856a"} Nov 29 07:59:45 crc kubenswrapper[4795]: I1129 07:59:45.746464 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m" Nov 29 07:59:45 crc kubenswrapper[4795]: I1129 07:59:45.761181 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef-util\") pod \"2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef\" (UID: \"2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef\") " Nov 29 07:59:45 crc kubenswrapper[4795]: I1129 07:59:45.761265 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef-bundle\") pod \"2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef\" (UID: \"2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef\") " Nov 29 07:59:45 crc kubenswrapper[4795]: I1129 07:59:45.761297 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7xsd\" (UniqueName: \"kubernetes.io/projected/2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef-kube-api-access-h7xsd\") pod \"2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef\" (UID: \"2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef\") " Nov 29 07:59:45 crc kubenswrapper[4795]: I1129 07:59:45.762068 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef-bundle" (OuterVolumeSpecName: "bundle") pod "2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef" (UID: "2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:59:45 crc kubenswrapper[4795]: I1129 07:59:45.775204 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef-kube-api-access-h7xsd" (OuterVolumeSpecName: "kube-api-access-h7xsd") pod "2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef" (UID: "2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef"). InnerVolumeSpecName "kube-api-access-h7xsd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:59:45 crc kubenswrapper[4795]: I1129 07:59:45.781524 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef-util" (OuterVolumeSpecName: "util") pod "2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef" (UID: "2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:59:45 crc kubenswrapper[4795]: I1129 07:59:45.862492 4795 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef-util\") on node \"crc\" DevicePath \"\"" Nov 29 07:59:45 crc kubenswrapper[4795]: I1129 07:59:45.862522 4795 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:59:45 crc kubenswrapper[4795]: I1129 07:59:45.862531 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7xsd\" (UniqueName: \"kubernetes.io/projected/2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef-kube-api-access-h7xsd\") on node \"crc\" DevicePath \"\"" Nov 29 07:59:46 crc kubenswrapper[4795]: I1129 07:59:46.409468 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m" event={"ID":"2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef","Type":"ContainerDied","Data":"ca6a24d3b208c08fd248c068a0bd7fdb8c59a9d8165eedf4d0d709342c3a5d1b"} Nov 29 07:59:46 crc kubenswrapper[4795]: I1129 07:59:46.409914 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca6a24d3b208c08fd248c068a0bd7fdb8c59a9d8165eedf4d0d709342c3a5d1b" Nov 29 07:59:46 crc kubenswrapper[4795]: I1129 07:59:46.409847 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m" Nov 29 07:59:50 crc kubenswrapper[4795]: I1129 07:59:50.574858 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5c79c4cd8-99qw8"] Nov 29 07:59:50 crc kubenswrapper[4795]: E1129 07:59:50.575639 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef" containerName="extract" Nov 29 07:59:50 crc kubenswrapper[4795]: I1129 07:59:50.575652 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef" containerName="extract" Nov 29 07:59:50 crc kubenswrapper[4795]: E1129 07:59:50.575661 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef" containerName="util" Nov 29 07:59:50 crc kubenswrapper[4795]: I1129 07:59:50.575667 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef" containerName="util" Nov 29 07:59:50 crc kubenswrapper[4795]: E1129 07:59:50.575696 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef" containerName="pull" Nov 29 07:59:50 crc kubenswrapper[4795]: I1129 07:59:50.575701 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef" containerName="pull" Nov 29 07:59:50 crc kubenswrapper[4795]: I1129 07:59:50.575856 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef" containerName="extract" Nov 29 07:59:50 crc kubenswrapper[4795]: I1129 07:59:50.576493 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-5c79c4cd8-99qw8" Nov 29 07:59:50 crc kubenswrapper[4795]: I1129 07:59:50.579478 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-xnksx" Nov 29 07:59:50 crc kubenswrapper[4795]: I1129 07:59:50.595215 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5c79c4cd8-99qw8"] Nov 29 07:59:50 crc kubenswrapper[4795]: I1129 07:59:50.744917 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrshq\" (UniqueName: \"kubernetes.io/projected/1b9bc471-e43a-403f-8bd9-83744b7746a7-kube-api-access-lrshq\") pod \"openstack-operator-controller-operator-5c79c4cd8-99qw8\" (UID: \"1b9bc471-e43a-403f-8bd9-83744b7746a7\") " pod="openstack-operators/openstack-operator-controller-operator-5c79c4cd8-99qw8" Nov 29 07:59:50 crc kubenswrapper[4795]: I1129 07:59:50.846340 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrshq\" (UniqueName: \"kubernetes.io/projected/1b9bc471-e43a-403f-8bd9-83744b7746a7-kube-api-access-lrshq\") pod \"openstack-operator-controller-operator-5c79c4cd8-99qw8\" (UID: \"1b9bc471-e43a-403f-8bd9-83744b7746a7\") " pod="openstack-operators/openstack-operator-controller-operator-5c79c4cd8-99qw8" Nov 29 07:59:50 crc kubenswrapper[4795]: I1129 07:59:50.865559 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrshq\" (UniqueName: \"kubernetes.io/projected/1b9bc471-e43a-403f-8bd9-83744b7746a7-kube-api-access-lrshq\") pod \"openstack-operator-controller-operator-5c79c4cd8-99qw8\" (UID: \"1b9bc471-e43a-403f-8bd9-83744b7746a7\") " pod="openstack-operators/openstack-operator-controller-operator-5c79c4cd8-99qw8" Nov 29 07:59:50 crc kubenswrapper[4795]: I1129 07:59:50.895422 4795 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-5c79c4cd8-99qw8" Nov 29 07:59:51 crc kubenswrapper[4795]: I1129 07:59:51.364328 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5c79c4cd8-99qw8"] Nov 29 07:59:51 crc kubenswrapper[4795]: I1129 07:59:51.444442 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-5c79c4cd8-99qw8" event={"ID":"1b9bc471-e43a-403f-8bd9-83744b7746a7","Type":"ContainerStarted","Data":"33265e228734e95f3035b900aeda8cb1681667476033a746e448bfb273258067"} Nov 29 07:59:56 crc kubenswrapper[4795]: I1129 07:59:56.509791 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-5c79c4cd8-99qw8" event={"ID":"1b9bc471-e43a-403f-8bd9-83744b7746a7","Type":"ContainerStarted","Data":"f5738f724f3260e3690a116de8e843f53140b6a2ffea457671fd6ed969dac27e"} Nov 29 07:59:56 crc kubenswrapper[4795]: I1129 07:59:56.510076 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-5c79c4cd8-99qw8" Nov 29 08:00:00 crc kubenswrapper[4795]: I1129 08:00:00.141179 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-5c79c4cd8-99qw8" podStartSLOduration=5.656573304 podStartE2EDuration="10.141153737s" podCreationTimestamp="2025-11-29 07:59:50 +0000 UTC" firstStartedPulling="2025-11-29 07:59:51.371880944 +0000 UTC m=+1237.347456734" lastFinishedPulling="2025-11-29 07:59:55.856461387 +0000 UTC m=+1241.832037167" observedRunningTime="2025-11-29 07:59:56.53916116 +0000 UTC m=+1242.514736950" watchObservedRunningTime="2025-11-29 08:00:00.141153737 +0000 UTC m=+1246.116729527" Nov 29 08:00:00 crc kubenswrapper[4795]: I1129 08:00:00.146321 4795 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406720-55czb"] Nov 29 08:00:00 crc kubenswrapper[4795]: I1129 08:00:00.147738 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-55czb" Nov 29 08:00:00 crc kubenswrapper[4795]: I1129 08:00:00.149744 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 29 08:00:00 crc kubenswrapper[4795]: I1129 08:00:00.153293 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 29 08:00:00 crc kubenswrapper[4795]: I1129 08:00:00.156456 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406720-55czb"] Nov 29 08:00:00 crc kubenswrapper[4795]: I1129 08:00:00.315457 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87f01948-4fcf-443c-86a5-52cd4ea2497d-config-volume\") pod \"collect-profiles-29406720-55czb\" (UID: \"87f01948-4fcf-443c-86a5-52cd4ea2497d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-55czb" Nov 29 08:00:00 crc kubenswrapper[4795]: I1129 08:00:00.315766 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9997m\" (UniqueName: \"kubernetes.io/projected/87f01948-4fcf-443c-86a5-52cd4ea2497d-kube-api-access-9997m\") pod \"collect-profiles-29406720-55czb\" (UID: \"87f01948-4fcf-443c-86a5-52cd4ea2497d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-55czb" Nov 29 08:00:00 crc kubenswrapper[4795]: I1129 08:00:00.315873 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/87f01948-4fcf-443c-86a5-52cd4ea2497d-secret-volume\") pod \"collect-profiles-29406720-55czb\" (UID: \"87f01948-4fcf-443c-86a5-52cd4ea2497d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-55czb" Nov 29 08:00:00 crc kubenswrapper[4795]: I1129 08:00:00.417793 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87f01948-4fcf-443c-86a5-52cd4ea2497d-config-volume\") pod \"collect-profiles-29406720-55czb\" (UID: \"87f01948-4fcf-443c-86a5-52cd4ea2497d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-55czb" Nov 29 08:00:00 crc kubenswrapper[4795]: I1129 08:00:00.417936 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9997m\" (UniqueName: \"kubernetes.io/projected/87f01948-4fcf-443c-86a5-52cd4ea2497d-kube-api-access-9997m\") pod \"collect-profiles-29406720-55czb\" (UID: \"87f01948-4fcf-443c-86a5-52cd4ea2497d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-55czb" Nov 29 08:00:00 crc kubenswrapper[4795]: I1129 08:00:00.417995 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87f01948-4fcf-443c-86a5-52cd4ea2497d-secret-volume\") pod \"collect-profiles-29406720-55czb\" (UID: \"87f01948-4fcf-443c-86a5-52cd4ea2497d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-55czb" Nov 29 08:00:00 crc kubenswrapper[4795]: I1129 08:00:00.419181 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87f01948-4fcf-443c-86a5-52cd4ea2497d-config-volume\") pod \"collect-profiles-29406720-55czb\" (UID: \"87f01948-4fcf-443c-86a5-52cd4ea2497d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-55czb" Nov 29 08:00:00 crc kubenswrapper[4795]: 
I1129 08:00:00.431777 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87f01948-4fcf-443c-86a5-52cd4ea2497d-secret-volume\") pod \"collect-profiles-29406720-55czb\" (UID: \"87f01948-4fcf-443c-86a5-52cd4ea2497d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-55czb" Nov 29 08:00:00 crc kubenswrapper[4795]: I1129 08:00:00.437639 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9997m\" (UniqueName: \"kubernetes.io/projected/87f01948-4fcf-443c-86a5-52cd4ea2497d-kube-api-access-9997m\") pod \"collect-profiles-29406720-55czb\" (UID: \"87f01948-4fcf-443c-86a5-52cd4ea2497d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-55czb" Nov 29 08:00:00 crc kubenswrapper[4795]: I1129 08:00:00.467456 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-55czb" Nov 29 08:00:00 crc kubenswrapper[4795]: I1129 08:00:00.898247 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406720-55czb"] Nov 29 08:00:01 crc kubenswrapper[4795]: I1129 08:00:01.550895 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-55czb" event={"ID":"87f01948-4fcf-443c-86a5-52cd4ea2497d","Type":"ContainerStarted","Data":"79cb6e09ec5b12f7f4c8531192768dd4d9d4e49ca016da21508d472afe272124"} Nov 29 08:00:01 crc kubenswrapper[4795]: I1129 08:00:01.551229 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-55czb" event={"ID":"87f01948-4fcf-443c-86a5-52cd4ea2497d","Type":"ContainerStarted","Data":"94641d264ada19317aa83beaf7234bfb16271284f4e6340a8d28c9c45dfa0f12"} Nov 29 08:00:02 crc kubenswrapper[4795]: I1129 08:00:02.583540 4795 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-55czb" podStartSLOduration=2.583521218 podStartE2EDuration="2.583521218s" podCreationTimestamp="2025-11-29 08:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:00:02.576227321 +0000 UTC m=+1248.551803131" watchObservedRunningTime="2025-11-29 08:00:02.583521218 +0000 UTC m=+1248.559097008" Nov 29 08:00:03 crc kubenswrapper[4795]: I1129 08:00:03.566376 4795 generic.go:334] "Generic (PLEG): container finished" podID="87f01948-4fcf-443c-86a5-52cd4ea2497d" containerID="79cb6e09ec5b12f7f4c8531192768dd4d9d4e49ca016da21508d472afe272124" exitCode=0 Nov 29 08:00:03 crc kubenswrapper[4795]: I1129 08:00:03.566430 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-55czb" event={"ID":"87f01948-4fcf-443c-86a5-52cd4ea2497d","Type":"ContainerDied","Data":"79cb6e09ec5b12f7f4c8531192768dd4d9d4e49ca016da21508d472afe272124"} Nov 29 08:00:05 crc kubenswrapper[4795]: I1129 08:00:05.144764 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-55czb" Nov 29 08:00:05 crc kubenswrapper[4795]: I1129 08:00:05.304075 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87f01948-4fcf-443c-86a5-52cd4ea2497d-secret-volume\") pod \"87f01948-4fcf-443c-86a5-52cd4ea2497d\" (UID: \"87f01948-4fcf-443c-86a5-52cd4ea2497d\") " Nov 29 08:00:05 crc kubenswrapper[4795]: I1129 08:00:05.304147 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9997m\" (UniqueName: \"kubernetes.io/projected/87f01948-4fcf-443c-86a5-52cd4ea2497d-kube-api-access-9997m\") pod \"87f01948-4fcf-443c-86a5-52cd4ea2497d\" (UID: \"87f01948-4fcf-443c-86a5-52cd4ea2497d\") " Nov 29 08:00:05 crc kubenswrapper[4795]: I1129 08:00:05.304209 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87f01948-4fcf-443c-86a5-52cd4ea2497d-config-volume\") pod \"87f01948-4fcf-443c-86a5-52cd4ea2497d\" (UID: \"87f01948-4fcf-443c-86a5-52cd4ea2497d\") " Nov 29 08:00:05 crc kubenswrapper[4795]: I1129 08:00:05.305457 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87f01948-4fcf-443c-86a5-52cd4ea2497d-config-volume" (OuterVolumeSpecName: "config-volume") pod "87f01948-4fcf-443c-86a5-52cd4ea2497d" (UID: "87f01948-4fcf-443c-86a5-52cd4ea2497d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:00:05 crc kubenswrapper[4795]: I1129 08:00:05.314771 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87f01948-4fcf-443c-86a5-52cd4ea2497d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "87f01948-4fcf-443c-86a5-52cd4ea2497d" (UID: "87f01948-4fcf-443c-86a5-52cd4ea2497d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:00:05 crc kubenswrapper[4795]: I1129 08:00:05.329614 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87f01948-4fcf-443c-86a5-52cd4ea2497d-kube-api-access-9997m" (OuterVolumeSpecName: "kube-api-access-9997m") pod "87f01948-4fcf-443c-86a5-52cd4ea2497d" (UID: "87f01948-4fcf-443c-86a5-52cd4ea2497d"). InnerVolumeSpecName "kube-api-access-9997m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:00:05 crc kubenswrapper[4795]: I1129 08:00:05.406522 4795 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87f01948-4fcf-443c-86a5-52cd4ea2497d-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 29 08:00:05 crc kubenswrapper[4795]: I1129 08:00:05.406576 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9997m\" (UniqueName: \"kubernetes.io/projected/87f01948-4fcf-443c-86a5-52cd4ea2497d-kube-api-access-9997m\") on node \"crc\" DevicePath \"\"" Nov 29 08:00:05 crc kubenswrapper[4795]: I1129 08:00:05.406629 4795 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87f01948-4fcf-443c-86a5-52cd4ea2497d-config-volume\") on node \"crc\" DevicePath \"\"" Nov 29 08:00:05 crc kubenswrapper[4795]: I1129 08:00:05.583358 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-55czb" event={"ID":"87f01948-4fcf-443c-86a5-52cd4ea2497d","Type":"ContainerDied","Data":"94641d264ada19317aa83beaf7234bfb16271284f4e6340a8d28c9c45dfa0f12"} Nov 29 08:00:05 crc kubenswrapper[4795]: I1129 08:00:05.583406 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-55czb" Nov 29 08:00:05 crc kubenswrapper[4795]: I1129 08:00:05.583411 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94641d264ada19317aa83beaf7234bfb16271284f4e6340a8d28c9c45dfa0f12" Nov 29 08:00:10 crc kubenswrapper[4795]: I1129 08:00:10.898448 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-5c79c4cd8-99qw8" Nov 29 08:00:11 crc kubenswrapper[4795]: I1129 08:00:11.941704 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:00:11 crc kubenswrapper[4795]: I1129 08:00:11.942409 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:00:34 crc kubenswrapper[4795]: I1129 08:00:34.748874 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-qtbtd"] Nov 29 08:00:34 crc kubenswrapper[4795]: E1129 08:00:34.749972 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f01948-4fcf-443c-86a5-52cd4ea2497d" containerName="collect-profiles" Nov 29 08:00:34 crc kubenswrapper[4795]: I1129 08:00:34.749992 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f01948-4fcf-443c-86a5-52cd4ea2497d" containerName="collect-profiles" Nov 29 08:00:34 crc kubenswrapper[4795]: I1129 08:00:34.750175 4795 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="87f01948-4fcf-443c-86a5-52cd4ea2497d" containerName="collect-profiles" Nov 29 08:00:34 crc kubenswrapper[4795]: I1129 08:00:34.751760 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-qtbtd" Nov 29 08:00:34 crc kubenswrapper[4795]: I1129 08:00:34.754225 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-dllcm" Nov 29 08:00:34 crc kubenswrapper[4795]: I1129 08:00:34.756931 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-klbwf"] Nov 29 08:00:34 crc kubenswrapper[4795]: I1129 08:00:34.758568 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-klbwf" Nov 29 08:00:34 crc kubenswrapper[4795]: I1129 08:00:34.759927 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-8bcgm" Nov 29 08:00:34 crc kubenswrapper[4795]: I1129 08:00:34.795101 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-klbwf"] Nov 29 08:00:34 crc kubenswrapper[4795]: I1129 08:00:34.830571 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d4x9\" (UniqueName: \"kubernetes.io/projected/86217734-815f-461c-a32d-8d744192003e-kube-api-access-8d4x9\") pod \"cinder-operator-controller-manager-859b6ccc6-qtbtd\" (UID: \"86217734-815f-461c-a32d-8d744192003e\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-qtbtd" Nov 29 08:00:34 crc kubenswrapper[4795]: I1129 08:00:34.845850 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-8mk4s"] Nov 29 
08:00:34 crc kubenswrapper[4795]: I1129 08:00:34.847353 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-8mk4s" Nov 29 08:00:34 crc kubenswrapper[4795]: I1129 08:00:34.851295 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-gcb5w" Nov 29 08:00:34 crc kubenswrapper[4795]: I1129 08:00:34.901811 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-8mk4s"] Nov 29 08:00:34 crc kubenswrapper[4795]: I1129 08:00:34.929099 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-668d9c48b9-dr8f4"] Nov 29 08:00:34 crc kubenswrapper[4795]: I1129 08:00:34.930810 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-dr8f4" Nov 29 08:00:34 crc kubenswrapper[4795]: I1129 08:00:34.932358 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqq8p\" (UniqueName: \"kubernetes.io/projected/7bed5103-966d-43d3-92f1-73a2f8b6d551-kube-api-access-cqq8p\") pod \"barbican-operator-controller-manager-7d9dfd778-klbwf\" (UID: \"7bed5103-966d-43d3-92f1-73a2f8b6d551\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-klbwf" Nov 29 08:00:34 crc kubenswrapper[4795]: I1129 08:00:34.936365 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-cr9x4" Nov 29 08:00:34 crc kubenswrapper[4795]: I1129 08:00:34.936856 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d4x9\" (UniqueName: \"kubernetes.io/projected/86217734-815f-461c-a32d-8d744192003e-kube-api-access-8d4x9\") pod 
\"cinder-operator-controller-manager-859b6ccc6-qtbtd\" (UID: \"86217734-815f-461c-a32d-8d744192003e\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-qtbtd" Nov 29 08:00:34 crc kubenswrapper[4795]: I1129 08:00:34.992914 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-668d9c48b9-dr8f4"] Nov 29 08:00:34 crc kubenswrapper[4795]: I1129 08:00:34.997423 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d4x9\" (UniqueName: \"kubernetes.io/projected/86217734-815f-461c-a32d-8d744192003e-kube-api-access-8d4x9\") pod \"cinder-operator-controller-manager-859b6ccc6-qtbtd\" (UID: \"86217734-815f-461c-a32d-8d744192003e\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-qtbtd" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.037849 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnsjn\" (UniqueName: \"kubernetes.io/projected/7bfb1aea-ee92-4033-ba5f-c7ac79c60d0e-kube-api-access-jnsjn\") pod \"designate-operator-controller-manager-78b4bc895b-8mk4s\" (UID: \"7bfb1aea-ee92-4033-ba5f-c7ac79c60d0e\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-8mk4s" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.037922 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqq8p\" (UniqueName: \"kubernetes.io/projected/7bed5103-966d-43d3-92f1-73a2f8b6d551-kube-api-access-cqq8p\") pod \"barbican-operator-controller-manager-7d9dfd778-klbwf\" (UID: \"7bed5103-966d-43d3-92f1-73a2f8b6d551\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-klbwf" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.037984 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgx5m\" (UniqueName: 
\"kubernetes.io/projected/3ff17662-f7b1-4870-9ef2-18a81fdb5d73-kube-api-access-hgx5m\") pod \"glance-operator-controller-manager-668d9c48b9-dr8f4\" (UID: \"3ff17662-f7b1-4870-9ef2-18a81fdb5d73\") " pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-dr8f4" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.050612 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-qtbtd"] Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.079861 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-xjns5"] Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.081756 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-xjns5" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.081820 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-qtbtd" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.087984 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-p272w" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.092107 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqq8p\" (UniqueName: \"kubernetes.io/projected/7bed5103-966d-43d3-92f1-73a2f8b6d551-kube-api-access-cqq8p\") pod \"barbican-operator-controller-manager-7d9dfd778-klbwf\" (UID: \"7bed5103-966d-43d3-92f1-73a2f8b6d551\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-klbwf" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.102131 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-xjns5"] Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 
08:00:35.105134 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-klbwf" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.129009 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-spfsh"] Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.140937 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-spfsh" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.142938 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgx5m\" (UniqueName: \"kubernetes.io/projected/3ff17662-f7b1-4870-9ef2-18a81fdb5d73-kube-api-access-hgx5m\") pod \"glance-operator-controller-manager-668d9c48b9-dr8f4\" (UID: \"3ff17662-f7b1-4870-9ef2-18a81fdb5d73\") " pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-dr8f4" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.143247 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnsjn\" (UniqueName: \"kubernetes.io/projected/7bfb1aea-ee92-4033-ba5f-c7ac79c60d0e-kube-api-access-jnsjn\") pod \"designate-operator-controller-manager-78b4bc895b-8mk4s\" (UID: \"7bfb1aea-ee92-4033-ba5f-c7ac79c60d0e\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-8mk4s" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.154675 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-628rb" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.173167 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-spfsh"] Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.194390 4795 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-rd6w8"] Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.195830 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-rd6w8" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.207835 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-5q5dd"] Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.214393 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-5q5dd" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.233579 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-gbzpz" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.233797 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.234023 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-tgwvb" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.234187 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-546d4bdf48-cn6z7"] Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.235611 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-cn6z7" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.236992 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-rd6w8"] Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.237085 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnsjn\" (UniqueName: \"kubernetes.io/projected/7bfb1aea-ee92-4033-ba5f-c7ac79c60d0e-kube-api-access-jnsjn\") pod \"designate-operator-controller-manager-78b4bc895b-8mk4s\" (UID: \"7bfb1aea-ee92-4033-ba5f-c7ac79c60d0e\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-8mk4s" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.244639 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmknm\" (UniqueName: \"kubernetes.io/projected/d4e1473d-8426-452b-8030-764680cc5a20-kube-api-access-wmknm\") pod \"horizon-operator-controller-manager-68c6d99b8f-spfsh\" (UID: \"d4e1473d-8426-452b-8030-764680cc5a20\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-spfsh" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.244702 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcxhk\" (UniqueName: \"kubernetes.io/projected/36a279fc-25f1-407e-a1c6-6b8689d68cd2-kube-api-access-vcxhk\") pod \"heat-operator-controller-manager-5f64f6f8bb-xjns5\" (UID: \"36a279fc-25f1-407e-a1c6-6b8689d68cd2\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-xjns5" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.245492 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgx5m\" (UniqueName: \"kubernetes.io/projected/3ff17662-f7b1-4870-9ef2-18a81fdb5d73-kube-api-access-hgx5m\") pod 
\"glance-operator-controller-manager-668d9c48b9-dr8f4\" (UID: \"3ff17662-f7b1-4870-9ef2-18a81fdb5d73\") " pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-dr8f4" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.251433 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-6546668bfd-mvrzj"] Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.253017 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-mvrzj" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.258279 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-5q5dd"] Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.264637 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-546d4bdf48-cn6z7"] Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.279942 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-cmvt9"] Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.279999 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-cxnp8" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.280205 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-4xd9l" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.281886 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-cmvt9" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.284393 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-pl8p2" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.290622 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6546668bfd-mvrzj"] Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.302691 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-njfdg"] Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.313922 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-njfdg" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.330646 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-966jn"] Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.332080 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-966jn" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.333032 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-nw7pz" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.338682 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-dr8f4" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.341981 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-bbgp8" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.342261 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-cmvt9"] Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.350443 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-njfdg"] Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.351293 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2qmt\" (UniqueName: \"kubernetes.io/projected/1d6dd43f-eee0-4257-adbb-a53218a86eb9-kube-api-access-w2qmt\") pod \"keystone-operator-controller-manager-546d4bdf48-cn6z7\" (UID: \"1d6dd43f-eee0-4257-adbb-a53218a86eb9\") " pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-cn6z7" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.351387 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svc5p\" (UniqueName: \"kubernetes.io/projected/cc9825dd-340b-4dda-ab8a-91d95ee67678-kube-api-access-svc5p\") pod \"infra-operator-controller-manager-57548d458d-rd6w8\" (UID: \"cc9825dd-340b-4dda-ab8a-91d95ee67678\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-rd6w8" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.351416 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cc9825dd-340b-4dda-ab8a-91d95ee67678-cert\") pod \"infra-operator-controller-manager-57548d458d-rd6w8\" (UID: 
\"cc9825dd-340b-4dda-ab8a-91d95ee67678\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-rd6w8" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.351456 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmknm\" (UniqueName: \"kubernetes.io/projected/d4e1473d-8426-452b-8030-764680cc5a20-kube-api-access-wmknm\") pod \"horizon-operator-controller-manager-68c6d99b8f-spfsh\" (UID: \"d4e1473d-8426-452b-8030-764680cc5a20\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-spfsh" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.351503 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcxhk\" (UniqueName: \"kubernetes.io/projected/36a279fc-25f1-407e-a1c6-6b8689d68cd2-kube-api-access-vcxhk\") pod \"heat-operator-controller-manager-5f64f6f8bb-xjns5\" (UID: \"36a279fc-25f1-407e-a1c6-6b8689d68cd2\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-xjns5" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.351522 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztgt8\" (UniqueName: \"kubernetes.io/projected/b53e6db5-a3fe-4f86-9b6a-49eb7d4629fd-kube-api-access-ztgt8\") pod \"ironic-operator-controller-manager-6c548fd776-5q5dd\" (UID: \"b53e6db5-a3fe-4f86-9b6a-49eb7d4629fd\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-5q5dd" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.359662 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-966jn"] Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.382334 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-d5d4r"] Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.384634 4795 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-d5d4r" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.389894 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-9rq7k" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.408923 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl"] Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.410264 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.415486 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.435629 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmknm\" (UniqueName: \"kubernetes.io/projected/d4e1473d-8426-452b-8030-764680cc5a20-kube-api-access-wmknm\") pod \"horizon-operator-controller-manager-68c6d99b8f-spfsh\" (UID: \"d4e1473d-8426-452b-8030-764680cc5a20\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-spfsh" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.446024 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-npd28" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.471950 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-d5d4r"] Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.497369 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcxhk\" 
(UniqueName: \"kubernetes.io/projected/36a279fc-25f1-407e-a1c6-6b8689d68cd2-kube-api-access-vcxhk\") pod \"heat-operator-controller-manager-5f64f6f8bb-xjns5\" (UID: \"36a279fc-25f1-407e-a1c6-6b8689d68cd2\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-xjns5" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.511706 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-8mk4s" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.512741 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cc9825dd-340b-4dda-ab8a-91d95ee67678-cert\") pod \"infra-operator-controller-manager-57548d458d-rd6w8\" (UID: \"cc9825dd-340b-4dda-ab8a-91d95ee67678\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-rd6w8" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.512794 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc92j\" (UniqueName: \"kubernetes.io/projected/36512615-d21b-4484-af03-ffa1d325883b-kube-api-access-jc92j\") pod \"manila-operator-controller-manager-6546668bfd-mvrzj\" (UID: \"36512615-d21b-4484-af03-ffa1d325883b\") " pod="openstack-operators/manila-operator-controller-manager-6546668bfd-mvrzj" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.512868 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sqj2\" (UniqueName: \"kubernetes.io/projected/8de1af69-5c67-4669-83d5-02de0ecd32d3-kube-api-access-7sqj2\") pod \"nova-operator-controller-manager-697bc559fc-966jn\" (UID: \"8de1af69-5c67-4669-83d5-02de0ecd32d3\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-966jn" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.513094 4795 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ztgt8\" (UniqueName: \"kubernetes.io/projected/b53e6db5-a3fe-4f86-9b6a-49eb7d4629fd-kube-api-access-ztgt8\") pod \"ironic-operator-controller-manager-6c548fd776-5q5dd\" (UID: \"b53e6db5-a3fe-4f86-9b6a-49eb7d4629fd\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-5q5dd" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.513140 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slbtr\" (UniqueName: \"kubernetes.io/projected/bfb2e88b-d2db-4afa-8511-e1a896eb9039-kube-api-access-slbtr\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-njfdg\" (UID: \"bfb2e88b-d2db-4afa-8511-e1a896eb9039\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-njfdg" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.513230 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2qmt\" (UniqueName: \"kubernetes.io/projected/1d6dd43f-eee0-4257-adbb-a53218a86eb9-kube-api-access-w2qmt\") pod \"keystone-operator-controller-manager-546d4bdf48-cn6z7\" (UID: \"1d6dd43f-eee0-4257-adbb-a53218a86eb9\") " pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-cn6z7" Nov 29 08:00:35 crc kubenswrapper[4795]: E1129 08:00:35.515098 4795 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 29 08:00:35 crc kubenswrapper[4795]: E1129 08:00:35.515148 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc9825dd-340b-4dda-ab8a-91d95ee67678-cert podName:cc9825dd-340b-4dda-ab8a-91d95ee67678 nodeName:}" failed. No retries permitted until 2025-11-29 08:00:36.015130784 +0000 UTC m=+1281.990706574 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cc9825dd-340b-4dda-ab8a-91d95ee67678-cert") pod "infra-operator-controller-manager-57548d458d-rd6w8" (UID: "cc9825dd-340b-4dda-ab8a-91d95ee67678") : secret "infra-operator-webhook-server-cert" not found Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.523473 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ffpl\" (UniqueName: \"kubernetes.io/projected/94d164fa-c521-4617-8338-1eba3ee1c31d-kube-api-access-8ffpl\") pod \"mariadb-operator-controller-manager-56bbcc9d85-cmvt9\" (UID: \"94d164fa-c521-4617-8338-1eba3ee1c31d\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-cmvt9" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.523654 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b8fd\" (UniqueName: \"kubernetes.io/projected/f2367076-6d52-4047-908c-c1e32c4ca2c4-kube-api-access-5b8fd\") pod \"octavia-operator-controller-manager-998648c74-d5d4r\" (UID: \"f2367076-6d52-4047-908c-c1e32c4ca2c4\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-d5d4r" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.523802 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svc5p\" (UniqueName: \"kubernetes.io/projected/cc9825dd-340b-4dda-ab8a-91d95ee67678-kube-api-access-svc5p\") pod \"infra-operator-controller-manager-57548d458d-rd6w8\" (UID: \"cc9825dd-340b-4dda-ab8a-91d95ee67678\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-rd6w8" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.530347 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-vdmph"] Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.608027 4795 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ztgt8\" (UniqueName: \"kubernetes.io/projected/b53e6db5-a3fe-4f86-9b6a-49eb7d4629fd-kube-api-access-ztgt8\") pod \"ironic-operator-controller-manager-6c548fd776-5q5dd\" (UID: \"b53e6db5-a3fe-4f86-9b6a-49eb7d4629fd\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-5q5dd" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.627649 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-xjns5" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.632444 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-vdmph" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.634206 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc92j\" (UniqueName: \"kubernetes.io/projected/36512615-d21b-4484-af03-ffa1d325883b-kube-api-access-jc92j\") pod \"manila-operator-controller-manager-6546668bfd-mvrzj\" (UID: \"36512615-d21b-4484-af03-ffa1d325883b\") " pod="openstack-operators/manila-operator-controller-manager-6546668bfd-mvrzj" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.634266 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sqj2\" (UniqueName: \"kubernetes.io/projected/8de1af69-5c67-4669-83d5-02de0ecd32d3-kube-api-access-7sqj2\") pod \"nova-operator-controller-manager-697bc559fc-966jn\" (UID: \"8de1af69-5c67-4669-83d5-02de0ecd32d3\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-966jn" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.634322 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/543b785e-bdb9-4582-b9dd-8a987b5129f6-cert\") pod 
\"openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl\" (UID: \"543b785e-bdb9-4582-b9dd-8a987b5129f6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.634344 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slbtr\" (UniqueName: \"kubernetes.io/projected/bfb2e88b-d2db-4afa-8511-e1a896eb9039-kube-api-access-slbtr\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-njfdg\" (UID: \"bfb2e88b-d2db-4afa-8511-e1a896eb9039\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-njfdg" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.634382 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4gr4\" (UniqueName: \"kubernetes.io/projected/543b785e-bdb9-4582-b9dd-8a987b5129f6-kube-api-access-h4gr4\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl\" (UID: \"543b785e-bdb9-4582-b9dd-8a987b5129f6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.634412 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ffpl\" (UniqueName: \"kubernetes.io/projected/94d164fa-c521-4617-8338-1eba3ee1c31d-kube-api-access-8ffpl\") pod \"mariadb-operator-controller-manager-56bbcc9d85-cmvt9\" (UID: \"94d164fa-c521-4617-8338-1eba3ee1c31d\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-cmvt9" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.634441 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b8fd\" (UniqueName: \"kubernetes.io/projected/f2367076-6d52-4047-908c-c1e32c4ca2c4-kube-api-access-5b8fd\") pod \"octavia-operator-controller-manager-998648c74-d5d4r\" (UID: 
\"f2367076-6d52-4047-908c-c1e32c4ca2c4\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-d5d4r" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.639225 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-kwfzd" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.647857 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-6fkwt"] Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.652425 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-6fkwt" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.658777 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2qmt\" (UniqueName: \"kubernetes.io/projected/1d6dd43f-eee0-4257-adbb-a53218a86eb9-kube-api-access-w2qmt\") pod \"keystone-operator-controller-manager-546d4bdf48-cn6z7\" (UID: \"1d6dd43f-eee0-4257-adbb-a53218a86eb9\") " pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-cn6z7" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.663451 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-g2gvh" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.837826 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-5q5dd" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.847063 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-6fkwt"] Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.852668 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-spfsh" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.901980 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl"] Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.906917 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slbtr\" (UniqueName: \"kubernetes.io/projected/bfb2e88b-d2db-4afa-8511-e1a896eb9039-kube-api-access-slbtr\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-njfdg\" (UID: \"bfb2e88b-d2db-4afa-8511-e1a896eb9039\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-njfdg" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.913270 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4gr4\" (UniqueName: \"kubernetes.io/projected/543b785e-bdb9-4582-b9dd-8a987b5129f6-kube-api-access-h4gr4\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl\" (UID: \"543b785e-bdb9-4582-b9dd-8a987b5129f6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.913431 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbl7k\" (UniqueName: \"kubernetes.io/projected/e56bb4ff-9936-4876-8616-0958e9892fa3-kube-api-access-lbl7k\") pod \"swift-operator-controller-manager-5f8c65bbfc-6fkwt\" (UID: \"e56bb4ff-9936-4876-8616-0958e9892fa3\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-6fkwt" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.913511 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzp9d\" (UniqueName: 
\"kubernetes.io/projected/a197813b-f5c3-49c1-81f6-b6b2e08e0617-kube-api-access-gzp9d\") pod \"ovn-operator-controller-manager-b6456fdb6-vdmph\" (UID: \"a197813b-f5c3-49c1-81f6-b6b2e08e0617\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-vdmph" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.913703 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/543b785e-bdb9-4582-b9dd-8a987b5129f6-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl\" (UID: \"543b785e-bdb9-4582-b9dd-8a987b5129f6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl" Nov 29 08:00:35 crc kubenswrapper[4795]: E1129 08:00:35.913872 4795 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 08:00:35 crc kubenswrapper[4795]: E1129 08:00:35.913920 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/543b785e-bdb9-4582-b9dd-8a987b5129f6-cert podName:543b785e-bdb9-4582-b9dd-8a987b5129f6 nodeName:}" failed. No retries permitted until 2025-11-29 08:00:36.413904584 +0000 UTC m=+1282.389480374 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/543b785e-bdb9-4582-b9dd-8a987b5129f6-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl" (UID: "543b785e-bdb9-4582-b9dd-8a987b5129f6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.919857 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-vdmph"] Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.919881 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-9g674"] Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.921079 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-9g674" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.924045 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sqj2\" (UniqueName: \"kubernetes.io/projected/8de1af69-5c67-4669-83d5-02de0ecd32d3-kube-api-access-7sqj2\") pod \"nova-operator-controller-manager-697bc559fc-966jn\" (UID: \"8de1af69-5c67-4669-83d5-02de0ecd32d3\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-966jn" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.925310 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-cn6z7" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.931844 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-svxlk" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.936175 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc92j\" (UniqueName: \"kubernetes.io/projected/36512615-d21b-4484-af03-ffa1d325883b-kube-api-access-jc92j\") pod \"manila-operator-controller-manager-6546668bfd-mvrzj\" (UID: \"36512615-d21b-4484-af03-ffa1d325883b\") " pod="openstack-operators/manila-operator-controller-manager-6546668bfd-mvrzj" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.936630 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svc5p\" (UniqueName: \"kubernetes.io/projected/cc9825dd-340b-4dda-ab8a-91d95ee67678-kube-api-access-svc5p\") pod \"infra-operator-controller-manager-57548d458d-rd6w8\" (UID: \"cc9825dd-340b-4dda-ab8a-91d95ee67678\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-rd6w8" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.945648 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-d486dbd66-bt6tr"] Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.947653 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-d486dbd66-bt6tr" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.949670 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-9g674"] Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.954181 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-zp6jm" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.961040 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ffpl\" (UniqueName: \"kubernetes.io/projected/94d164fa-c521-4617-8338-1eba3ee1c31d-kube-api-access-8ffpl\") pod \"mariadb-operator-controller-manager-56bbcc9d85-cmvt9\" (UID: \"94d164fa-c521-4617-8338-1eba3ee1c31d\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-cmvt9" Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.965548 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-d486dbd66-bt6tr"] Nov 29 08:00:35 crc kubenswrapper[4795]: I1129 08:00:35.980174 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b8fd\" (UniqueName: \"kubernetes.io/projected/f2367076-6d52-4047-908c-c1e32c4ca2c4-kube-api-access-5b8fd\") pod \"octavia-operator-controller-manager-998648c74-d5d4r\" (UID: \"f2367076-6d52-4047-908c-c1e32c4ca2c4\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-d5d4r" Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:35.998119 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-k6h4m"] Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.014958 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkvgk\" 
(UniqueName: \"kubernetes.io/projected/f5e4dece-2c02-41a1-ac8b-ec46eb2a3d45-kube-api-access-gkvgk\") pod \"placement-operator-controller-manager-78f8948974-9g674\" (UID: \"f5e4dece-2c02-41a1-ac8b-ec46eb2a3d45\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-9g674" Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.015242 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbl7k\" (UniqueName: \"kubernetes.io/projected/e56bb4ff-9936-4876-8616-0958e9892fa3-kube-api-access-lbl7k\") pod \"swift-operator-controller-manager-5f8c65bbfc-6fkwt\" (UID: \"e56bb4ff-9936-4876-8616-0958e9892fa3\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-6fkwt" Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.015329 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxdhn\" (UniqueName: \"kubernetes.io/projected/c75b943b-8281-4fbd-a94a-3d5db0475d5d-kube-api-access-fxdhn\") pod \"telemetry-operator-controller-manager-d486dbd66-bt6tr\" (UID: \"c75b943b-8281-4fbd-a94a-3d5db0475d5d\") " pod="openstack-operators/telemetry-operator-controller-manager-d486dbd66-bt6tr" Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.015415 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzp9d\" (UniqueName: \"kubernetes.io/projected/a197813b-f5c3-49c1-81f6-b6b2e08e0617-kube-api-access-gzp9d\") pod \"ovn-operator-controller-manager-b6456fdb6-vdmph\" (UID: \"a197813b-f5c3-49c1-81f6-b6b2e08e0617\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-vdmph" Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.023096 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-k6h4m" Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.037897 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-kg4fc" Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.039474 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-k6h4m"] Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.110967 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-h74tt"] Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.112491 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-h74tt" Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.115099 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-ktsg8" Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.116948 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkvgk\" (UniqueName: \"kubernetes.io/projected/f5e4dece-2c02-41a1-ac8b-ec46eb2a3d45-kube-api-access-gkvgk\") pod \"placement-operator-controller-manager-78f8948974-9g674\" (UID: \"f5e4dece-2c02-41a1-ac8b-ec46eb2a3d45\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-9g674" Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.117053 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxdhn\" (UniqueName: \"kubernetes.io/projected/c75b943b-8281-4fbd-a94a-3d5db0475d5d-kube-api-access-fxdhn\") pod \"telemetry-operator-controller-manager-d486dbd66-bt6tr\" (UID: \"c75b943b-8281-4fbd-a94a-3d5db0475d5d\") " 
pod="openstack-operators/telemetry-operator-controller-manager-d486dbd66-bt6tr" Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.117112 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cc9825dd-340b-4dda-ab8a-91d95ee67678-cert\") pod \"infra-operator-controller-manager-57548d458d-rd6w8\" (UID: \"cc9825dd-340b-4dda-ab8a-91d95ee67678\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-rd6w8" Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.117146 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9w4d\" (UniqueName: \"kubernetes.io/projected/4eea915e-348e-48a3-b5e1-767648dac19d-kube-api-access-n9w4d\") pod \"test-operator-controller-manager-5854674fcc-k6h4m\" (UID: \"4eea915e-348e-48a3-b5e1-767648dac19d\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-k6h4m" Nov 29 08:00:36 crc kubenswrapper[4795]: E1129 08:00:36.117718 4795 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 29 08:00:36 crc kubenswrapper[4795]: E1129 08:00:36.117766 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc9825dd-340b-4dda-ab8a-91d95ee67678-cert podName:cc9825dd-340b-4dda-ab8a-91d95ee67678 nodeName:}" failed. No retries permitted until 2025-11-29 08:00:37.117748211 +0000 UTC m=+1283.093324001 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cc9825dd-340b-4dda-ab8a-91d95ee67678-cert") pod "infra-operator-controller-manager-57548d458d-rd6w8" (UID: "cc9825dd-340b-4dda-ab8a-91d95ee67678") : secret "infra-operator-webhook-server-cert" not found Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.117976 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-h74tt"] Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.156429 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-njfdg" Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.156956 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-8688fc7b8-5sbpb"] Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.158014 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-8688fc7b8-5sbpb" Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.172796 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-8688fc7b8-5sbpb"] Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.387109 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9w4d\" (UniqueName: \"kubernetes.io/projected/4eea915e-348e-48a3-b5e1-767648dac19d-kube-api-access-n9w4d\") pod \"test-operator-controller-manager-5854674fcc-k6h4m\" (UID: \"4eea915e-348e-48a3-b5e1-767648dac19d\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-k6h4m" Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.388425 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.498882 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-cmvt9" Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.499820 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-mvrzj" Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.501874 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/868e2666-5606-4891-ba11-ac02f852c48d-webhook-certs\") pod \"openstack-operator-controller-manager-8688fc7b8-5sbpb\" (UID: \"868e2666-5606-4891-ba11-ac02f852c48d\") " pod="openstack-operators/openstack-operator-controller-manager-8688fc7b8-5sbpb" Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.502015 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lklv\" (UniqueName: \"kubernetes.io/projected/868e2666-5606-4891-ba11-ac02f852c48d-kube-api-access-2lklv\") pod \"openstack-operator-controller-manager-8688fc7b8-5sbpb\" (UID: \"868e2666-5606-4891-ba11-ac02f852c48d\") " pod="openstack-operators/openstack-operator-controller-manager-8688fc7b8-5sbpb" Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.502055 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/868e2666-5606-4891-ba11-ac02f852c48d-metrics-certs\") pod \"openstack-operator-controller-manager-8688fc7b8-5sbpb\" (UID: \"868e2666-5606-4891-ba11-ac02f852c48d\") " pod="openstack-operators/openstack-operator-controller-manager-8688fc7b8-5sbpb" Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.502105 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/543b785e-bdb9-4582-b9dd-8a987b5129f6-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl\" (UID: \"543b785e-bdb9-4582-b9dd-8a987b5129f6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl" Nov 29 08:00:36 crc kubenswrapper[4795]: 
I1129 08:00:36.502178 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djclf\" (UniqueName: \"kubernetes.io/projected/78c2fefa-d0f0-4123-9513-231b2c3ca5fd-kube-api-access-djclf\") pod \"watcher-operator-controller-manager-769dc69bc-h74tt\" (UID: \"78c2fefa-d0f0-4123-9513-231b2c3ca5fd\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-h74tt" Nov 29 08:00:36 crc kubenswrapper[4795]: E1129 08:00:36.502214 4795 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 08:00:36 crc kubenswrapper[4795]: E1129 08:00:36.502283 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/543b785e-bdb9-4582-b9dd-8a987b5129f6-cert podName:543b785e-bdb9-4582-b9dd-8a987b5129f6 nodeName:}" failed. No retries permitted until 2025-11-29 08:00:37.502265895 +0000 UTC m=+1283.477841685 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/543b785e-bdb9-4582-b9dd-8a987b5129f6-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl" (UID: "543b785e-bdb9-4582-b9dd-8a987b5129f6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.520290 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-l56hd" Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.529515 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4gr4\" (UniqueName: \"kubernetes.io/projected/543b785e-bdb9-4582-b9dd-8a987b5129f6-kube-api-access-h4gr4\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl\" (UID: \"543b785e-bdb9-4582-b9dd-8a987b5129f6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl" Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.530776 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbl7k\" (UniqueName: \"kubernetes.io/projected/e56bb4ff-9936-4876-8616-0958e9892fa3-kube-api-access-lbl7k\") pod \"swift-operator-controller-manager-5f8c65bbfc-6fkwt\" (UID: \"e56bb4ff-9936-4876-8616-0958e9892fa3\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-6fkwt" Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.533353 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxdhn\" (UniqueName: \"kubernetes.io/projected/c75b943b-8281-4fbd-a94a-3d5db0475d5d-kube-api-access-fxdhn\") pod \"telemetry-operator-controller-manager-d486dbd66-bt6tr\" (UID: \"c75b943b-8281-4fbd-a94a-3d5db0475d5d\") " pod="openstack-operators/telemetry-operator-controller-manager-d486dbd66-bt6tr" Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.533429 4795 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gzp9d\" (UniqueName: \"kubernetes.io/projected/a197813b-f5c3-49c1-81f6-b6b2e08e0617-kube-api-access-gzp9d\") pod \"ovn-operator-controller-manager-b6456fdb6-vdmph\" (UID: \"a197813b-f5c3-49c1-81f6-b6b2e08e0617\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-vdmph" Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.537074 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkvgk\" (UniqueName: \"kubernetes.io/projected/f5e4dece-2c02-41a1-ac8b-ec46eb2a3d45-kube-api-access-gkvgk\") pod \"placement-operator-controller-manager-78f8948974-9g674\" (UID: \"f5e4dece-2c02-41a1-ac8b-ec46eb2a3d45\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-9g674" Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.569979 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.603813 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lklv\" (UniqueName: \"kubernetes.io/projected/868e2666-5606-4891-ba11-ac02f852c48d-kube-api-access-2lklv\") pod \"openstack-operator-controller-manager-8688fc7b8-5sbpb\" (UID: \"868e2666-5606-4891-ba11-ac02f852c48d\") " pod="openstack-operators/openstack-operator-controller-manager-8688fc7b8-5sbpb" Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.603869 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/868e2666-5606-4891-ba11-ac02f852c48d-metrics-certs\") pod \"openstack-operator-controller-manager-8688fc7b8-5sbpb\" (UID: \"868e2666-5606-4891-ba11-ac02f852c48d\") " pod="openstack-operators/openstack-operator-controller-manager-8688fc7b8-5sbpb" Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.603973 4795 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-djclf\" (UniqueName: \"kubernetes.io/projected/78c2fefa-d0f0-4123-9513-231b2c3ca5fd-kube-api-access-djclf\") pod \"watcher-operator-controller-manager-769dc69bc-h74tt\" (UID: \"78c2fefa-d0f0-4123-9513-231b2c3ca5fd\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-h74tt" Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.604070 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/868e2666-5606-4891-ba11-ac02f852c48d-webhook-certs\") pod \"openstack-operator-controller-manager-8688fc7b8-5sbpb\" (UID: \"868e2666-5606-4891-ba11-ac02f852c48d\") " pod="openstack-operators/openstack-operator-controller-manager-8688fc7b8-5sbpb" Nov 29 08:00:36 crc kubenswrapper[4795]: E1129 08:00:36.604228 4795 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 29 08:00:36 crc kubenswrapper[4795]: E1129 08:00:36.604282 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/868e2666-5606-4891-ba11-ac02f852c48d-webhook-certs podName:868e2666-5606-4891-ba11-ac02f852c48d nodeName:}" failed. No retries permitted until 2025-11-29 08:00:37.104264455 +0000 UTC m=+1283.079840245 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/868e2666-5606-4891-ba11-ac02f852c48d-webhook-certs") pod "openstack-operator-controller-manager-8688fc7b8-5sbpb" (UID: "868e2666-5606-4891-ba11-ac02f852c48d") : secret "webhook-server-cert" not found Nov 29 08:00:36 crc kubenswrapper[4795]: E1129 08:00:36.604786 4795 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 29 08:00:36 crc kubenswrapper[4795]: E1129 08:00:36.604821 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/868e2666-5606-4891-ba11-ac02f852c48d-metrics-certs podName:868e2666-5606-4891-ba11-ac02f852c48d nodeName:}" failed. No retries permitted until 2025-11-29 08:00:37.104809171 +0000 UTC m=+1283.080384981 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/868e2666-5606-4891-ba11-ac02f852c48d-metrics-certs") pod "openstack-operator-controller-manager-8688fc7b8-5sbpb" (UID: "868e2666-5606-4891-ba11-ac02f852c48d") : secret "metrics-server-cert" not found Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.615433 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-27n4r"] Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.616513 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-27n4r" Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.622841 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-27n4r"] Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.667977 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-cvgrz" Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.669801 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-966jn" Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.675942 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9w4d\" (UniqueName: \"kubernetes.io/projected/4eea915e-348e-48a3-b5e1-767648dac19d-kube-api-access-n9w4d\") pod \"test-operator-controller-manager-5854674fcc-k6h4m\" (UID: \"4eea915e-348e-48a3-b5e1-767648dac19d\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-k6h4m" Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.707871 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdrcg\" (UniqueName: \"kubernetes.io/projected/7ce03a92-9abd-485c-b949-fb95301de889-kube-api-access-kdrcg\") pod \"rabbitmq-cluster-operator-manager-668c99d594-27n4r\" (UID: \"7ce03a92-9abd-485c-b949-fb95301de889\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-27n4r" Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.712272 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djclf\" (UniqueName: \"kubernetes.io/projected/78c2fefa-d0f0-4123-9513-231b2c3ca5fd-kube-api-access-djclf\") pod \"watcher-operator-controller-manager-769dc69bc-h74tt\" (UID: 
\"78c2fefa-d0f0-4123-9513-231b2c3ca5fd\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-h74tt" Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.714734 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-d5d4r" Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.793615 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lklv\" (UniqueName: \"kubernetes.io/projected/868e2666-5606-4891-ba11-ac02f852c48d-kube-api-access-2lklv\") pod \"openstack-operator-controller-manager-8688fc7b8-5sbpb\" (UID: \"868e2666-5606-4891-ba11-ac02f852c48d\") " pod="openstack-operators/openstack-operator-controller-manager-8688fc7b8-5sbpb" Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.833297 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdrcg\" (UniqueName: \"kubernetes.io/projected/7ce03a92-9abd-485c-b949-fb95301de889-kube-api-access-kdrcg\") pod \"rabbitmq-cluster-operator-manager-668c99d594-27n4r\" (UID: \"7ce03a92-9abd-485c-b949-fb95301de889\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-27n4r" Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.834870 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-vdmph" Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.864367 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-klbwf"] Nov 29 08:00:36 crc kubenswrapper[4795]: I1129 08:00:36.865163 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdrcg\" (UniqueName: \"kubernetes.io/projected/7ce03a92-9abd-485c-b949-fb95301de889-kube-api-access-kdrcg\") pod \"rabbitmq-cluster-operator-manager-668c99d594-27n4r\" (UID: \"7ce03a92-9abd-485c-b949-fb95301de889\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-27n4r" Nov 29 08:00:37 crc kubenswrapper[4795]: I1129 08:00:37.110425 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/868e2666-5606-4891-ba11-ac02f852c48d-metrics-certs\") pod \"openstack-operator-controller-manager-8688fc7b8-5sbpb\" (UID: \"868e2666-5606-4891-ba11-ac02f852c48d\") " pod="openstack-operators/openstack-operator-controller-manager-8688fc7b8-5sbpb" Nov 29 08:00:37 crc kubenswrapper[4795]: I1129 08:00:37.110623 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/868e2666-5606-4891-ba11-ac02f852c48d-webhook-certs\") pod \"openstack-operator-controller-manager-8688fc7b8-5sbpb\" (UID: \"868e2666-5606-4891-ba11-ac02f852c48d\") " pod="openstack-operators/openstack-operator-controller-manager-8688fc7b8-5sbpb" Nov 29 08:00:37 crc kubenswrapper[4795]: E1129 08:00:37.112646 4795 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 29 08:00:37 crc kubenswrapper[4795]: E1129 08:00:37.112715 4795 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/868e2666-5606-4891-ba11-ac02f852c48d-metrics-certs podName:868e2666-5606-4891-ba11-ac02f852c48d nodeName:}" failed. No retries permitted until 2025-11-29 08:00:38.112694913 +0000 UTC m=+1284.088270703 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/868e2666-5606-4891-ba11-ac02f852c48d-metrics-certs") pod "openstack-operator-controller-manager-8688fc7b8-5sbpb" (UID: "868e2666-5606-4891-ba11-ac02f852c48d") : secret "metrics-server-cert" not found Nov 29 08:00:37 crc kubenswrapper[4795]: E1129 08:00:37.113129 4795 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 29 08:00:37 crc kubenswrapper[4795]: E1129 08:00:37.113168 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/868e2666-5606-4891-ba11-ac02f852c48d-webhook-certs podName:868e2666-5606-4891-ba11-ac02f852c48d nodeName:}" failed. No retries permitted until 2025-11-29 08:00:38.113156276 +0000 UTC m=+1284.088732066 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/868e2666-5606-4891-ba11-ac02f852c48d-webhook-certs") pod "openstack-operator-controller-manager-8688fc7b8-5sbpb" (UID: "868e2666-5606-4891-ba11-ac02f852c48d") : secret "webhook-server-cert" not found Nov 29 08:00:37 crc kubenswrapper[4795]: I1129 08:00:37.340264 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-6fkwt" Nov 29 08:00:37 crc kubenswrapper[4795]: I1129 08:00:37.352802 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cc9825dd-340b-4dda-ab8a-91d95ee67678-cert\") pod \"infra-operator-controller-manager-57548d458d-rd6w8\" (UID: \"cc9825dd-340b-4dda-ab8a-91d95ee67678\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-rd6w8" Nov 29 08:00:37 crc kubenswrapper[4795]: E1129 08:00:37.352988 4795 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 29 08:00:37 crc kubenswrapper[4795]: E1129 08:00:37.353058 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc9825dd-340b-4dda-ab8a-91d95ee67678-cert podName:cc9825dd-340b-4dda-ab8a-91d95ee67678 nodeName:}" failed. No retries permitted until 2025-11-29 08:00:39.353035248 +0000 UTC m=+1285.328611038 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cc9825dd-340b-4dda-ab8a-91d95ee67678-cert") pod "infra-operator-controller-manager-57548d458d-rd6w8" (UID: "cc9825dd-340b-4dda-ab8a-91d95ee67678") : secret "infra-operator-webhook-server-cert" not found Nov 29 08:00:37 crc kubenswrapper[4795]: I1129 08:00:37.452539 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-qtbtd"] Nov 29 08:00:37 crc kubenswrapper[4795]: I1129 08:00:37.486633 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-spfsh"] Nov 29 08:00:37 crc kubenswrapper[4795]: I1129 08:00:37.491021 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-9g674" Nov 29 08:00:37 crc kubenswrapper[4795]: W1129 08:00:37.507808 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86217734_815f_461c_a32d_8d744192003e.slice/crio-0fffc65c982e873c0841498aaeec1c245b202c7c69d08a3a81381bcb025fccae WatchSource:0}: Error finding container 0fffc65c982e873c0841498aaeec1c245b202c7c69d08a3a81381bcb025fccae: Status 404 returned error can't find the container with id 0fffc65c982e873c0841498aaeec1c245b202c7c69d08a3a81381bcb025fccae Nov 29 08:00:37 crc kubenswrapper[4795]: I1129 08:00:37.581334 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/543b785e-bdb9-4582-b9dd-8a987b5129f6-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl\" (UID: \"543b785e-bdb9-4582-b9dd-8a987b5129f6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl" Nov 29 08:00:37 crc kubenswrapper[4795]: E1129 08:00:37.581539 4795 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 08:00:37 crc kubenswrapper[4795]: E1129 08:00:37.581878 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/543b785e-bdb9-4582-b9dd-8a987b5129f6-cert podName:543b785e-bdb9-4582-b9dd-8a987b5129f6 nodeName:}" failed. No retries permitted until 2025-11-29 08:00:39.581856765 +0000 UTC m=+1285.557432555 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/543b785e-bdb9-4582-b9dd-8a987b5129f6-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl" (UID: "543b785e-bdb9-4582-b9dd-8a987b5129f6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 08:00:37 crc kubenswrapper[4795]: I1129 08:00:37.650978 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-d486dbd66-bt6tr" Nov 29 08:00:37 crc kubenswrapper[4795]: I1129 08:00:37.684652 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-k6h4m" Nov 29 08:00:37 crc kubenswrapper[4795]: I1129 08:00:37.812213 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-h74tt" Nov 29 08:00:37 crc kubenswrapper[4795]: I1129 08:00:37.905853 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-27n4r" Nov 29 08:00:37 crc kubenswrapper[4795]: I1129 08:00:37.915895 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-spfsh" event={"ID":"d4e1473d-8426-452b-8030-764680cc5a20","Type":"ContainerStarted","Data":"0953eca01d97a8892ca3251e2ff29f69d95d7e33c5ae95038f16d3d91778952c"} Nov 29 08:00:37 crc kubenswrapper[4795]: I1129 08:00:37.952012 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-qtbtd" event={"ID":"86217734-815f-461c-a32d-8d744192003e","Type":"ContainerStarted","Data":"0fffc65c982e873c0841498aaeec1c245b202c7c69d08a3a81381bcb025fccae"} Nov 29 08:00:37 crc kubenswrapper[4795]: I1129 08:00:37.968517 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-klbwf" event={"ID":"7bed5103-966d-43d3-92f1-73a2f8b6d551","Type":"ContainerStarted","Data":"2f6405f7b9c5f680e781ae3427f9517a0cd97545c2331d89dabe34851e1834e8"} Nov 29 08:00:38 crc kubenswrapper[4795]: I1129 08:00:38.220174 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-xjns5"] Nov 29 08:00:38 crc kubenswrapper[4795]: I1129 08:00:38.221496 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/868e2666-5606-4891-ba11-ac02f852c48d-metrics-certs\") pod \"openstack-operator-controller-manager-8688fc7b8-5sbpb\" (UID: \"868e2666-5606-4891-ba11-ac02f852c48d\") " pod="openstack-operators/openstack-operator-controller-manager-8688fc7b8-5sbpb" Nov 29 08:00:38 crc kubenswrapper[4795]: I1129 08:00:38.221579 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/868e2666-5606-4891-ba11-ac02f852c48d-webhook-certs\") pod \"openstack-operator-controller-manager-8688fc7b8-5sbpb\" (UID: \"868e2666-5606-4891-ba11-ac02f852c48d\") " pod="openstack-operators/openstack-operator-controller-manager-8688fc7b8-5sbpb" Nov 29 08:00:38 crc kubenswrapper[4795]: E1129 08:00:38.221721 4795 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 29 08:00:38 crc kubenswrapper[4795]: E1129 08:00:38.221768 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/868e2666-5606-4891-ba11-ac02f852c48d-webhook-certs podName:868e2666-5606-4891-ba11-ac02f852c48d nodeName:}" failed. No retries permitted until 2025-11-29 08:00:40.22175593 +0000 UTC m=+1286.197331720 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/868e2666-5606-4891-ba11-ac02f852c48d-webhook-certs") pod "openstack-operator-controller-manager-8688fc7b8-5sbpb" (UID: "868e2666-5606-4891-ba11-ac02f852c48d") : secret "webhook-server-cert" not found Nov 29 08:00:38 crc kubenswrapper[4795]: E1129 08:00:38.222089 4795 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 29 08:00:38 crc kubenswrapper[4795]: E1129 08:00:38.222122 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/868e2666-5606-4891-ba11-ac02f852c48d-metrics-certs podName:868e2666-5606-4891-ba11-ac02f852c48d nodeName:}" failed. No retries permitted until 2025-11-29 08:00:40.22211328 +0000 UTC m=+1286.197689060 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/868e2666-5606-4891-ba11-ac02f852c48d-metrics-certs") pod "openstack-operator-controller-manager-8688fc7b8-5sbpb" (UID: "868e2666-5606-4891-ba11-ac02f852c48d") : secret "metrics-server-cert" not found Nov 29 08:00:38 crc kubenswrapper[4795]: I1129 08:00:38.242162 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-8mk4s"] Nov 29 08:00:38 crc kubenswrapper[4795]: I1129 08:00:38.714536 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-668d9c48b9-dr8f4"] Nov 29 08:00:38 crc kubenswrapper[4795]: W1129 08:00:38.726293 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ff17662_f7b1_4870_9ef2_18a81fdb5d73.slice/crio-134826b77939faa16ef2fe53256c9095b81b4b22892958c2c2f2fe02664eeae9 WatchSource:0}: Error finding container 134826b77939faa16ef2fe53256c9095b81b4b22892958c2c2f2fe02664eeae9: Status 404 returned error can't find the container with id 134826b77939faa16ef2fe53256c9095b81b4b22892958c2c2f2fe02664eeae9 Nov 29 08:00:38 crc kubenswrapper[4795]: I1129 08:00:38.726501 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-5q5dd"] Nov 29 08:00:38 crc kubenswrapper[4795]: I1129 08:00:38.807442 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-njfdg"] Nov 29 08:00:38 crc kubenswrapper[4795]: W1129 08:00:38.959720 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfb2e88b_d2db_4afa_8511_e1a896eb9039.slice/crio-e348b082ba51c42133551378e3d4d884da7139d69381c64163c8d4a55d4d754e WatchSource:0}: Error finding container 
e348b082ba51c42133551378e3d4d884da7139d69381c64163c8d4a55d4d754e: Status 404 returned error can't find the container with id e348b082ba51c42133551378e3d4d884da7139d69381c64163c8d4a55d4d754e Nov 29 08:00:39 crc kubenswrapper[4795]: I1129 08:00:39.033863 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-njfdg" event={"ID":"bfb2e88b-d2db-4afa-8511-e1a896eb9039","Type":"ContainerStarted","Data":"e348b082ba51c42133551378e3d4d884da7139d69381c64163c8d4a55d4d754e"} Nov 29 08:00:39 crc kubenswrapper[4795]: I1129 08:00:39.038386 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-dr8f4" event={"ID":"3ff17662-f7b1-4870-9ef2-18a81fdb5d73","Type":"ContainerStarted","Data":"134826b77939faa16ef2fe53256c9095b81b4b22892958c2c2f2fe02664eeae9"} Nov 29 08:00:39 crc kubenswrapper[4795]: I1129 08:00:39.048296 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-8mk4s" event={"ID":"7bfb1aea-ee92-4033-ba5f-c7ac79c60d0e","Type":"ContainerStarted","Data":"5a0f61ecc1f0d4326362011d0ba4b4ed44fd3c35df8145fb91d41bc14adeaae5"} Nov 29 08:00:39 crc kubenswrapper[4795]: I1129 08:00:39.082897 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-xjns5" event={"ID":"36a279fc-25f1-407e-a1c6-6b8689d68cd2","Type":"ContainerStarted","Data":"70d8e97269c5e401c2d3080b021cbcd20a62c615cf9246d21e032eabeefa0032"} Nov 29 08:00:39 crc kubenswrapper[4795]: I1129 08:00:39.135902 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-5q5dd" event={"ID":"b53e6db5-a3fe-4f86-9b6a-49eb7d4629fd","Type":"ContainerStarted","Data":"8fbf32e6f761a7dc5ea480162855698680c3373eeab28441e1686cfb9aa007fb"} Nov 29 08:00:39 crc kubenswrapper[4795]: I1129 08:00:39.272869 4795 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-546d4bdf48-cn6z7"] Nov 29 08:00:39 crc kubenswrapper[4795]: I1129 08:00:39.286033 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-966jn"] Nov 29 08:00:39 crc kubenswrapper[4795]: W1129 08:00:39.291092 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8de1af69_5c67_4669_83d5_02de0ecd32d3.slice/crio-3aaef76c2b9b968bf259072fdb65f2136768d68e54b04c22994ce19c057934a0 WatchSource:0}: Error finding container 3aaef76c2b9b968bf259072fdb65f2136768d68e54b04c22994ce19c057934a0: Status 404 returned error can't find the container with id 3aaef76c2b9b968bf259072fdb65f2136768d68e54b04c22994ce19c057934a0 Nov 29 08:00:39 crc kubenswrapper[4795]: I1129 08:00:39.367074 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cc9825dd-340b-4dda-ab8a-91d95ee67678-cert\") pod \"infra-operator-controller-manager-57548d458d-rd6w8\" (UID: \"cc9825dd-340b-4dda-ab8a-91d95ee67678\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-rd6w8" Nov 29 08:00:39 crc kubenswrapper[4795]: E1129 08:00:39.367812 4795 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 29 08:00:39 crc kubenswrapper[4795]: E1129 08:00:39.368320 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc9825dd-340b-4dda-ab8a-91d95ee67678-cert podName:cc9825dd-340b-4dda-ab8a-91d95ee67678 nodeName:}" failed. No retries permitted until 2025-11-29 08:00:43.368287513 +0000 UTC m=+1289.343863303 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cc9825dd-340b-4dda-ab8a-91d95ee67678-cert") pod "infra-operator-controller-manager-57548d458d-rd6w8" (UID: "cc9825dd-340b-4dda-ab8a-91d95ee67678") : secret "infra-operator-webhook-server-cert" not found Nov 29 08:00:39 crc kubenswrapper[4795]: I1129 08:00:39.487940 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-6fkwt"] Nov 29 08:00:39 crc kubenswrapper[4795]: I1129 08:00:39.507162 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-cmvt9"] Nov 29 08:00:39 crc kubenswrapper[4795]: I1129 08:00:39.516742 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-d5d4r"] Nov 29 08:00:39 crc kubenswrapper[4795]: I1129 08:00:39.535363 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-vdmph"] Nov 29 08:00:39 crc kubenswrapper[4795]: I1129 08:00:39.551453 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6546668bfd-mvrzj"] Nov 29 08:00:39 crc kubenswrapper[4795]: W1129 08:00:39.573253 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2367076_6d52_4047_908c_c1e32c4ca2c4.slice/crio-580138e5a1009c0f90d06205075e50ccd445fe3822ceed666b8628b3dbe76a88 WatchSource:0}: Error finding container 580138e5a1009c0f90d06205075e50ccd445fe3822ceed666b8628b3dbe76a88: Status 404 returned error can't find the container with id 580138e5a1009c0f90d06205075e50ccd445fe3822ceed666b8628b3dbe76a88 Nov 29 08:00:39 crc kubenswrapper[4795]: I1129 08:00:39.678246 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-h74tt"] Nov 29 08:00:39 crc kubenswrapper[4795]: I1129 08:00:39.678961 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/543b785e-bdb9-4582-b9dd-8a987b5129f6-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl\" (UID: \"543b785e-bdb9-4582-b9dd-8a987b5129f6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl" Nov 29 08:00:39 crc kubenswrapper[4795]: E1129 08:00:39.679204 4795 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 08:00:39 crc kubenswrapper[4795]: E1129 08:00:39.679261 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/543b785e-bdb9-4582-b9dd-8a987b5129f6-cert podName:543b785e-bdb9-4582-b9dd-8a987b5129f6 nodeName:}" failed. No retries permitted until 2025-11-29 08:00:43.679243656 +0000 UTC m=+1289.654819446 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/543b785e-bdb9-4582-b9dd-8a987b5129f6-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl" (UID: "543b785e-bdb9-4582-b9dd-8a987b5129f6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 08:00:39 crc kubenswrapper[4795]: W1129 08:00:39.691830 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78c2fefa_d0f0_4123_9513_231b2c3ca5fd.slice/crio-9d8b5827318c413d0eb3cf4f5114716c0eefb574a0dfcea1934aca422a394e42 WatchSource:0}: Error finding container 9d8b5827318c413d0eb3cf4f5114716c0eefb574a0dfcea1934aca422a394e42: Status 404 returned error can't find the container with id 9d8b5827318c413d0eb3cf4f5114716c0eefb574a0dfcea1934aca422a394e42 Nov 29 08:00:39 crc kubenswrapper[4795]: I1129 08:00:39.700490 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-9g674"] Nov 29 08:00:39 crc kubenswrapper[4795]: W1129 08:00:39.704537 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5e4dece_2c02_41a1_ac8b_ec46eb2a3d45.slice/crio-1fcba7c2e3c2aba881141615260ac4f7a7801bc96c9fa8ebe7340d4f1796ad1c WatchSource:0}: Error finding container 1fcba7c2e3c2aba881141615260ac4f7a7801bc96c9fa8ebe7340d4f1796ad1c: Status 404 returned error can't find the container with id 1fcba7c2e3c2aba881141615260ac4f7a7801bc96c9fa8ebe7340d4f1796ad1c Nov 29 08:00:39 crc kubenswrapper[4795]: I1129 08:00:39.713850 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-d486dbd66-bt6tr"] Nov 29 08:00:39 crc kubenswrapper[4795]: I1129 08:00:39.723218 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-27n4r"] Nov 29 
08:00:39 crc kubenswrapper[4795]: E1129 08:00:39.728127 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kdrcg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in 
pod rabbitmq-cluster-operator-manager-668c99d594-27n4r_openstack-operators(7ce03a92-9abd-485c-b949-fb95301de889): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 29 08:00:39 crc kubenswrapper[4795]: E1129 08:00:39.730383 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-27n4r" podUID="7ce03a92-9abd-485c-b949-fb95301de889" Nov 29 08:00:39 crc kubenswrapper[4795]: I1129 08:00:39.735609 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-k6h4m"] Nov 29 08:00:40 crc kubenswrapper[4795]: I1129 08:00:40.144988 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-cmvt9" event={"ID":"94d164fa-c521-4617-8338-1eba3ee1c31d","Type":"ContainerStarted","Data":"43c93ebf41f2d4fa6f1b94e779e4274d662e1e44e68d27e8234e79242109c548"} Nov 29 08:00:40 crc kubenswrapper[4795]: I1129 08:00:40.147113 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-966jn" event={"ID":"8de1af69-5c67-4669-83d5-02de0ecd32d3","Type":"ContainerStarted","Data":"3aaef76c2b9b968bf259072fdb65f2136768d68e54b04c22994ce19c057934a0"} Nov 29 08:00:40 crc kubenswrapper[4795]: I1129 08:00:40.150114 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-d5d4r" event={"ID":"f2367076-6d52-4047-908c-c1e32c4ca2c4","Type":"ContainerStarted","Data":"580138e5a1009c0f90d06205075e50ccd445fe3822ceed666b8628b3dbe76a88"} Nov 29 08:00:40 crc kubenswrapper[4795]: I1129 08:00:40.151547 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-6fkwt" 
event={"ID":"e56bb4ff-9936-4876-8616-0958e9892fa3","Type":"ContainerStarted","Data":"1d45dd7ab493234b972d6cb73a49a729254055e43439d1faf952c73b7634da1b"} Nov 29 08:00:40 crc kubenswrapper[4795]: I1129 08:00:40.152879 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-cn6z7" event={"ID":"1d6dd43f-eee0-4257-adbb-a53218a86eb9","Type":"ContainerStarted","Data":"a4eaa19af4252ccc38fa95d4a2acedb7671a82e696aa6fd9f7e84beaaddb2b23"} Nov 29 08:00:40 crc kubenswrapper[4795]: I1129 08:00:40.153697 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-d486dbd66-bt6tr" event={"ID":"c75b943b-8281-4fbd-a94a-3d5db0475d5d","Type":"ContainerStarted","Data":"0551f4a411f488de5c9d7d1b2b625f9d06eb556741907579a7f159c36b5c681b"} Nov 29 08:00:40 crc kubenswrapper[4795]: I1129 08:00:40.154887 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-vdmph" event={"ID":"a197813b-f5c3-49c1-81f6-b6b2e08e0617","Type":"ContainerStarted","Data":"6b997f7cec0ceefea0de590f2f6446c9a233f5935b912196f3f0aab5fffa26c3"} Nov 29 08:00:40 crc kubenswrapper[4795]: I1129 08:00:40.158858 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-27n4r" event={"ID":"7ce03a92-9abd-485c-b949-fb95301de889","Type":"ContainerStarted","Data":"c02c79e4504c459b1041e60fc0957a053c2c2fe848b7ccd66ae707e1ffbb0666"} Nov 29 08:00:40 crc kubenswrapper[4795]: E1129 08:00:40.160003 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-27n4r" 
podUID="7ce03a92-9abd-485c-b949-fb95301de889" Nov 29 08:00:40 crc kubenswrapper[4795]: I1129 08:00:40.161078 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-k6h4m" event={"ID":"4eea915e-348e-48a3-b5e1-767648dac19d","Type":"ContainerStarted","Data":"ad443b3be1316f0fd7fab148499ec41cdcb9be914830b101b67b481967a439e6"} Nov 29 08:00:40 crc kubenswrapper[4795]: I1129 08:00:40.161840 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-h74tt" event={"ID":"78c2fefa-d0f0-4123-9513-231b2c3ca5fd","Type":"ContainerStarted","Data":"9d8b5827318c413d0eb3cf4f5114716c0eefb574a0dfcea1934aca422a394e42"} Nov 29 08:00:40 crc kubenswrapper[4795]: I1129 08:00:40.162538 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-9g674" event={"ID":"f5e4dece-2c02-41a1-ac8b-ec46eb2a3d45","Type":"ContainerStarted","Data":"1fcba7c2e3c2aba881141615260ac4f7a7801bc96c9fa8ebe7340d4f1796ad1c"} Nov 29 08:00:40 crc kubenswrapper[4795]: I1129 08:00:40.163964 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-mvrzj" event={"ID":"36512615-d21b-4484-af03-ffa1d325883b","Type":"ContainerStarted","Data":"3b40ee4dee31c24a327a7d5dfb2512ef828106c5e8742f54333fc99a185d4b41"} Nov 29 08:00:40 crc kubenswrapper[4795]: I1129 08:00:40.316270 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/868e2666-5606-4891-ba11-ac02f852c48d-metrics-certs\") pod \"openstack-operator-controller-manager-8688fc7b8-5sbpb\" (UID: \"868e2666-5606-4891-ba11-ac02f852c48d\") " pod="openstack-operators/openstack-operator-controller-manager-8688fc7b8-5sbpb" Nov 29 08:00:40 crc kubenswrapper[4795]: I1129 08:00:40.316378 4795 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/868e2666-5606-4891-ba11-ac02f852c48d-webhook-certs\") pod \"openstack-operator-controller-manager-8688fc7b8-5sbpb\" (UID: \"868e2666-5606-4891-ba11-ac02f852c48d\") " pod="openstack-operators/openstack-operator-controller-manager-8688fc7b8-5sbpb" Nov 29 08:00:40 crc kubenswrapper[4795]: E1129 08:00:40.316531 4795 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 29 08:00:40 crc kubenswrapper[4795]: E1129 08:00:40.316585 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/868e2666-5606-4891-ba11-ac02f852c48d-webhook-certs podName:868e2666-5606-4891-ba11-ac02f852c48d nodeName:}" failed. No retries permitted until 2025-11-29 08:00:44.316567259 +0000 UTC m=+1290.292143049 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/868e2666-5606-4891-ba11-ac02f852c48d-webhook-certs") pod "openstack-operator-controller-manager-8688fc7b8-5sbpb" (UID: "868e2666-5606-4891-ba11-ac02f852c48d") : secret "webhook-server-cert" not found Nov 29 08:00:40 crc kubenswrapper[4795]: E1129 08:00:40.316859 4795 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 29 08:00:40 crc kubenswrapper[4795]: E1129 08:00:40.316955 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/868e2666-5606-4891-ba11-ac02f852c48d-metrics-certs podName:868e2666-5606-4891-ba11-ac02f852c48d nodeName:}" failed. No retries permitted until 2025-11-29 08:00:44.316935849 +0000 UTC m=+1290.292511639 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/868e2666-5606-4891-ba11-ac02f852c48d-metrics-certs") pod "openstack-operator-controller-manager-8688fc7b8-5sbpb" (UID: "868e2666-5606-4891-ba11-ac02f852c48d") : secret "metrics-server-cert" not found Nov 29 08:00:41 crc kubenswrapper[4795]: E1129 08:00:41.174295 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-27n4r" podUID="7ce03a92-9abd-485c-b949-fb95301de889" Nov 29 08:00:41 crc kubenswrapper[4795]: I1129 08:00:41.941239 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:00:41 crc kubenswrapper[4795]: I1129 08:00:41.941740 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:00:41 crc kubenswrapper[4795]: I1129 08:00:41.941808 4795 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" Nov 29 08:00:41 crc kubenswrapper[4795]: I1129 08:00:41.942877 4795 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"d9f96d395ac312701f390d1e390e800e858ea785b49bf9e565d383c5df5e5a12"} pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 08:00:41 crc kubenswrapper[4795]: I1129 08:00:41.943078 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" containerID="cri-o://d9f96d395ac312701f390d1e390e800e858ea785b49bf9e565d383c5df5e5a12" gracePeriod=600 Nov 29 08:00:42 crc kubenswrapper[4795]: I1129 08:00:42.217055 4795 generic.go:334] "Generic (PLEG): container finished" podID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerID="d9f96d395ac312701f390d1e390e800e858ea785b49bf9e565d383c5df5e5a12" exitCode=0 Nov 29 08:00:42 crc kubenswrapper[4795]: I1129 08:00:42.217141 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerDied","Data":"d9f96d395ac312701f390d1e390e800e858ea785b49bf9e565d383c5df5e5a12"} Nov 29 08:00:42 crc kubenswrapper[4795]: I1129 08:00:42.217193 4795 scope.go:117] "RemoveContainer" containerID="ab335149a24428cc9c0c5e9165d87fbe177cd1f8c86a0c5a601208bae120d8f2" Nov 29 08:00:43 crc kubenswrapper[4795]: I1129 08:00:43.407650 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cc9825dd-340b-4dda-ab8a-91d95ee67678-cert\") pod \"infra-operator-controller-manager-57548d458d-rd6w8\" (UID: \"cc9825dd-340b-4dda-ab8a-91d95ee67678\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-rd6w8" Nov 29 08:00:43 crc kubenswrapper[4795]: E1129 08:00:43.408792 4795 secret.go:188] Couldn't get secret 
openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 29 08:00:43 crc kubenswrapper[4795]: E1129 08:00:43.408895 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc9825dd-340b-4dda-ab8a-91d95ee67678-cert podName:cc9825dd-340b-4dda-ab8a-91d95ee67678 nodeName:}" failed. No retries permitted until 2025-11-29 08:00:51.408870432 +0000 UTC m=+1297.384446222 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cc9825dd-340b-4dda-ab8a-91d95ee67678-cert") pod "infra-operator-controller-manager-57548d458d-rd6w8" (UID: "cc9825dd-340b-4dda-ab8a-91d95ee67678") : secret "infra-operator-webhook-server-cert" not found Nov 29 08:00:43 crc kubenswrapper[4795]: I1129 08:00:43.715157 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/543b785e-bdb9-4582-b9dd-8a987b5129f6-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl\" (UID: \"543b785e-bdb9-4582-b9dd-8a987b5129f6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl" Nov 29 08:00:43 crc kubenswrapper[4795]: E1129 08:00:43.715439 4795 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 08:00:43 crc kubenswrapper[4795]: E1129 08:00:43.715537 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/543b785e-bdb9-4582-b9dd-8a987b5129f6-cert podName:543b785e-bdb9-4582-b9dd-8a987b5129f6 nodeName:}" failed. No retries permitted until 2025-11-29 08:00:51.715512201 +0000 UTC m=+1297.691088061 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/543b785e-bdb9-4582-b9dd-8a987b5129f6-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl" (UID: "543b785e-bdb9-4582-b9dd-8a987b5129f6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 08:00:44 crc kubenswrapper[4795]: I1129 08:00:44.330769 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/868e2666-5606-4891-ba11-ac02f852c48d-metrics-certs\") pod \"openstack-operator-controller-manager-8688fc7b8-5sbpb\" (UID: \"868e2666-5606-4891-ba11-ac02f852c48d\") " pod="openstack-operators/openstack-operator-controller-manager-8688fc7b8-5sbpb" Nov 29 08:00:44 crc kubenswrapper[4795]: I1129 08:00:44.330907 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/868e2666-5606-4891-ba11-ac02f852c48d-webhook-certs\") pod \"openstack-operator-controller-manager-8688fc7b8-5sbpb\" (UID: \"868e2666-5606-4891-ba11-ac02f852c48d\") " pod="openstack-operators/openstack-operator-controller-manager-8688fc7b8-5sbpb" Nov 29 08:00:44 crc kubenswrapper[4795]: E1129 08:00:44.330928 4795 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 29 08:00:44 crc kubenswrapper[4795]: E1129 08:00:44.331000 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/868e2666-5606-4891-ba11-ac02f852c48d-metrics-certs podName:868e2666-5606-4891-ba11-ac02f852c48d nodeName:}" failed. No retries permitted until 2025-11-29 08:00:52.330979983 +0000 UTC m=+1298.306555823 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/868e2666-5606-4891-ba11-ac02f852c48d-metrics-certs") pod "openstack-operator-controller-manager-8688fc7b8-5sbpb" (UID: "868e2666-5606-4891-ba11-ac02f852c48d") : secret "metrics-server-cert" not found Nov 29 08:00:44 crc kubenswrapper[4795]: E1129 08:00:44.331084 4795 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 29 08:00:44 crc kubenswrapper[4795]: E1129 08:00:44.331140 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/868e2666-5606-4891-ba11-ac02f852c48d-webhook-certs podName:868e2666-5606-4891-ba11-ac02f852c48d nodeName:}" failed. No retries permitted until 2025-11-29 08:00:52.331125457 +0000 UTC m=+1298.306701247 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/868e2666-5606-4891-ba11-ac02f852c48d-webhook-certs") pod "openstack-operator-controller-manager-8688fc7b8-5sbpb" (UID: "868e2666-5606-4891-ba11-ac02f852c48d") : secret "webhook-server-cert" not found Nov 29 08:00:51 crc kubenswrapper[4795]: I1129 08:00:51.465029 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cc9825dd-340b-4dda-ab8a-91d95ee67678-cert\") pod \"infra-operator-controller-manager-57548d458d-rd6w8\" (UID: \"cc9825dd-340b-4dda-ab8a-91d95ee67678\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-rd6w8" Nov 29 08:00:51 crc kubenswrapper[4795]: I1129 08:00:51.474428 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cc9825dd-340b-4dda-ab8a-91d95ee67678-cert\") pod \"infra-operator-controller-manager-57548d458d-rd6w8\" (UID: \"cc9825dd-340b-4dda-ab8a-91d95ee67678\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-rd6w8" Nov 29 08:00:51 crc 
kubenswrapper[4795]: I1129 08:00:51.598529 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-rd6w8" Nov 29 08:00:51 crc kubenswrapper[4795]: I1129 08:00:51.781904 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/543b785e-bdb9-4582-b9dd-8a987b5129f6-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl\" (UID: \"543b785e-bdb9-4582-b9dd-8a987b5129f6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl" Nov 29 08:00:51 crc kubenswrapper[4795]: I1129 08:00:51.787362 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/543b785e-bdb9-4582-b9dd-8a987b5129f6-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl\" (UID: \"543b785e-bdb9-4582-b9dd-8a987b5129f6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl" Nov 29 08:00:51 crc kubenswrapper[4795]: I1129 08:00:51.798145 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl" Nov 29 08:00:52 crc kubenswrapper[4795]: I1129 08:00:52.396716 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/868e2666-5606-4891-ba11-ac02f852c48d-webhook-certs\") pod \"openstack-operator-controller-manager-8688fc7b8-5sbpb\" (UID: \"868e2666-5606-4891-ba11-ac02f852c48d\") " pod="openstack-operators/openstack-operator-controller-manager-8688fc7b8-5sbpb" Nov 29 08:00:52 crc kubenswrapper[4795]: I1129 08:00:52.400079 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/868e2666-5606-4891-ba11-ac02f852c48d-metrics-certs\") pod \"openstack-operator-controller-manager-8688fc7b8-5sbpb\" (UID: \"868e2666-5606-4891-ba11-ac02f852c48d\") " pod="openstack-operators/openstack-operator-controller-manager-8688fc7b8-5sbpb" Nov 29 08:00:52 crc kubenswrapper[4795]: E1129 08:00:52.402115 4795 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 29 08:00:52 crc kubenswrapper[4795]: E1129 08:00:52.402769 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/868e2666-5606-4891-ba11-ac02f852c48d-webhook-certs podName:868e2666-5606-4891-ba11-ac02f852c48d nodeName:}" failed. No retries permitted until 2025-11-29 08:01:08.402741513 +0000 UTC m=+1314.378317303 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/868e2666-5606-4891-ba11-ac02f852c48d-webhook-certs") pod "openstack-operator-controller-manager-8688fc7b8-5sbpb" (UID: "868e2666-5606-4891-ba11-ac02f852c48d") : secret "webhook-server-cert" not found Nov 29 08:00:52 crc kubenswrapper[4795]: I1129 08:00:52.420914 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/868e2666-5606-4891-ba11-ac02f852c48d-metrics-certs\") pod \"openstack-operator-controller-manager-8688fc7b8-5sbpb\" (UID: \"868e2666-5606-4891-ba11-ac02f852c48d\") " pod="openstack-operators/openstack-operator-controller-manager-8688fc7b8-5sbpb" Nov 29 08:00:56 crc kubenswrapper[4795]: E1129 08:00:56.696243 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:9f68d7bc8c6bce38f46dee8a8272d5365c49fe7b32b2af52e8ac884e212f3a85" Nov 29 08:00:56 crc kubenswrapper[4795]: E1129 08:00:56.697113 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:9f68d7bc8c6bce38f46dee8a8272d5365c49fe7b32b2af52e8ac884e212f3a85,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jnsjn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-78b4bc895b-8mk4s_openstack-operators(7bfb1aea-ee92-4033-ba5f-c7ac79c60d0e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 08:00:56 crc kubenswrapper[4795]: E1129 08:00:56.796839 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:7d6ca59745ac48971cbc2d72b53fe413144fa5c0c21f2ef1d7aaf1291851e501: Get 
\"https://quay.io/v2/openstack-k8s-operators/nova-operator/blobs/sha256:7d6ca59745ac48971cbc2d72b53fe413144fa5c0c21f2ef1d7aaf1291851e501\": context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Nov 29 08:00:56 crc kubenswrapper[4795]: E1129 08:00:56.797055 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7sqj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-966jn_openstack-operators(8de1af69-5c67-4669-83d5-02de0ecd32d3): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:7d6ca59745ac48971cbc2d72b53fe413144fa5c0c21f2ef1d7aaf1291851e501: Get \"https://quay.io/v2/openstack-k8s-operators/nova-operator/blobs/sha256:7d6ca59745ac48971cbc2d72b53fe413144fa5c0c21f2ef1d7aaf1291851e501\": context canceled" logger="UnhandledError" Nov 29 08:01:01 crc kubenswrapper[4795]: E1129 08:01:01.589518 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea" Nov 29 08:01:01 crc kubenswrapper[4795]: E1129 08:01:01.590243 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cqq8p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7d9dfd778-klbwf_openstack-operators(7bed5103-966d-43d3-92f1-73a2f8b6d551): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 08:01:03 crc kubenswrapper[4795]: E1129 08:01:03.101152 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:1d60701214b39cdb0fa70bbe5710f9b131139a9f4b482c2db4058a04daefb801" Nov 29 08:01:03 crc kubenswrapper[4795]: E1129 08:01:03.101697 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:1d60701214b39cdb0fa70bbe5710f9b131139a9f4b482c2db4058a04daefb801,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8d4x9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-859b6ccc6-qtbtd_openstack-operators(86217734-815f-461c-a32d-8d744192003e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 08:01:07 crc kubenswrapper[4795]: E1129 08:01:07.061404 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7" Nov 29 08:01:07 crc kubenswrapper[4795]: E1129 08:01:07.062180 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8ffpl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-56bbcc9d85-cmvt9_openstack-operators(94d164fa-c521-4617-8338-1eba3ee1c31d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 08:01:07 crc kubenswrapper[4795]: E1129 08:01:07.673037 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:0f523b7e2fa9e86fef986acf07d0c42d5658c475d565f11eaea926ebffcb6530" Nov 29 08:01:07 crc kubenswrapper[4795]: E1129 08:01:07.673640 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:0f523b7e2fa9e86fef986acf07d0c42d5658c475d565f11eaea926ebffcb6530,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ztgt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-6c548fd776-5q5dd_openstack-operators(b53e6db5-a3fe-4f86-9b6a-49eb7d4629fd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 08:01:08 crc kubenswrapper[4795]: I1129 08:01:08.483876 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/868e2666-5606-4891-ba11-ac02f852c48d-webhook-certs\") pod \"openstack-operator-controller-manager-8688fc7b8-5sbpb\" (UID: \"868e2666-5606-4891-ba11-ac02f852c48d\") " pod="openstack-operators/openstack-operator-controller-manager-8688fc7b8-5sbpb" Nov 29 08:01:08 crc kubenswrapper[4795]: I1129 08:01:08.490731 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/868e2666-5606-4891-ba11-ac02f852c48d-webhook-certs\") pod \"openstack-operator-controller-manager-8688fc7b8-5sbpb\" (UID: \"868e2666-5606-4891-ba11-ac02f852c48d\") " pod="openstack-operators/openstack-operator-controller-manager-8688fc7b8-5sbpb" Nov 29 08:01:08 crc kubenswrapper[4795]: I1129 08:01:08.789372 4795 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-8688fc7b8-5sbpb" Nov 29 08:01:14 crc kubenswrapper[4795]: E1129 08:01:14.376499 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429" Nov 29 08:01:14 crc kubenswrapper[4795]: E1129 08:01:14.377067 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vcxhk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-5f64f6f8bb-xjns5_openstack-operators(36a279fc-25f1-407e-a1c6-6b8689d68cd2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 08:01:14 crc kubenswrapper[4795]: E1129 08:01:14.918414 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d" Nov 29 08:01:14 crc kubenswrapper[4795]: E1129 08:01:14.918573 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d,Command:[/manager],Args:[--leader-elect 
--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lbl7k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-6fkwt_openstack-operators(e56bb4ff-9936-4876-8616-0958e9892fa3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 08:01:15 crc kubenswrapper[4795]: E1129 08:01:15.385824 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557" Nov 29 08:01:15 crc kubenswrapper[4795]: E1129 08:01:15.386275 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-slbtr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-njfdg_openstack-operators(bfb2e88b-d2db-4afa-8511-e1a896eb9039): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 08:01:16 crc kubenswrapper[4795]: E1129 08:01:16.032684 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59" Nov 29 08:01:16 crc kubenswrapper[4795]: E1129 08:01:16.032902 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gzp9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-vdmph_openstack-operators(a197813b-f5c3-49c1-81f6-b6b2e08e0617): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 08:01:16 crc kubenswrapper[4795]: E1129 08:01:16.562726 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:ecf7be921850bdc04697ed1b332bab39ad2a64e4e45c2a445c04f9bae6ac61b5" Nov 29 08:01:16 crc kubenswrapper[4795]: E1129 08:01:16.562932 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:ecf7be921850bdc04697ed1b332bab39ad2a64e4e45c2a445c04f9bae6ac61b5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jc92j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-6546668bfd-mvrzj_openstack-operators(36512615-d21b-4484-af03-ffa1d325883b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 08:01:20 crc kubenswrapper[4795]: E1129 08:01:20.026816 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168" Nov 29 08:01:20 crc kubenswrapper[4795]: E1129 08:01:20.027508 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5b8fd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-d5d4r_openstack-operators(f2367076-6d52-4047-908c-c1e32c4ca2c4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 08:01:20 crc kubenswrapper[4795]: E1129 08:01:20.726191 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621" Nov 29 08:01:20 crc kubenswrapper[4795]: E1129 08:01:20.726406 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-djclf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-h74tt_openstack-operators(78c2fefa-d0f0-4123-9513-231b2c3ca5fd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 08:01:21 crc kubenswrapper[4795]: E1129 08:01:21.268896 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f" Nov 29 08:01:21 crc kubenswrapper[4795]: E1129 08:01:21.269093 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gkvgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-9g674_openstack-operators(f5e4dece-2c02-41a1-ac8b-ec46eb2a3d45): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 08:01:21 crc kubenswrapper[4795]: E1129 08:01:21.833680 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:986861e5a0a9954f63581d9d55a30f8057883cefea489415d76257774526eea3" Nov 29 08:01:21 crc kubenswrapper[4795]: E1129 08:01:21.834238 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:986861e5a0a9954f63581d9d55a30f8057883cefea489415d76257774526eea3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w2qmt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-546d4bdf48-cn6z7_openstack-operators(1d6dd43f-eee0-4257-adbb-a53218a86eb9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 08:01:28 crc kubenswrapper[4795]: E1129 08:01:28.648334 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.119:5001/openstack-k8s-operators/telemetry-operator:a78c62110b644e65ff67df5c075c61a894a42046" Nov 29 08:01:28 crc kubenswrapper[4795]: E1129 08:01:28.648923 4795 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.119:5001/openstack-k8s-operators/telemetry-operator:a78c62110b644e65ff67df5c075c61a894a42046" Nov 29 08:01:28 crc kubenswrapper[4795]: E1129 08:01:28.649083 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.119:5001/openstack-k8s-operators/telemetry-operator:a78c62110b644e65ff67df5c075c61a894a42046,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fxdhn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-d486dbd66-bt6tr_openstack-operators(c75b943b-8281-4fbd-a94a-3d5db0475d5d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 08:01:30 crc kubenswrapper[4795]: E1129 08:01:30.922247 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Nov 29 08:01:30 crc kubenswrapper[4795]: E1129 08:01:30.923845 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kdrcg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-27n4r_openstack-operators(7ce03a92-9abd-485c-b949-fb95301de889): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 08:01:30 crc kubenswrapper[4795]: E1129 08:01:30.925952 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-27n4r" podUID="7ce03a92-9abd-485c-b949-fb95301de889" Nov 29 08:01:31 crc kubenswrapper[4795]: I1129 08:01:31.388062 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerStarted","Data":"92bdb17e0171829dd368ff834d02a1b920553ce70337271f54fb17757386f741"} Nov 29 08:01:31 crc kubenswrapper[4795]: I1129 08:01:31.391003 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-rd6w8"] Nov 29 08:01:31 crc kubenswrapper[4795]: I1129 08:01:31.633308 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl"] Nov 29 08:01:31 crc kubenswrapper[4795]: W1129 08:01:31.726807 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod543b785e_bdb9_4582_b9dd_8a987b5129f6.slice/crio-1541b2e20299962a4cda3d75823857dfab7331bc8ad53a3a80dd567b37e44949 WatchSource:0}: Error finding container 1541b2e20299962a4cda3d75823857dfab7331bc8ad53a3a80dd567b37e44949: Status 404 returned error can't find the container with id 1541b2e20299962a4cda3d75823857dfab7331bc8ad53a3a80dd567b37e44949 Nov 29 08:01:32 crc kubenswrapper[4795]: I1129 08:01:32.308041 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-8688fc7b8-5sbpb"] Nov 29 08:01:32 crc 
kubenswrapper[4795]: I1129 08:01:32.412523 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-dr8f4" event={"ID":"3ff17662-f7b1-4870-9ef2-18a81fdb5d73","Type":"ContainerStarted","Data":"f96cfcbd9259b44c988c36ee19ef76b253db55e5fa37dc9eef9ce66d18f36521"} Nov 29 08:01:32 crc kubenswrapper[4795]: I1129 08:01:32.414041 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-rd6w8" event={"ID":"cc9825dd-340b-4dda-ab8a-91d95ee67678","Type":"ContainerStarted","Data":"695ce53de17706b207aca7523d17bc5aac129cfb52cd5cba3a6aa35ce34bd30a"} Nov 29 08:01:32 crc kubenswrapper[4795]: I1129 08:01:32.415734 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-spfsh" event={"ID":"d4e1473d-8426-452b-8030-764680cc5a20","Type":"ContainerStarted","Data":"bd623ce0cf3505e596a3a29553bd0943bb27b123fc9ca136e9b3f33987cb5f42"} Nov 29 08:01:32 crc kubenswrapper[4795]: I1129 08:01:32.417650 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl" event={"ID":"543b785e-bdb9-4582-b9dd-8a987b5129f6","Type":"ContainerStarted","Data":"1541b2e20299962a4cda3d75823857dfab7331bc8ad53a3a80dd567b37e44949"} Nov 29 08:01:34 crc kubenswrapper[4795]: I1129 08:01:34.441480 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-8688fc7b8-5sbpb" event={"ID":"868e2666-5606-4891-ba11-ac02f852c48d","Type":"ContainerStarted","Data":"f27f110c67e3205a5f1ad312721168202c7f9f454008ad412ea323678dbf764f"} Nov 29 08:01:35 crc kubenswrapper[4795]: I1129 08:01:35.455500 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-k6h4m" 
event={"ID":"4eea915e-348e-48a3-b5e1-767648dac19d","Type":"ContainerStarted","Data":"a2dc6bdbafb03ba8af8104b233bd6bfc5adf44517aa8994a7e60266498804329"} Nov 29 08:01:36 crc kubenswrapper[4795]: E1129 08:01:36.102154 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Nov 29 08:01:36 crc kubenswrapper[4795]: E1129 08:01:36.102361 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jnsjn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
designate-operator-controller-manager-78b4bc895b-8mk4s_openstack-operators(7bfb1aea-ee92-4033-ba5f-c7ac79c60d0e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 08:01:36 crc kubenswrapper[4795]: E1129 08:01:36.104220 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-8mk4s" podUID="7bfb1aea-ee92-4033-ba5f-c7ac79c60d0e" Nov 29 08:01:36 crc kubenswrapper[4795]: I1129 08:01:36.471958 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-8688fc7b8-5sbpb" event={"ID":"868e2666-5606-4891-ba11-ac02f852c48d","Type":"ContainerStarted","Data":"07d88316624e67632bcf8053cd9c090f8f5dd2bc7785acc5b0f579781f59f31f"} Nov 29 08:01:36 crc kubenswrapper[4795]: I1129 08:01:36.472515 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-8688fc7b8-5sbpb" Nov 29 08:01:36 crc kubenswrapper[4795]: I1129 08:01:36.528583 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-8688fc7b8-5sbpb" podStartSLOduration=61.528561265 podStartE2EDuration="1m1.528561265s" podCreationTimestamp="2025-11-29 08:00:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:01:36.525002153 +0000 UTC m=+1342.500577963" watchObservedRunningTime="2025-11-29 08:01:36.528561265 +0000 UTC m=+1342.504137055" Nov 29 08:01:36 crc kubenswrapper[4795]: E1129 08:01:36.592502 4795 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-998648c74-d5d4r" podUID="f2367076-6d52-4047-908c-c1e32c4ca2c4" Nov 29 08:01:36 crc kubenswrapper[4795]: E1129 08:01:36.849500 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-cn6z7" podUID="1d6dd43f-eee0-4257-adbb-a53218a86eb9" Nov 29 08:01:36 crc kubenswrapper[4795]: E1129 08:01:36.927862 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:7d6ca59745ac48971cbc2d72b53fe413144fa5c0c21f2ef1d7aaf1291851e501: Get \\\"https://quay.io/v2/openstack-k8s-operators/nova-operator/blobs/sha256:7d6ca59745ac48971cbc2d72b53fe413144fa5c0c21f2ef1d7aaf1291851e501\\\": context canceled\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-966jn" podUID="8de1af69-5c67-4669-83d5-02de0ecd32d3" Nov 29 08:01:36 crc kubenswrapper[4795]: E1129 08:01:36.955479 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-d486dbd66-bt6tr" podUID="c75b943b-8281-4fbd-a94a-3d5db0475d5d" Nov 29 08:01:36 crc kubenswrapper[4795]: E1129 08:01:36.993428 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/manila-operator-controller-manager-6546668bfd-mvrzj" podUID="36512615-d21b-4484-af03-ffa1d325883b" Nov 29 08:01:37 crc kubenswrapper[4795]: E1129 08:01:37.003042 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-h74tt" podUID="78c2fefa-d0f0-4123-9513-231b2c3ca5fd" Nov 29 08:01:37 crc kubenswrapper[4795]: E1129 08:01:37.022658 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-klbwf" podUID="7bed5103-966d-43d3-92f1-73a2f8b6d551" Nov 29 08:01:37 crc kubenswrapper[4795]: E1129 08:01:37.086113 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-vdmph" podUID="a197813b-f5c3-49c1-81f6-b6b2e08e0617" Nov 29 08:01:37 crc kubenswrapper[4795]: E1129 08:01:37.225608 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-6fkwt" podUID="e56bb4ff-9936-4876-8616-0958e9892fa3" Nov 29 08:01:37 crc kubenswrapper[4795]: I1129 08:01:37.520395 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-h74tt" 
event={"ID":"78c2fefa-d0f0-4123-9513-231b2c3ca5fd","Type":"ContainerStarted","Data":"3a17563a5b4cbab9cf3d13591b8234b32161b0f20ab8f95893c3611c9867d399"} Nov 29 08:01:37 crc kubenswrapper[4795]: I1129 08:01:37.539357 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-d486dbd66-bt6tr" event={"ID":"c75b943b-8281-4fbd-a94a-3d5db0475d5d","Type":"ContainerStarted","Data":"81fb447fc55136b288440d167282c576457d6c3a53699f1668a67fe08e8bc2df"} Nov 29 08:01:37 crc kubenswrapper[4795]: I1129 08:01:37.556302 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-vdmph" event={"ID":"a197813b-f5c3-49c1-81f6-b6b2e08e0617","Type":"ContainerStarted","Data":"b5aa98e6b793443d21c716ef71f7e715db3ff10ff6a464434904eb056d189a4a"} Nov 29 08:01:37 crc kubenswrapper[4795]: E1129 08:01:37.557310 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.119:5001/openstack-k8s-operators/telemetry-operator:a78c62110b644e65ff67df5c075c61a894a42046\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-d486dbd66-bt6tr" podUID="c75b943b-8281-4fbd-a94a-3d5db0475d5d" Nov 29 08:01:37 crc kubenswrapper[4795]: I1129 08:01:37.615530 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-dr8f4" event={"ID":"3ff17662-f7b1-4870-9ef2-18a81fdb5d73","Type":"ContainerStarted","Data":"6914f6d145a6378db85a1c06ece65f7ef6bbaf64f971ba58d6d956c6a1556ec5"} Nov 29 08:01:37 crc kubenswrapper[4795]: I1129 08:01:37.616271 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-dr8f4" Nov 29 08:01:37 crc kubenswrapper[4795]: I1129 08:01:37.644450 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-dr8f4" Nov 29 08:01:37 crc kubenswrapper[4795]: I1129 08:01:37.698570 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-dr8f4" podStartSLOduration=5.535622921 podStartE2EDuration="1m3.698539794s" podCreationTimestamp="2025-11-29 08:00:34 +0000 UTC" firstStartedPulling="2025-11-29 08:00:38.738073252 +0000 UTC m=+1284.713649042" lastFinishedPulling="2025-11-29 08:01:36.900990115 +0000 UTC m=+1342.876565915" observedRunningTime="2025-11-29 08:01:37.669721525 +0000 UTC m=+1343.645297315" watchObservedRunningTime="2025-11-29 08:01:37.698539794 +0000 UTC m=+1343.674115584" Nov 29 08:01:37 crc kubenswrapper[4795]: I1129 08:01:37.700492 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-spfsh" event={"ID":"d4e1473d-8426-452b-8030-764680cc5a20","Type":"ContainerStarted","Data":"85ef3372023c73efca2baf2d4f7a65fcccfd95c765d53c2fb36df23b2194752c"} Nov 29 08:01:37 crc kubenswrapper[4795]: I1129 08:01:37.703539 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-spfsh" Nov 29 08:01:37 crc kubenswrapper[4795]: I1129 08:01:37.719372 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-spfsh" Nov 29 08:01:37 crc kubenswrapper[4795]: E1129 08:01:37.740121 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-78f8948974-9g674" podUID="f5e4dece-2c02-41a1-ac8b-ec46eb2a3d45" Nov 29 08:01:37 crc kubenswrapper[4795]: I1129 08:01:37.744160 4795 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-966jn" event={"ID":"8de1af69-5c67-4669-83d5-02de0ecd32d3","Type":"ContainerStarted","Data":"ea12926d4fd438ceea58435ce11143d3ba686e25378a8245c637c65bd39ea3ab"} Nov 29 08:01:37 crc kubenswrapper[4795]: I1129 08:01:37.773400 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-mvrzj" event={"ID":"36512615-d21b-4484-af03-ffa1d325883b","Type":"ContainerStarted","Data":"42dbb0fdd5251c14d2422151cadf949cef41d25b6778273b14ca7935a382a39b"} Nov 29 08:01:37 crc kubenswrapper[4795]: I1129 08:01:37.815971 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-k6h4m" event={"ID":"4eea915e-348e-48a3-b5e1-767648dac19d","Type":"ContainerStarted","Data":"2adc210e0391e182fc9e33fab541266617878dcead8e29a5163dc8c2b18326cc"} Nov 29 08:01:37 crc kubenswrapper[4795]: I1129 08:01:37.817262 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-k6h4m" Nov 29 08:01:37 crc kubenswrapper[4795]: I1129 08:01:37.824725 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-d5d4r" event={"ID":"f2367076-6d52-4047-908c-c1e32c4ca2c4","Type":"ContainerStarted","Data":"ef656bc037a723af7fcedf0fcf3989c2bbf8bbde07255f13286854576a5d7dca"} Nov 29 08:01:37 crc kubenswrapper[4795]: I1129 08:01:37.834327 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-spfsh" podStartSLOduration=5.14730958 podStartE2EDuration="1m3.834290204s" podCreationTimestamp="2025-11-29 08:00:34 +0000 UTC" firstStartedPulling="2025-11-29 08:00:37.532553323 +0000 UTC m=+1283.508129113" lastFinishedPulling="2025-11-29 08:01:36.219533947 +0000 UTC m=+1342.195109737" 
observedRunningTime="2025-11-29 08:01:37.787957107 +0000 UTC m=+1343.763532897" watchObservedRunningTime="2025-11-29 08:01:37.834290204 +0000 UTC m=+1343.809866004" Nov 29 08:01:37 crc kubenswrapper[4795]: I1129 08:01:37.858316 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-6fkwt" event={"ID":"e56bb4ff-9936-4876-8616-0958e9892fa3","Type":"ContainerStarted","Data":"8db496115ea175f2e583097b32b6fce01d506fca805481dde3d06da216ac63d6"} Nov 29 08:01:37 crc kubenswrapper[4795]: I1129 08:01:37.885458 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-cn6z7" event={"ID":"1d6dd43f-eee0-4257-adbb-a53218a86eb9","Type":"ContainerStarted","Data":"5891af45405e50df20145e4a372db69387f7a17356b0acd196b0bba3c08170bb"} Nov 29 08:01:37 crc kubenswrapper[4795]: E1129 08:01:37.949749 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-xjns5" podUID="36a279fc-25f1-407e-a1c6-6b8689d68cd2" Nov 29 08:01:37 crc kubenswrapper[4795]: I1129 08:01:37.952136 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-k6h4m" podStartSLOduration=5.949075505 podStartE2EDuration="1m2.952114855s" podCreationTimestamp="2025-11-29 08:00:35 +0000 UTC" firstStartedPulling="2025-11-29 08:00:39.725534522 +0000 UTC m=+1285.701110312" lastFinishedPulling="2025-11-29 08:01:36.728573872 +0000 UTC m=+1342.704149662" observedRunningTime="2025-11-29 08:01:37.933882606 +0000 UTC m=+1343.909458396" watchObservedRunningTime="2025-11-29 08:01:37.952114855 +0000 UTC m=+1343.927690635" Nov 29 08:01:37 crc kubenswrapper[4795]: E1129 08:01:37.958272 4795 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-qtbtd" podUID="86217734-815f-461c-a32d-8d744192003e" Nov 29 08:01:37 crc kubenswrapper[4795]: I1129 08:01:37.958729 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-klbwf" event={"ID":"7bed5103-966d-43d3-92f1-73a2f8b6d551","Type":"ContainerStarted","Data":"816e48e27dade9d79f0a22832f6bd6fc663d1c09b0fa93efc153fc00dced37dd"} Nov 29 08:01:37 crc kubenswrapper[4795]: E1129 08:01:37.963996 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-5q5dd" podUID="b53e6db5-a3fe-4f86-9b6a-49eb7d4629fd" Nov 29 08:01:38 crc kubenswrapper[4795]: E1129 08:01:38.084819 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-cmvt9" podUID="94d164fa-c521-4617-8338-1eba3ee1c31d" Nov 29 08:01:38 crc kubenswrapper[4795]: E1129 08:01:38.134378 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-njfdg" podUID="bfb2e88b-d2db-4afa-8511-e1a896eb9039" Nov 29 08:01:39 crc kubenswrapper[4795]: I1129 08:01:39.069906 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-h74tt" 
event={"ID":"78c2fefa-d0f0-4123-9513-231b2c3ca5fd","Type":"ContainerStarted","Data":"ca8a0d2bbfbeb4898b0f352336932a0048598594b95909ff534c8eb96ba94fa8"} Nov 29 08:01:39 crc kubenswrapper[4795]: I1129 08:01:39.070505 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-h74tt" Nov 29 08:01:39 crc kubenswrapper[4795]: I1129 08:01:39.081238 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-9g674" event={"ID":"f5e4dece-2c02-41a1-ac8b-ec46eb2a3d45","Type":"ContainerStarted","Data":"01323051bf66b1758639924111aaa656b70f3aeb3d366174ba7a02108c37d8d9"} Nov 29 08:01:39 crc kubenswrapper[4795]: I1129 08:01:39.089437 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-njfdg" event={"ID":"bfb2e88b-d2db-4afa-8511-e1a896eb9039","Type":"ContainerStarted","Data":"a7ebdb5bc2ad47b0b50185a4fbb1745358f449bba66f49365cf2e835c682d49e"} Nov 29 08:01:39 crc kubenswrapper[4795]: I1129 08:01:39.091627 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-qtbtd" event={"ID":"86217734-815f-461c-a32d-8d744192003e","Type":"ContainerStarted","Data":"ea8ff91b56fdb09acedb73accfb5be25f789bb93bf8ba05df43ae62f2a2ff675"} Nov 29 08:01:39 crc kubenswrapper[4795]: I1129 08:01:39.100556 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-h74tt" podStartSLOduration=5.520382444 podStartE2EDuration="1m4.100536641s" podCreationTimestamp="2025-11-29 08:00:35 +0000 UTC" firstStartedPulling="2025-11-29 08:00:39.698836733 +0000 UTC m=+1285.674412523" lastFinishedPulling="2025-11-29 08:01:38.27899094 +0000 UTC m=+1344.254566720" observedRunningTime="2025-11-29 08:01:39.100120399 +0000 UTC m=+1345.075696189" 
watchObservedRunningTime="2025-11-29 08:01:39.100536641 +0000 UTC m=+1345.076112431" Nov 29 08:01:39 crc kubenswrapper[4795]: I1129 08:01:39.116039 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-xjns5" event={"ID":"36a279fc-25f1-407e-a1c6-6b8689d68cd2","Type":"ContainerStarted","Data":"e54c1201e80ee5066372e3146bef482fe42adc52c0cedde6b58ca8339ca47636"} Nov 29 08:01:39 crc kubenswrapper[4795]: I1129 08:01:39.139532 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-cmvt9" event={"ID":"94d164fa-c521-4617-8338-1eba3ee1c31d","Type":"ContainerStarted","Data":"ce9c0ee2708627169e0a5b052296544a2ff1a5489c433b575b17ff293ac8b0e8"} Nov 29 08:01:39 crc kubenswrapper[4795]: I1129 08:01:39.181447 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-8mk4s" event={"ID":"7bfb1aea-ee92-4033-ba5f-c7ac79c60d0e","Type":"ContainerStarted","Data":"e9c252f6cc43ff693ee0a1b37ad2f148620755971be2091728d546683cbc4973"} Nov 29 08:01:39 crc kubenswrapper[4795]: I1129 08:01:39.181534 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-8mk4s" event={"ID":"7bfb1aea-ee92-4033-ba5f-c7ac79c60d0e","Type":"ContainerStarted","Data":"6181aadc3c843387ba39eced022e6a0bb0f3a47abb7c989e364dcf052ff3418f"} Nov 29 08:01:39 crc kubenswrapper[4795]: I1129 08:01:39.182250 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-8mk4s" Nov 29 08:01:39 crc kubenswrapper[4795]: I1129 08:01:39.192306 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-5q5dd" 
event={"ID":"b53e6db5-a3fe-4f86-9b6a-49eb7d4629fd","Type":"ContainerStarted","Data":"d2205618c2edfbc40bc2c38607fa968bba8194cea38944b7ab34f8fe1e146a71"} Nov 29 08:01:39 crc kubenswrapper[4795]: I1129 08:01:39.210768 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-vdmph" event={"ID":"a197813b-f5c3-49c1-81f6-b6b2e08e0617","Type":"ContainerStarted","Data":"00294f92916ba1afe365cee88d869ce350a2b3e9a0b4111eaae966b78e35421d"} Nov 29 08:01:39 crc kubenswrapper[4795]: I1129 08:01:39.213242 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-vdmph" Nov 29 08:01:39 crc kubenswrapper[4795]: I1129 08:01:39.333706 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-8mk4s" podStartSLOduration=6.19038197 podStartE2EDuration="1m5.333676851s" podCreationTimestamp="2025-11-29 08:00:34 +0000 UTC" firstStartedPulling="2025-11-29 08:00:38.367781562 +0000 UTC m=+1284.343357352" lastFinishedPulling="2025-11-29 08:01:37.511076443 +0000 UTC m=+1343.486652233" observedRunningTime="2025-11-29 08:01:39.296289787 +0000 UTC m=+1345.271865587" watchObservedRunningTime="2025-11-29 08:01:39.333676851 +0000 UTC m=+1345.309252641" Nov 29 08:01:39 crc kubenswrapper[4795]: I1129 08:01:39.411380 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-vdmph" podStartSLOduration=5.663800961 podStartE2EDuration="1m4.411339359s" podCreationTimestamp="2025-11-29 08:00:35 +0000 UTC" firstStartedPulling="2025-11-29 08:00:39.535302352 +0000 UTC m=+1285.510878142" lastFinishedPulling="2025-11-29 08:01:38.28284075 +0000 UTC m=+1344.258416540" observedRunningTime="2025-11-29 08:01:39.36566601 +0000 UTC m=+1345.341241800" watchObservedRunningTime="2025-11-29 08:01:39.411339359 +0000 
UTC m=+1345.386915159" Nov 29 08:01:40 crc kubenswrapper[4795]: I1129 08:01:40.221779 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-k6h4m" Nov 29 08:01:41 crc kubenswrapper[4795]: I1129 08:01:41.227097 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-6fkwt" event={"ID":"e56bb4ff-9936-4876-8616-0958e9892fa3","Type":"ContainerStarted","Data":"c41e72dae1efdcf28fd599504bfb89bfdaf71c4645d060c5505e1726b4249ac7"} Nov 29 08:01:41 crc kubenswrapper[4795]: I1129 08:01:41.227803 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-6fkwt" Nov 29 08:01:41 crc kubenswrapper[4795]: I1129 08:01:41.233179 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-cn6z7" event={"ID":"1d6dd43f-eee0-4257-adbb-a53218a86eb9","Type":"ContainerStarted","Data":"c801cb5a7e33aad7b23b641e1413ce0ff85a563294dc4fc4a845a920c4e594dc"} Nov 29 08:01:41 crc kubenswrapper[4795]: I1129 08:01:41.233288 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-cn6z7" Nov 29 08:01:41 crc kubenswrapper[4795]: I1129 08:01:41.242398 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-klbwf" event={"ID":"7bed5103-966d-43d3-92f1-73a2f8b6d551","Type":"ContainerStarted","Data":"7ab40cdd79ebc53e8a219c0030c977312d8914c8b28a537a418f08d3e499ccd9"} Nov 29 08:01:41 crc kubenswrapper[4795]: I1129 08:01:41.242567 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-klbwf" Nov 29 08:01:41 crc kubenswrapper[4795]: I1129 08:01:41.250419 4795 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-mvrzj" event={"ID":"36512615-d21b-4484-af03-ffa1d325883b","Type":"ContainerStarted","Data":"0b652405a807429ab4a3e90e024ef8226b89dedc0aecc61c0f62538a92bcf2c6"} Nov 29 08:01:41 crc kubenswrapper[4795]: I1129 08:01:41.250665 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-mvrzj" Nov 29 08:01:41 crc kubenswrapper[4795]: I1129 08:01:41.255792 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-d5d4r" event={"ID":"f2367076-6d52-4047-908c-c1e32c4ca2c4","Type":"ContainerStarted","Data":"4c6b5dcd491596c03ce5baf7dfc652052d887c905ad090c247948b90680a6feb"} Nov 29 08:01:41 crc kubenswrapper[4795]: I1129 08:01:41.256017 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-d5d4r" Nov 29 08:01:41 crc kubenswrapper[4795]: I1129 08:01:41.259526 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-6fkwt" podStartSLOduration=6.640193678 podStartE2EDuration="1m6.259494544s" podCreationTimestamp="2025-11-29 08:00:35 +0000 UTC" firstStartedPulling="2025-11-29 08:00:39.529448156 +0000 UTC m=+1285.505023946" lastFinishedPulling="2025-11-29 08:01:39.148749022 +0000 UTC m=+1345.124324812" observedRunningTime="2025-11-29 08:01:41.24601686 +0000 UTC m=+1347.221592660" watchObservedRunningTime="2025-11-29 08:01:41.259494544 +0000 UTC m=+1347.235070334" Nov 29 08:01:41 crc kubenswrapper[4795]: I1129 08:01:41.368054 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-cn6z7" podStartSLOduration=7.492939631 podStartE2EDuration="1m7.36802881s" 
podCreationTimestamp="2025-11-29 08:00:34 +0000 UTC" firstStartedPulling="2025-11-29 08:00:39.284543622 +0000 UTC m=+1285.260119412" lastFinishedPulling="2025-11-29 08:01:39.159632801 +0000 UTC m=+1345.135208591" observedRunningTime="2025-11-29 08:01:41.353009113 +0000 UTC m=+1347.328584903" watchObservedRunningTime="2025-11-29 08:01:41.36802881 +0000 UTC m=+1347.343604600" Nov 29 08:01:41 crc kubenswrapper[4795]: I1129 08:01:41.398439 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-klbwf" podStartSLOduration=5.177622251 podStartE2EDuration="1m7.398420544s" podCreationTimestamp="2025-11-29 08:00:34 +0000 UTC" firstStartedPulling="2025-11-29 08:00:36.928219527 +0000 UTC m=+1282.903795317" lastFinishedPulling="2025-11-29 08:01:39.14901782 +0000 UTC m=+1345.124593610" observedRunningTime="2025-11-29 08:01:41.396576362 +0000 UTC m=+1347.372152152" watchObservedRunningTime="2025-11-29 08:01:41.398420544 +0000 UTC m=+1347.373996334" Nov 29 08:01:41 crc kubenswrapper[4795]: I1129 08:01:41.426450 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-mvrzj" podStartSLOduration=7.856086647 podStartE2EDuration="1m7.42641401s" podCreationTimestamp="2025-11-29 08:00:34 +0000 UTC" firstStartedPulling="2025-11-29 08:00:39.578610454 +0000 UTC m=+1285.554186244" lastFinishedPulling="2025-11-29 08:01:39.148937827 +0000 UTC m=+1345.124513607" observedRunningTime="2025-11-29 08:01:41.421386287 +0000 UTC m=+1347.396962107" watchObservedRunningTime="2025-11-29 08:01:41.42641401 +0000 UTC m=+1347.401989800" Nov 29 08:01:41 crc kubenswrapper[4795]: I1129 08:01:41.452690 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-d5d4r" podStartSLOduration=6.887701846 podStartE2EDuration="1m6.452656987s" 
podCreationTimestamp="2025-11-29 08:00:35 +0000 UTC" firstStartedPulling="2025-11-29 08:00:39.584082389 +0000 UTC m=+1285.559658169" lastFinishedPulling="2025-11-29 08:01:39.14903752 +0000 UTC m=+1345.124613310" observedRunningTime="2025-11-29 08:01:41.444538636 +0000 UTC m=+1347.420114436" watchObservedRunningTime="2025-11-29 08:01:41.452656987 +0000 UTC m=+1347.428232797" Nov 29 08:01:45 crc kubenswrapper[4795]: I1129 08:01:45.109046 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-klbwf" Nov 29 08:01:45 crc kubenswrapper[4795]: E1129 08:01:45.278057 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-27n4r" podUID="7ce03a92-9abd-485c-b949-fb95301de889" Nov 29 08:01:45 crc kubenswrapper[4795]: I1129 08:01:45.310652 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-5q5dd" event={"ID":"b53e6db5-a3fe-4f86-9b6a-49eb7d4629fd","Type":"ContainerStarted","Data":"c22141bc7ede778d586dac2389f6a0eaf9bd7993a50afe43608080978c834519"} Nov 29 08:01:45 crc kubenswrapper[4795]: I1129 08:01:45.310899 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-5q5dd" Nov 29 08:01:45 crc kubenswrapper[4795]: I1129 08:01:45.319814 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-9g674" event={"ID":"f5e4dece-2c02-41a1-ac8b-ec46eb2a3d45","Type":"ContainerStarted","Data":"a88155759af3b1b3ed7025fa35f3495f72769231e670957dbb226efcd216a210"} Nov 29 08:01:45 
crc kubenswrapper[4795]: I1129 08:01:45.324613 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-d486dbd66-bt6tr" event={"ID":"c75b943b-8281-4fbd-a94a-3d5db0475d5d","Type":"ContainerStarted","Data":"034f274bc740730e8b2938938ffe1392113f66cf891a3df9fdbf6ad15923e0df"} Nov 29 08:01:45 crc kubenswrapper[4795]: I1129 08:01:45.330658 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-xjns5" event={"ID":"36a279fc-25f1-407e-a1c6-6b8689d68cd2","Type":"ContainerStarted","Data":"bd1fb0080f5d3640a0335f374037b6c50295275a933c0d22f5736d02762ec475"} Nov 29 08:01:45 crc kubenswrapper[4795]: I1129 08:01:45.330869 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-xjns5" Nov 29 08:01:45 crc kubenswrapper[4795]: I1129 08:01:45.331848 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-5q5dd" podStartSLOduration=5.249614648 podStartE2EDuration="1m11.331830745s" podCreationTimestamp="2025-11-29 08:00:34 +0000 UTC" firstStartedPulling="2025-11-29 08:00:38.766957283 +0000 UTC m=+1284.742533073" lastFinishedPulling="2025-11-29 08:01:44.84917338 +0000 UTC m=+1350.824749170" observedRunningTime="2025-11-29 08:01:45.330182038 +0000 UTC m=+1351.305757828" watchObservedRunningTime="2025-11-29 08:01:45.331830745 +0000 UTC m=+1351.307406535" Nov 29 08:01:45 crc kubenswrapper[4795]: I1129 08:01:45.354822 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-xjns5" podStartSLOduration=4.8250285250000005 podStartE2EDuration="1m11.354799178s" podCreationTimestamp="2025-11-29 08:00:34 +0000 UTC" firstStartedPulling="2025-11-29 08:00:38.309379222 +0000 UTC m=+1284.284955012" 
lastFinishedPulling="2025-11-29 08:01:44.839149875 +0000 UTC m=+1350.814725665" observedRunningTime="2025-11-29 08:01:45.348745256 +0000 UTC m=+1351.324321046" watchObservedRunningTime="2025-11-29 08:01:45.354799178 +0000 UTC m=+1351.330374968" Nov 29 08:01:45 crc kubenswrapper[4795]: I1129 08:01:45.521396 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-8mk4s" Nov 29 08:01:45 crc kubenswrapper[4795]: I1129 08:01:45.935570 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-cn6z7" Nov 29 08:01:46 crc kubenswrapper[4795]: I1129 08:01:46.363770 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-cmvt9" event={"ID":"94d164fa-c521-4617-8338-1eba3ee1c31d","Type":"ContainerStarted","Data":"6119e4e9ae051df1870c7d46310f15fa1afddee3a242103e15bf0a48399e018a"} Nov 29 08:01:46 crc kubenswrapper[4795]: I1129 08:01:46.363829 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-cmvt9" Nov 29 08:01:46 crc kubenswrapper[4795]: I1129 08:01:46.376819 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-966jn" event={"ID":"8de1af69-5c67-4669-83d5-02de0ecd32d3","Type":"ContainerStarted","Data":"4be0ff3d9b09fbff2716e587ee2abebb0ad36e2111b5ddd7a134ba8260da5648"} Nov 29 08:01:46 crc kubenswrapper[4795]: I1129 08:01:46.377774 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-966jn" Nov 29 08:01:46 crc kubenswrapper[4795]: I1129 08:01:46.400287 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-njfdg" 
event={"ID":"bfb2e88b-d2db-4afa-8511-e1a896eb9039","Type":"ContainerStarted","Data":"4a24397ee390b0232a7fdcc083367b9f0bafe4be47377c21e9c80001a01374ce"} Nov 29 08:01:46 crc kubenswrapper[4795]: I1129 08:01:46.400765 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-njfdg" Nov 29 08:01:46 crc kubenswrapper[4795]: I1129 08:01:46.441913 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl" event={"ID":"543b785e-bdb9-4582-b9dd-8a987b5129f6","Type":"ContainerStarted","Data":"1c066e8b5e09fa7d974f592e1197980b54b66ef321ae084b1413d34567c397d0"} Nov 29 08:01:46 crc kubenswrapper[4795]: I1129 08:01:46.442419 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl" event={"ID":"543b785e-bdb9-4582-b9dd-8a987b5129f6","Type":"ContainerStarted","Data":"eef0520714678c92308e091c198d0fb1d2cdadd25046ab8b17714f30b04b5f90"} Nov 29 08:01:46 crc kubenswrapper[4795]: I1129 08:01:46.443304 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl" Nov 29 08:01:46 crc kubenswrapper[4795]: I1129 08:01:46.445275 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-cmvt9" podStartSLOduration=7.207088581 podStartE2EDuration="1m12.445255286s" podCreationTimestamp="2025-11-29 08:00:34 +0000 UTC" firstStartedPulling="2025-11-29 08:00:39.583692538 +0000 UTC m=+1285.559268338" lastFinishedPulling="2025-11-29 08:01:44.821859253 +0000 UTC m=+1350.797435043" observedRunningTime="2025-11-29 08:01:46.442078156 +0000 UTC m=+1352.417653946" watchObservedRunningTime="2025-11-29 08:01:46.445255286 +0000 UTC m=+1352.420831066" Nov 29 08:01:46 crc 
kubenswrapper[4795]: I1129 08:01:46.445468 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-rd6w8" event={"ID":"cc9825dd-340b-4dda-ab8a-91d95ee67678","Type":"ContainerStarted","Data":"fcd04d7d0fd5427fa5e0d51c1dbccaa537aad28a519379d9577adcd79701bafb"} Nov 29 08:01:46 crc kubenswrapper[4795]: I1129 08:01:46.445495 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-rd6w8" event={"ID":"cc9825dd-340b-4dda-ab8a-91d95ee67678","Type":"ContainerStarted","Data":"b7f3a1f8c8fe500dd77a89f426edb7ea89745e3dc6daed9493b9d69d594030c2"} Nov 29 08:01:46 crc kubenswrapper[4795]: I1129 08:01:46.446069 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-rd6w8" Nov 29 08:01:46 crc kubenswrapper[4795]: I1129 08:01:46.475370 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-qtbtd" event={"ID":"86217734-815f-461c-a32d-8d744192003e","Type":"ContainerStarted","Data":"0e98b8f29a8e3c65521bcef4fd47fb7012717112891d2bf5d85594cb59b60911"} Nov 29 08:01:46 crc kubenswrapper[4795]: I1129 08:01:46.475922 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-9g674" Nov 29 08:01:46 crc kubenswrapper[4795]: I1129 08:01:46.476832 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-qtbtd" Nov 29 08:01:46 crc kubenswrapper[4795]: I1129 08:01:46.542745 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-mvrzj" Nov 29 08:01:46 crc kubenswrapper[4795]: I1129 08:01:46.543318 4795 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-njfdg" podStartSLOduration=5.706772043 podStartE2EDuration="1m11.543298124s" podCreationTimestamp="2025-11-29 08:00:35 +0000 UTC" firstStartedPulling="2025-11-29 08:00:39.011434315 +0000 UTC m=+1284.987010105" lastFinishedPulling="2025-11-29 08:01:44.847960396 +0000 UTC m=+1350.823536186" observedRunningTime="2025-11-29 08:01:46.507849776 +0000 UTC m=+1352.483425566" watchObservedRunningTime="2025-11-29 08:01:46.543298124 +0000 UTC m=+1352.518873914" Nov 29 08:01:46 crc kubenswrapper[4795]: I1129 08:01:46.547022 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-966jn" podStartSLOduration=6.96372361 podStartE2EDuration="1m12.547005589s" podCreationTimestamp="2025-11-29 08:00:34 +0000 UTC" firstStartedPulling="2025-11-29 08:00:39.293344032 +0000 UTC m=+1285.268919822" lastFinishedPulling="2025-11-29 08:01:44.876626011 +0000 UTC m=+1350.852201801" observedRunningTime="2025-11-29 08:01:46.544962591 +0000 UTC m=+1352.520538371" watchObservedRunningTime="2025-11-29 08:01:46.547005589 +0000 UTC m=+1352.522581379" Nov 29 08:01:46 crc kubenswrapper[4795]: I1129 08:01:46.636897 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-57548d458d-rd6w8" podStartSLOduration=59.515866896 podStartE2EDuration="1m12.636870775s" podCreationTimestamp="2025-11-29 08:00:34 +0000 UTC" firstStartedPulling="2025-11-29 08:01:31.728243143 +0000 UTC m=+1337.703818933" lastFinishedPulling="2025-11-29 08:01:44.849247022 +0000 UTC m=+1350.824822812" observedRunningTime="2025-11-29 08:01:46.605304717 +0000 UTC m=+1352.580880507" watchObservedRunningTime="2025-11-29 08:01:46.636870775 +0000 UTC m=+1352.612446565" Nov 29 08:01:46 crc kubenswrapper[4795]: I1129 08:01:46.733263 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl" podStartSLOduration=58.605925027 podStartE2EDuration="1m11.723812167s" podCreationTimestamp="2025-11-29 08:00:35 +0000 UTC" firstStartedPulling="2025-11-29 08:01:31.732305979 +0000 UTC m=+1337.707881769" lastFinishedPulling="2025-11-29 08:01:44.850193119 +0000 UTC m=+1350.825768909" observedRunningTime="2025-11-29 08:01:46.671829899 +0000 UTC m=+1352.647405689" watchObservedRunningTime="2025-11-29 08:01:46.723812167 +0000 UTC m=+1352.699387957" Nov 29 08:01:46 crc kubenswrapper[4795]: I1129 08:01:46.735898 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-d486dbd66-bt6tr" podStartSLOduration=6.629579946 podStartE2EDuration="1m11.735364126s" podCreationTimestamp="2025-11-29 08:00:35 +0000 UTC" firstStartedPulling="2025-11-29 08:00:39.704921326 +0000 UTC m=+1285.680497116" lastFinishedPulling="2025-11-29 08:01:44.810705506 +0000 UTC m=+1350.786281296" observedRunningTime="2025-11-29 08:01:46.704990112 +0000 UTC m=+1352.680565902" watchObservedRunningTime="2025-11-29 08:01:46.735364126 +0000 UTC m=+1352.710939926" Nov 29 08:01:46 crc kubenswrapper[4795]: I1129 08:01:46.738304 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-d5d4r" Nov 29 08:01:46 crc kubenswrapper[4795]: I1129 08:01:46.762709 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-9g674" podStartSLOduration=6.632807577 podStartE2EDuration="1m11.762686263s" podCreationTimestamp="2025-11-29 08:00:35 +0000 UTC" firstStartedPulling="2025-11-29 08:00:39.70929343 +0000 UTC m=+1285.684869220" lastFinishedPulling="2025-11-29 08:01:44.839172116 +0000 UTC m=+1350.814747906" observedRunningTime="2025-11-29 08:01:46.739706499 +0000 UTC m=+1352.715282289" 
watchObservedRunningTime="2025-11-29 08:01:46.762686263 +0000 UTC m=+1352.738262063" Nov 29 08:01:46 crc kubenswrapper[4795]: I1129 08:01:46.796409 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-qtbtd" podStartSLOduration=5.477861459 podStartE2EDuration="1m12.796383221s" podCreationTimestamp="2025-11-29 08:00:34 +0000 UTC" firstStartedPulling="2025-11-29 08:00:37.532920933 +0000 UTC m=+1283.508496723" lastFinishedPulling="2025-11-29 08:01:44.851442695 +0000 UTC m=+1350.827018485" observedRunningTime="2025-11-29 08:01:46.758763951 +0000 UTC m=+1352.734339741" watchObservedRunningTime="2025-11-29 08:01:46.796383221 +0000 UTC m=+1352.771959021" Nov 29 08:01:46 crc kubenswrapper[4795]: I1129 08:01:46.850561 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-vdmph" Nov 29 08:01:47 crc kubenswrapper[4795]: I1129 08:01:47.351407 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-6fkwt" Nov 29 08:01:47 crc kubenswrapper[4795]: I1129 08:01:47.655375 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-d486dbd66-bt6tr" Nov 29 08:01:47 crc kubenswrapper[4795]: I1129 08:01:47.816623 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-h74tt" Nov 29 08:01:48 crc kubenswrapper[4795]: I1129 08:01:48.797305 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-8688fc7b8-5sbpb" Nov 29 08:01:51 crc kubenswrapper[4795]: I1129 08:01:51.606705 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/infra-operator-controller-manager-57548d458d-rd6w8" Nov 29 08:01:51 crc kubenswrapper[4795]: I1129 08:01:51.809054 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl" Nov 29 08:01:55 crc kubenswrapper[4795]: I1129 08:01:55.086025 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-qtbtd" Nov 29 08:01:55 crc kubenswrapper[4795]: I1129 08:01:55.637535 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-xjns5" Nov 29 08:01:55 crc kubenswrapper[4795]: I1129 08:01:55.848705 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-5q5dd" Nov 29 08:01:56 crc kubenswrapper[4795]: I1129 08:01:56.162939 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-njfdg" Nov 29 08:01:56 crc kubenswrapper[4795]: I1129 08:01:56.503883 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-cmvt9" Nov 29 08:01:56 crc kubenswrapper[4795]: I1129 08:01:56.675766 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-966jn" Nov 29 08:01:57 crc kubenswrapper[4795]: I1129 08:01:57.497132 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-9g674" Nov 29 08:01:57 crc kubenswrapper[4795]: I1129 08:01:57.656156 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/telemetry-operator-controller-manager-d486dbd66-bt6tr" Nov 29 08:02:02 crc kubenswrapper[4795]: I1129 08:02:02.615702 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-27n4r" event={"ID":"7ce03a92-9abd-485c-b949-fb95301de889","Type":"ContainerStarted","Data":"4aaef69d72f475f0a4960424191d087d663bdc6c56f0bf114d6becce124f421b"} Nov 29 08:02:02 crc kubenswrapper[4795]: I1129 08:02:02.641098 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-27n4r" podStartSLOduration=5.058722663 podStartE2EDuration="1m26.641060112s" podCreationTimestamp="2025-11-29 08:00:36 +0000 UTC" firstStartedPulling="2025-11-29 08:00:39.727985072 +0000 UTC m=+1285.703560862" lastFinishedPulling="2025-11-29 08:02:01.310322521 +0000 UTC m=+1367.285898311" observedRunningTime="2025-11-29 08:02:02.637890552 +0000 UTC m=+1368.613466342" watchObservedRunningTime="2025-11-29 08:02:02.641060112 +0000 UTC m=+1368.616635932" Nov 29 08:02:15 crc kubenswrapper[4795]: I1129 08:02:15.253050 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-98nxw"] Nov 29 08:02:15 crc kubenswrapper[4795]: I1129 08:02:15.255524 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-98nxw" Nov 29 08:02:15 crc kubenswrapper[4795]: I1129 08:02:15.275888 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-98nxw"] Nov 29 08:02:15 crc kubenswrapper[4795]: I1129 08:02:15.296028 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p25vt\" (UniqueName: \"kubernetes.io/projected/e3051c02-6211-46f9-a843-9ea284aa2b72-kube-api-access-p25vt\") pod \"redhat-operators-98nxw\" (UID: \"e3051c02-6211-46f9-a843-9ea284aa2b72\") " pod="openshift-marketplace/redhat-operators-98nxw" Nov 29 08:02:15 crc kubenswrapper[4795]: I1129 08:02:15.296110 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3051c02-6211-46f9-a843-9ea284aa2b72-utilities\") pod \"redhat-operators-98nxw\" (UID: \"e3051c02-6211-46f9-a843-9ea284aa2b72\") " pod="openshift-marketplace/redhat-operators-98nxw" Nov 29 08:02:15 crc kubenswrapper[4795]: I1129 08:02:15.296158 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3051c02-6211-46f9-a843-9ea284aa2b72-catalog-content\") pod \"redhat-operators-98nxw\" (UID: \"e3051c02-6211-46f9-a843-9ea284aa2b72\") " pod="openshift-marketplace/redhat-operators-98nxw" Nov 29 08:02:15 crc kubenswrapper[4795]: I1129 08:02:15.397264 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p25vt\" (UniqueName: \"kubernetes.io/projected/e3051c02-6211-46f9-a843-9ea284aa2b72-kube-api-access-p25vt\") pod \"redhat-operators-98nxw\" (UID: \"e3051c02-6211-46f9-a843-9ea284aa2b72\") " pod="openshift-marketplace/redhat-operators-98nxw" Nov 29 08:02:15 crc kubenswrapper[4795]: I1129 08:02:15.397357 4795 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3051c02-6211-46f9-a843-9ea284aa2b72-utilities\") pod \"redhat-operators-98nxw\" (UID: \"e3051c02-6211-46f9-a843-9ea284aa2b72\") " pod="openshift-marketplace/redhat-operators-98nxw" Nov 29 08:02:15 crc kubenswrapper[4795]: I1129 08:02:15.397415 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3051c02-6211-46f9-a843-9ea284aa2b72-catalog-content\") pod \"redhat-operators-98nxw\" (UID: \"e3051c02-6211-46f9-a843-9ea284aa2b72\") " pod="openshift-marketplace/redhat-operators-98nxw" Nov 29 08:02:15 crc kubenswrapper[4795]: I1129 08:02:15.398026 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3051c02-6211-46f9-a843-9ea284aa2b72-utilities\") pod \"redhat-operators-98nxw\" (UID: \"e3051c02-6211-46f9-a843-9ea284aa2b72\") " pod="openshift-marketplace/redhat-operators-98nxw" Nov 29 08:02:15 crc kubenswrapper[4795]: I1129 08:02:15.398085 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3051c02-6211-46f9-a843-9ea284aa2b72-catalog-content\") pod \"redhat-operators-98nxw\" (UID: \"e3051c02-6211-46f9-a843-9ea284aa2b72\") " pod="openshift-marketplace/redhat-operators-98nxw" Nov 29 08:02:15 crc kubenswrapper[4795]: I1129 08:02:15.427954 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p25vt\" (UniqueName: \"kubernetes.io/projected/e3051c02-6211-46f9-a843-9ea284aa2b72-kube-api-access-p25vt\") pod \"redhat-operators-98nxw\" (UID: \"e3051c02-6211-46f9-a843-9ea284aa2b72\") " pod="openshift-marketplace/redhat-operators-98nxw" Nov 29 08:02:15 crc kubenswrapper[4795]: I1129 08:02:15.571787 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-98nxw" Nov 29 08:02:16 crc kubenswrapper[4795]: I1129 08:02:16.056025 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-98nxw"] Nov 29 08:02:16 crc kubenswrapper[4795]: I1129 08:02:16.749656 4795 generic.go:334] "Generic (PLEG): container finished" podID="e3051c02-6211-46f9-a843-9ea284aa2b72" containerID="e4d7dae02db0f29b4fddd52f88248351b3ae5896234552592165aae941a5f18a" exitCode=0 Nov 29 08:02:16 crc kubenswrapper[4795]: I1129 08:02:16.749717 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98nxw" event={"ID":"e3051c02-6211-46f9-a843-9ea284aa2b72","Type":"ContainerDied","Data":"e4d7dae02db0f29b4fddd52f88248351b3ae5896234552592165aae941a5f18a"} Nov 29 08:02:16 crc kubenswrapper[4795]: I1129 08:02:16.749783 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98nxw" event={"ID":"e3051c02-6211-46f9-a843-9ea284aa2b72","Type":"ContainerStarted","Data":"89ed45f1b8ed81f1038620f3741cf59d347d80396d14af72035ad631c97f82cf"} Nov 29 08:02:16 crc kubenswrapper[4795]: I1129 08:02:16.752523 4795 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 08:02:17 crc kubenswrapper[4795]: I1129 08:02:17.194332 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rpwls"] Nov 29 08:02:17 crc kubenswrapper[4795]: I1129 08:02:17.199962 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-rpwls" Nov 29 08:02:17 crc kubenswrapper[4795]: I1129 08:02:17.214713 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 29 08:02:17 crc kubenswrapper[4795]: I1129 08:02:17.214713 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 29 08:02:17 crc kubenswrapper[4795]: I1129 08:02:17.214713 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 29 08:02:17 crc kubenswrapper[4795]: I1129 08:02:17.216712 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-l6rmd" Nov 29 08:02:17 crc kubenswrapper[4795]: I1129 08:02:17.224610 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rpwls"] Nov 29 08:02:17 crc kubenswrapper[4795]: I1129 08:02:17.230574 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94t9h\" (UniqueName: \"kubernetes.io/projected/d8e7e29e-48b1-4cd6-bdb7-81e790e6d06b-kube-api-access-94t9h\") pod \"dnsmasq-dns-675f4bcbfc-rpwls\" (UID: \"d8e7e29e-48b1-4cd6-bdb7-81e790e6d06b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rpwls" Nov 29 08:02:17 crc kubenswrapper[4795]: I1129 08:02:17.230634 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8e7e29e-48b1-4cd6-bdb7-81e790e6d06b-config\") pod \"dnsmasq-dns-675f4bcbfc-rpwls\" (UID: \"d8e7e29e-48b1-4cd6-bdb7-81e790e6d06b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rpwls" Nov 29 08:02:17 crc kubenswrapper[4795]: I1129 08:02:17.332236 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94t9h\" (UniqueName: \"kubernetes.io/projected/d8e7e29e-48b1-4cd6-bdb7-81e790e6d06b-kube-api-access-94t9h\") pod 
\"dnsmasq-dns-675f4bcbfc-rpwls\" (UID: \"d8e7e29e-48b1-4cd6-bdb7-81e790e6d06b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rpwls" Nov 29 08:02:17 crc kubenswrapper[4795]: I1129 08:02:17.332293 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8e7e29e-48b1-4cd6-bdb7-81e790e6d06b-config\") pod \"dnsmasq-dns-675f4bcbfc-rpwls\" (UID: \"d8e7e29e-48b1-4cd6-bdb7-81e790e6d06b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rpwls" Nov 29 08:02:17 crc kubenswrapper[4795]: I1129 08:02:17.333304 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8e7e29e-48b1-4cd6-bdb7-81e790e6d06b-config\") pod \"dnsmasq-dns-675f4bcbfc-rpwls\" (UID: \"d8e7e29e-48b1-4cd6-bdb7-81e790e6d06b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rpwls" Nov 29 08:02:17 crc kubenswrapper[4795]: I1129 08:02:17.373120 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94t9h\" (UniqueName: \"kubernetes.io/projected/d8e7e29e-48b1-4cd6-bdb7-81e790e6d06b-kube-api-access-94t9h\") pod \"dnsmasq-dns-675f4bcbfc-rpwls\" (UID: \"d8e7e29e-48b1-4cd6-bdb7-81e790e6d06b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rpwls" Nov 29 08:02:17 crc kubenswrapper[4795]: I1129 08:02:17.443992 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-52w68"] Nov 29 08:02:17 crc kubenswrapper[4795]: I1129 08:02:17.447021 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-52w68" Nov 29 08:02:17 crc kubenswrapper[4795]: I1129 08:02:17.452478 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 29 08:02:17 crc kubenswrapper[4795]: I1129 08:02:17.497388 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-52w68"] Nov 29 08:02:17 crc kubenswrapper[4795]: I1129 08:02:17.536148 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-rpwls" Nov 29 08:02:17 crc kubenswrapper[4795]: I1129 08:02:17.642008 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkwbz\" (UniqueName: \"kubernetes.io/projected/8159fb0f-c9c2-40f0-9851-ed3b69d07813-kube-api-access-lkwbz\") pod \"dnsmasq-dns-78dd6ddcc-52w68\" (UID: \"8159fb0f-c9c2-40f0-9851-ed3b69d07813\") " pod="openstack/dnsmasq-dns-78dd6ddcc-52w68" Nov 29 08:02:17 crc kubenswrapper[4795]: I1129 08:02:17.642087 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8159fb0f-c9c2-40f0-9851-ed3b69d07813-config\") pod \"dnsmasq-dns-78dd6ddcc-52w68\" (UID: \"8159fb0f-c9c2-40f0-9851-ed3b69d07813\") " pod="openstack/dnsmasq-dns-78dd6ddcc-52w68" Nov 29 08:02:17 crc kubenswrapper[4795]: I1129 08:02:17.642111 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8159fb0f-c9c2-40f0-9851-ed3b69d07813-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-52w68\" (UID: \"8159fb0f-c9c2-40f0-9851-ed3b69d07813\") " pod="openstack/dnsmasq-dns-78dd6ddcc-52w68" Nov 29 08:02:17 crc kubenswrapper[4795]: I1129 08:02:17.744039 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkwbz\" (UniqueName: 
\"kubernetes.io/projected/8159fb0f-c9c2-40f0-9851-ed3b69d07813-kube-api-access-lkwbz\") pod \"dnsmasq-dns-78dd6ddcc-52w68\" (UID: \"8159fb0f-c9c2-40f0-9851-ed3b69d07813\") " pod="openstack/dnsmasq-dns-78dd6ddcc-52w68" Nov 29 08:02:17 crc kubenswrapper[4795]: I1129 08:02:17.744518 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8159fb0f-c9c2-40f0-9851-ed3b69d07813-config\") pod \"dnsmasq-dns-78dd6ddcc-52w68\" (UID: \"8159fb0f-c9c2-40f0-9851-ed3b69d07813\") " pod="openstack/dnsmasq-dns-78dd6ddcc-52w68" Nov 29 08:02:17 crc kubenswrapper[4795]: I1129 08:02:17.744555 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8159fb0f-c9c2-40f0-9851-ed3b69d07813-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-52w68\" (UID: \"8159fb0f-c9c2-40f0-9851-ed3b69d07813\") " pod="openstack/dnsmasq-dns-78dd6ddcc-52w68" Nov 29 08:02:17 crc kubenswrapper[4795]: I1129 08:02:17.745850 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8159fb0f-c9c2-40f0-9851-ed3b69d07813-config\") pod \"dnsmasq-dns-78dd6ddcc-52w68\" (UID: \"8159fb0f-c9c2-40f0-9851-ed3b69d07813\") " pod="openstack/dnsmasq-dns-78dd6ddcc-52w68" Nov 29 08:02:17 crc kubenswrapper[4795]: I1129 08:02:17.747732 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8159fb0f-c9c2-40f0-9851-ed3b69d07813-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-52w68\" (UID: \"8159fb0f-c9c2-40f0-9851-ed3b69d07813\") " pod="openstack/dnsmasq-dns-78dd6ddcc-52w68" Nov 29 08:02:17 crc kubenswrapper[4795]: I1129 08:02:17.772514 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkwbz\" (UniqueName: \"kubernetes.io/projected/8159fb0f-c9c2-40f0-9851-ed3b69d07813-kube-api-access-lkwbz\") pod \"dnsmasq-dns-78dd6ddcc-52w68\" (UID: 
\"8159fb0f-c9c2-40f0-9851-ed3b69d07813\") " pod="openstack/dnsmasq-dns-78dd6ddcc-52w68" Nov 29 08:02:17 crc kubenswrapper[4795]: I1129 08:02:17.791198 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-52w68" Nov 29 08:02:18 crc kubenswrapper[4795]: I1129 08:02:18.055096 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rpwls"] Nov 29 08:02:18 crc kubenswrapper[4795]: I1129 08:02:18.318923 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-52w68"] Nov 29 08:02:18 crc kubenswrapper[4795]: W1129 08:02:18.324375 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8159fb0f_c9c2_40f0_9851_ed3b69d07813.slice/crio-be13660f62b7cea3e7011c3e0998e3facaeb3b3161723ddbe615f3dcb7af6619 WatchSource:0}: Error finding container be13660f62b7cea3e7011c3e0998e3facaeb3b3161723ddbe615f3dcb7af6619: Status 404 returned error can't find the container with id be13660f62b7cea3e7011c3e0998e3facaeb3b3161723ddbe615f3dcb7af6619 Nov 29 08:02:18 crc kubenswrapper[4795]: I1129 08:02:18.775981 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-rpwls" event={"ID":"d8e7e29e-48b1-4cd6-bdb7-81e790e6d06b","Type":"ContainerStarted","Data":"14bae4e53e3667f619bb4ea185678b3ec3aa757d8b8df76036de113020aec22c"} Nov 29 08:02:18 crc kubenswrapper[4795]: I1129 08:02:18.778006 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-52w68" event={"ID":"8159fb0f-c9c2-40f0-9851-ed3b69d07813","Type":"ContainerStarted","Data":"be13660f62b7cea3e7011c3e0998e3facaeb3b3161723ddbe615f3dcb7af6619"} Nov 29 08:02:18 crc kubenswrapper[4795]: I1129 08:02:18.781541 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98nxw" 
event={"ID":"e3051c02-6211-46f9-a843-9ea284aa2b72","Type":"ContainerStarted","Data":"61a9341113703b0ae330375c227bb721dda5ee9717950107916c6085012626a6"} Nov 29 08:02:19 crc kubenswrapper[4795]: I1129 08:02:19.810527 4795 generic.go:334] "Generic (PLEG): container finished" podID="e3051c02-6211-46f9-a843-9ea284aa2b72" containerID="61a9341113703b0ae330375c227bb721dda5ee9717950107916c6085012626a6" exitCode=0 Nov 29 08:02:19 crc kubenswrapper[4795]: I1129 08:02:19.810857 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98nxw" event={"ID":"e3051c02-6211-46f9-a843-9ea284aa2b72","Type":"ContainerDied","Data":"61a9341113703b0ae330375c227bb721dda5ee9717950107916c6085012626a6"} Nov 29 08:02:19 crc kubenswrapper[4795]: I1129 08:02:19.865560 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rpwls"] Nov 29 08:02:19 crc kubenswrapper[4795]: I1129 08:02:19.913427 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8qck9"] Nov 29 08:02:19 crc kubenswrapper[4795]: I1129 08:02:19.915196 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-8qck9" Nov 29 08:02:19 crc kubenswrapper[4795]: I1129 08:02:19.925254 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8qck9"] Nov 29 08:02:20 crc kubenswrapper[4795]: I1129 08:02:20.099956 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3632abbe-bcd4-4af0-a0b4-05a09b26ccd0-config\") pod \"dnsmasq-dns-666b6646f7-8qck9\" (UID: \"3632abbe-bcd4-4af0-a0b4-05a09b26ccd0\") " pod="openstack/dnsmasq-dns-666b6646f7-8qck9" Nov 29 08:02:20 crc kubenswrapper[4795]: I1129 08:02:20.100224 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dckt\" (UniqueName: \"kubernetes.io/projected/3632abbe-bcd4-4af0-a0b4-05a09b26ccd0-kube-api-access-2dckt\") pod \"dnsmasq-dns-666b6646f7-8qck9\" (UID: \"3632abbe-bcd4-4af0-a0b4-05a09b26ccd0\") " pod="openstack/dnsmasq-dns-666b6646f7-8qck9" Nov 29 08:02:20 crc kubenswrapper[4795]: I1129 08:02:20.100380 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3632abbe-bcd4-4af0-a0b4-05a09b26ccd0-dns-svc\") pod \"dnsmasq-dns-666b6646f7-8qck9\" (UID: \"3632abbe-bcd4-4af0-a0b4-05a09b26ccd0\") " pod="openstack/dnsmasq-dns-666b6646f7-8qck9" Nov 29 08:02:20 crc kubenswrapper[4795]: I1129 08:02:20.202263 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3632abbe-bcd4-4af0-a0b4-05a09b26ccd0-dns-svc\") pod \"dnsmasq-dns-666b6646f7-8qck9\" (UID: \"3632abbe-bcd4-4af0-a0b4-05a09b26ccd0\") " pod="openstack/dnsmasq-dns-666b6646f7-8qck9" Nov 29 08:02:20 crc kubenswrapper[4795]: I1129 08:02:20.202448 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3632abbe-bcd4-4af0-a0b4-05a09b26ccd0-config\") pod \"dnsmasq-dns-666b6646f7-8qck9\" (UID: \"3632abbe-bcd4-4af0-a0b4-05a09b26ccd0\") " pod="openstack/dnsmasq-dns-666b6646f7-8qck9" Nov 29 08:02:20 crc kubenswrapper[4795]: I1129 08:02:20.202481 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dckt\" (UniqueName: \"kubernetes.io/projected/3632abbe-bcd4-4af0-a0b4-05a09b26ccd0-kube-api-access-2dckt\") pod \"dnsmasq-dns-666b6646f7-8qck9\" (UID: \"3632abbe-bcd4-4af0-a0b4-05a09b26ccd0\") " pod="openstack/dnsmasq-dns-666b6646f7-8qck9" Nov 29 08:02:20 crc kubenswrapper[4795]: I1129 08:02:20.204551 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3632abbe-bcd4-4af0-a0b4-05a09b26ccd0-dns-svc\") pod \"dnsmasq-dns-666b6646f7-8qck9\" (UID: \"3632abbe-bcd4-4af0-a0b4-05a09b26ccd0\") " pod="openstack/dnsmasq-dns-666b6646f7-8qck9" Nov 29 08:02:20 crc kubenswrapper[4795]: I1129 08:02:20.204707 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3632abbe-bcd4-4af0-a0b4-05a09b26ccd0-config\") pod \"dnsmasq-dns-666b6646f7-8qck9\" (UID: \"3632abbe-bcd4-4af0-a0b4-05a09b26ccd0\") " pod="openstack/dnsmasq-dns-666b6646f7-8qck9" Nov 29 08:02:20 crc kubenswrapper[4795]: I1129 08:02:20.247070 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dckt\" (UniqueName: \"kubernetes.io/projected/3632abbe-bcd4-4af0-a0b4-05a09b26ccd0-kube-api-access-2dckt\") pod \"dnsmasq-dns-666b6646f7-8qck9\" (UID: \"3632abbe-bcd4-4af0-a0b4-05a09b26ccd0\") " pod="openstack/dnsmasq-dns-666b6646f7-8qck9" Nov 29 08:02:20 crc kubenswrapper[4795]: I1129 08:02:20.261450 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-8qck9" Nov 29 08:02:20 crc kubenswrapper[4795]: I1129 08:02:20.390356 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-52w68"] Nov 29 08:02:20 crc kubenswrapper[4795]: I1129 08:02:20.410522 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2dl4b"] Nov 29 08:02:20 crc kubenswrapper[4795]: I1129 08:02:20.418527 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2dl4b" Nov 29 08:02:20 crc kubenswrapper[4795]: I1129 08:02:20.511489 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2dl4b"] Nov 29 08:02:20 crc kubenswrapper[4795]: I1129 08:02:20.516120 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jd9s\" (UniqueName: \"kubernetes.io/projected/d4434e71-afe0-404d-aeec-6c080ff6a767-kube-api-access-9jd9s\") pod \"dnsmasq-dns-57d769cc4f-2dl4b\" (UID: \"d4434e71-afe0-404d-aeec-6c080ff6a767\") " pod="openstack/dnsmasq-dns-57d769cc4f-2dl4b" Nov 29 08:02:20 crc kubenswrapper[4795]: I1129 08:02:20.516235 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4434e71-afe0-404d-aeec-6c080ff6a767-config\") pod \"dnsmasq-dns-57d769cc4f-2dl4b\" (UID: \"d4434e71-afe0-404d-aeec-6c080ff6a767\") " pod="openstack/dnsmasq-dns-57d769cc4f-2dl4b" Nov 29 08:02:20 crc kubenswrapper[4795]: I1129 08:02:20.516308 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4434e71-afe0-404d-aeec-6c080ff6a767-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-2dl4b\" (UID: \"d4434e71-afe0-404d-aeec-6c080ff6a767\") " pod="openstack/dnsmasq-dns-57d769cc4f-2dl4b" Nov 29 08:02:20 crc kubenswrapper[4795]: I1129 
08:02:20.619011 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4434e71-afe0-404d-aeec-6c080ff6a767-config\") pod \"dnsmasq-dns-57d769cc4f-2dl4b\" (UID: \"d4434e71-afe0-404d-aeec-6c080ff6a767\") " pod="openstack/dnsmasq-dns-57d769cc4f-2dl4b" Nov 29 08:02:20 crc kubenswrapper[4795]: I1129 08:02:20.619171 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4434e71-afe0-404d-aeec-6c080ff6a767-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-2dl4b\" (UID: \"d4434e71-afe0-404d-aeec-6c080ff6a767\") " pod="openstack/dnsmasq-dns-57d769cc4f-2dl4b" Nov 29 08:02:20 crc kubenswrapper[4795]: I1129 08:02:20.619275 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jd9s\" (UniqueName: \"kubernetes.io/projected/d4434e71-afe0-404d-aeec-6c080ff6a767-kube-api-access-9jd9s\") pod \"dnsmasq-dns-57d769cc4f-2dl4b\" (UID: \"d4434e71-afe0-404d-aeec-6c080ff6a767\") " pod="openstack/dnsmasq-dns-57d769cc4f-2dl4b" Nov 29 08:02:20 crc kubenswrapper[4795]: I1129 08:02:20.620903 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4434e71-afe0-404d-aeec-6c080ff6a767-config\") pod \"dnsmasq-dns-57d769cc4f-2dl4b\" (UID: \"d4434e71-afe0-404d-aeec-6c080ff6a767\") " pod="openstack/dnsmasq-dns-57d769cc4f-2dl4b" Nov 29 08:02:20 crc kubenswrapper[4795]: I1129 08:02:20.629239 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4434e71-afe0-404d-aeec-6c080ff6a767-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-2dl4b\" (UID: \"d4434e71-afe0-404d-aeec-6c080ff6a767\") " pod="openstack/dnsmasq-dns-57d769cc4f-2dl4b" Nov 29 08:02:20 crc kubenswrapper[4795]: I1129 08:02:20.644662 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jd9s\" 
(UniqueName: \"kubernetes.io/projected/d4434e71-afe0-404d-aeec-6c080ff6a767-kube-api-access-9jd9s\") pod \"dnsmasq-dns-57d769cc4f-2dl4b\" (UID: \"d4434e71-afe0-404d-aeec-6c080ff6a767\") " pod="openstack/dnsmasq-dns-57d769cc4f-2dl4b" Nov 29 08:02:20 crc kubenswrapper[4795]: I1129 08:02:20.782981 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2dl4b" Nov 29 08:02:20 crc kubenswrapper[4795]: I1129 08:02:20.987879 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8qck9"] Nov 29 08:02:21 crc kubenswrapper[4795]: W1129 08:02:21.003728 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3632abbe_bcd4_4af0_a0b4_05a09b26ccd0.slice/crio-5dd44097ed8b8bb42b31f7e975699df50ab893433863ce491cb084dac3608ecc WatchSource:0}: Error finding container 5dd44097ed8b8bb42b31f7e975699df50ab893433863ce491cb084dac3608ecc: Status 404 returned error can't find the container with id 5dd44097ed8b8bb42b31f7e975699df50ab893433863ce491cb084dac3608ecc Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.038132 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.042345 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.045924 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.050533 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.050571 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.050709 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.050745 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.050976 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.051327 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-qxp9m" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.065652 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.128733 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkc5t\" (UniqueName: \"kubernetes.io/projected/74169c45-99e0-4179-a18b-07a1c2cade8b-kube-api-access-tkc5t\") pod \"rabbitmq-server-0\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " pod="openstack/rabbitmq-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.129215 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/74169c45-99e0-4179-a18b-07a1c2cade8b-config-data\") pod \"rabbitmq-server-0\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " pod="openstack/rabbitmq-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.129242 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/74169c45-99e0-4179-a18b-07a1c2cade8b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " pod="openstack/rabbitmq-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.129270 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/74169c45-99e0-4179-a18b-07a1c2cade8b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " pod="openstack/rabbitmq-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.129324 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/74169c45-99e0-4179-a18b-07a1c2cade8b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " pod="openstack/rabbitmq-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.129400 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " pod="openstack/rabbitmq-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.129439 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/74169c45-99e0-4179-a18b-07a1c2cade8b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " pod="openstack/rabbitmq-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.129462 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/74169c45-99e0-4179-a18b-07a1c2cade8b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " pod="openstack/rabbitmq-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.129490 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/74169c45-99e0-4179-a18b-07a1c2cade8b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " pod="openstack/rabbitmq-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.129515 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/74169c45-99e0-4179-a18b-07a1c2cade8b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " pod="openstack/rabbitmq-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.129551 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/74169c45-99e0-4179-a18b-07a1c2cade8b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " pod="openstack/rabbitmq-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.231679 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: 
\"74169c45-99e0-4179-a18b-07a1c2cade8b\") " pod="openstack/rabbitmq-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.231748 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/74169c45-99e0-4179-a18b-07a1c2cade8b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " pod="openstack/rabbitmq-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.231786 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/74169c45-99e0-4179-a18b-07a1c2cade8b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " pod="openstack/rabbitmq-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.231815 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/74169c45-99e0-4179-a18b-07a1c2cade8b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " pod="openstack/rabbitmq-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.231839 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/74169c45-99e0-4179-a18b-07a1c2cade8b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " pod="openstack/rabbitmq-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.231885 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/74169c45-99e0-4179-a18b-07a1c2cade8b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " pod="openstack/rabbitmq-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.231993 4795 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkc5t\" (UniqueName: \"kubernetes.io/projected/74169c45-99e0-4179-a18b-07a1c2cade8b-kube-api-access-tkc5t\") pod \"rabbitmq-server-0\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " pod="openstack/rabbitmq-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.232023 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74169c45-99e0-4179-a18b-07a1c2cade8b-config-data\") pod \"rabbitmq-server-0\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " pod="openstack/rabbitmq-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.232040 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/74169c45-99e0-4179-a18b-07a1c2cade8b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " pod="openstack/rabbitmq-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.232063 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/74169c45-99e0-4179-a18b-07a1c2cade8b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " pod="openstack/rabbitmq-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.232116 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/74169c45-99e0-4179-a18b-07a1c2cade8b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " pod="openstack/rabbitmq-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.232993 4795 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.233162 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/74169c45-99e0-4179-a18b-07a1c2cade8b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " pod="openstack/rabbitmq-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.236465 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/74169c45-99e0-4179-a18b-07a1c2cade8b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " pod="openstack/rabbitmq-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.236883 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/74169c45-99e0-4179-a18b-07a1c2cade8b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " pod="openstack/rabbitmq-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.237176 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74169c45-99e0-4179-a18b-07a1c2cade8b-config-data\") pod \"rabbitmq-server-0\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " pod="openstack/rabbitmq-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.237374 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/74169c45-99e0-4179-a18b-07a1c2cade8b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " 
pod="openstack/rabbitmq-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.243679 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/74169c45-99e0-4179-a18b-07a1c2cade8b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " pod="openstack/rabbitmq-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.244427 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/74169c45-99e0-4179-a18b-07a1c2cade8b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " pod="openstack/rabbitmq-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.244706 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/74169c45-99e0-4179-a18b-07a1c2cade8b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " pod="openstack/rabbitmq-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.253725 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/74169c45-99e0-4179-a18b-07a1c2cade8b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " pod="openstack/rabbitmq-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.261530 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkc5t\" (UniqueName: \"kubernetes.io/projected/74169c45-99e0-4179-a18b-07a1c2cade8b-kube-api-access-tkc5t\") pod \"rabbitmq-server-0\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " pod="openstack/rabbitmq-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.270994 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " pod="openstack/rabbitmq-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.374083 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.461135 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.463407 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.476297 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.476532 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.476537 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.476582 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.476671 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.476955 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-82kxm" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.477048 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.500162 4795 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.537965 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.538022 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.538067 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.538087 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.538124 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.538146 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.538159 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlllp\" (UniqueName: \"kubernetes.io/projected/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-kube-api-access-mlllp\") pod \"rabbitmq-cell1-server-0\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.538201 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.538230 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.538259 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.538285 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.640141 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.640500 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.640546 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlllp\" (UniqueName: \"kubernetes.io/projected/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-kube-api-access-mlllp\") pod \"rabbitmq-cell1-server-0\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.640613 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.640657 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.640693 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.640723 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.640762 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.640797 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.640871 4795 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.640902 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.644822 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.648877 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.648966 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.649140 4795 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.650315 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.650875 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.655372 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.661771 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.667226 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.671940 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.697459 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlllp\" (UniqueName: \"kubernetes.io/projected/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-kube-api-access-mlllp\") pod \"rabbitmq-cell1-server-0\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.772602 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2dl4b"] Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.804625 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.830525 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:02:21 crc kubenswrapper[4795]: W1129 08:02:21.855029 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4434e71_afe0_404d_aeec_6c080ff6a767.slice/crio-65b06c8127f9e1fed8aa26de0e17e9ea3223119c5a50737b0c13dd0d6cbf3e42 WatchSource:0}: Error finding container 65b06c8127f9e1fed8aa26de0e17e9ea3223119c5a50737b0c13dd0d6cbf3e42: Status 404 returned error can't find the container with id 65b06c8127f9e1fed8aa26de0e17e9ea3223119c5a50737b0c13dd0d6cbf3e42 Nov 29 08:02:21 crc kubenswrapper[4795]: I1129 08:02:21.882686 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-8qck9" event={"ID":"3632abbe-bcd4-4af0-a0b4-05a09b26ccd0","Type":"ContainerStarted","Data":"5dd44097ed8b8bb42b31f7e975699df50ab893433863ce491cb084dac3608ecc"} Nov 29 08:02:22 crc kubenswrapper[4795]: I1129 08:02:22.466311 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 29 08:02:22 crc kubenswrapper[4795]: I1129 08:02:22.751481 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 08:02:22 crc kubenswrapper[4795]: I1129 08:02:22.909996 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-2dl4b" event={"ID":"d4434e71-afe0-404d-aeec-6c080ff6a767","Type":"ContainerStarted","Data":"65b06c8127f9e1fed8aa26de0e17e9ea3223119c5a50737b0c13dd0d6cbf3e42"} Nov 29 08:02:22 crc kubenswrapper[4795]: I1129 08:02:22.915133 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"74169c45-99e0-4179-a18b-07a1c2cade8b","Type":"ContainerStarted","Data":"7ac1f3c96059def2813bbfd912f88e988c365dff2096208217e4df7af9d4757d"} Nov 29 08:02:22 crc kubenswrapper[4795]: I1129 08:02:22.915227 4795 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/openstack-galera-0"] Nov 29 08:02:22 crc kubenswrapper[4795]: I1129 08:02:22.920701 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 29 08:02:22 crc kubenswrapper[4795]: I1129 08:02:22.936937 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-fkvvf" Nov 29 08:02:22 crc kubenswrapper[4795]: I1129 08:02:22.938316 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 29 08:02:22 crc kubenswrapper[4795]: I1129 08:02:22.938474 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 29 08:02:22 crc kubenswrapper[4795]: I1129 08:02:22.938645 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 29 08:02:22 crc kubenswrapper[4795]: I1129 08:02:22.945560 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"28c13f65-78c4-4d4a-8960-7ef17a4c93e8","Type":"ContainerStarted","Data":"207431e043164d72eb566e2f5f1c219efb51c4b30289fa78fd050d06d779fead"} Nov 29 08:02:22 crc kubenswrapper[4795]: I1129 08:02:22.955119 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 29 08:02:22 crc kubenswrapper[4795]: I1129 08:02:22.982273 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 29 08:02:23 crc kubenswrapper[4795]: I1129 08:02:23.029831 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"a2fd879b-6f46-437e-acf0-c60e879af239\") " pod="openstack/openstack-galera-0" Nov 29 08:02:23 crc kubenswrapper[4795]: I1129 08:02:23.029890 4795 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2fd879b-6f46-437e-acf0-c60e879af239-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a2fd879b-6f46-437e-acf0-c60e879af239\") " pod="openstack/openstack-galera-0" Nov 29 08:02:23 crc kubenswrapper[4795]: I1129 08:02:23.029916 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qvds\" (UniqueName: \"kubernetes.io/projected/a2fd879b-6f46-437e-acf0-c60e879af239-kube-api-access-7qvds\") pod \"openstack-galera-0\" (UID: \"a2fd879b-6f46-437e-acf0-c60e879af239\") " pod="openstack/openstack-galera-0" Nov 29 08:02:23 crc kubenswrapper[4795]: I1129 08:02:23.029958 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2fd879b-6f46-437e-acf0-c60e879af239-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a2fd879b-6f46-437e-acf0-c60e879af239\") " pod="openstack/openstack-galera-0" Nov 29 08:02:23 crc kubenswrapper[4795]: I1129 08:02:23.030038 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a2fd879b-6f46-437e-acf0-c60e879af239-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a2fd879b-6f46-437e-acf0-c60e879af239\") " pod="openstack/openstack-galera-0" Nov 29 08:02:23 crc kubenswrapper[4795]: I1129 08:02:23.030070 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a2fd879b-6f46-437e-acf0-c60e879af239-kolla-config\") pod \"openstack-galera-0\" (UID: \"a2fd879b-6f46-437e-acf0-c60e879af239\") " pod="openstack/openstack-galera-0" Nov 29 08:02:23 crc kubenswrapper[4795]: I1129 08:02:23.030109 4795 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2fd879b-6f46-437e-acf0-c60e879af239-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a2fd879b-6f46-437e-acf0-c60e879af239\") " pod="openstack/openstack-galera-0" Nov 29 08:02:23 crc kubenswrapper[4795]: I1129 08:02:23.030157 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a2fd879b-6f46-437e-acf0-c60e879af239-config-data-default\") pod \"openstack-galera-0\" (UID: \"a2fd879b-6f46-437e-acf0-c60e879af239\") " pod="openstack/openstack-galera-0" Nov 29 08:02:23 crc kubenswrapper[4795]: I1129 08:02:23.133712 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"a2fd879b-6f46-437e-acf0-c60e879af239\") " pod="openstack/openstack-galera-0" Nov 29 08:02:23 crc kubenswrapper[4795]: I1129 08:02:23.133761 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2fd879b-6f46-437e-acf0-c60e879af239-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a2fd879b-6f46-437e-acf0-c60e879af239\") " pod="openstack/openstack-galera-0" Nov 29 08:02:23 crc kubenswrapper[4795]: I1129 08:02:23.133788 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qvds\" (UniqueName: \"kubernetes.io/projected/a2fd879b-6f46-437e-acf0-c60e879af239-kube-api-access-7qvds\") pod \"openstack-galera-0\" (UID: \"a2fd879b-6f46-437e-acf0-c60e879af239\") " pod="openstack/openstack-galera-0" Nov 29 08:02:23 crc kubenswrapper[4795]: I1129 08:02:23.133830 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a2fd879b-6f46-437e-acf0-c60e879af239-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a2fd879b-6f46-437e-acf0-c60e879af239\") " pod="openstack/openstack-galera-0" Nov 29 08:02:23 crc kubenswrapper[4795]: I1129 08:02:23.133872 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a2fd879b-6f46-437e-acf0-c60e879af239-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a2fd879b-6f46-437e-acf0-c60e879af239\") " pod="openstack/openstack-galera-0" Nov 29 08:02:23 crc kubenswrapper[4795]: I1129 08:02:23.133905 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a2fd879b-6f46-437e-acf0-c60e879af239-kolla-config\") pod \"openstack-galera-0\" (UID: \"a2fd879b-6f46-437e-acf0-c60e879af239\") " pod="openstack/openstack-galera-0" Nov 29 08:02:23 crc kubenswrapper[4795]: I1129 08:02:23.133942 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2fd879b-6f46-437e-acf0-c60e879af239-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a2fd879b-6f46-437e-acf0-c60e879af239\") " pod="openstack/openstack-galera-0" Nov 29 08:02:23 crc kubenswrapper[4795]: I1129 08:02:23.133985 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a2fd879b-6f46-437e-acf0-c60e879af239-config-data-default\") pod \"openstack-galera-0\" (UID: \"a2fd879b-6f46-437e-acf0-c60e879af239\") " pod="openstack/openstack-galera-0" Nov 29 08:02:23 crc kubenswrapper[4795]: I1129 08:02:23.134725 4795 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"a2fd879b-6f46-437e-acf0-c60e879af239\") 
device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Nov 29 08:02:23 crc kubenswrapper[4795]: I1129 08:02:23.138543 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2fd879b-6f46-437e-acf0-c60e879af239-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a2fd879b-6f46-437e-acf0-c60e879af239\") " pod="openstack/openstack-galera-0" Nov 29 08:02:23 crc kubenswrapper[4795]: I1129 08:02:23.138622 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a2fd879b-6f46-437e-acf0-c60e879af239-config-data-default\") pod \"openstack-galera-0\" (UID: \"a2fd879b-6f46-437e-acf0-c60e879af239\") " pod="openstack/openstack-galera-0" Nov 29 08:02:23 crc kubenswrapper[4795]: I1129 08:02:23.138713 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a2fd879b-6f46-437e-acf0-c60e879af239-kolla-config\") pod \"openstack-galera-0\" (UID: \"a2fd879b-6f46-437e-acf0-c60e879af239\") " pod="openstack/openstack-galera-0" Nov 29 08:02:23 crc kubenswrapper[4795]: I1129 08:02:23.139757 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a2fd879b-6f46-437e-acf0-c60e879af239-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a2fd879b-6f46-437e-acf0-c60e879af239\") " pod="openstack/openstack-galera-0" Nov 29 08:02:23 crc kubenswrapper[4795]: I1129 08:02:23.144235 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2fd879b-6f46-437e-acf0-c60e879af239-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a2fd879b-6f46-437e-acf0-c60e879af239\") " pod="openstack/openstack-galera-0" Nov 29 08:02:23 crc kubenswrapper[4795]: I1129 08:02:23.151636 4795 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2fd879b-6f46-437e-acf0-c60e879af239-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a2fd879b-6f46-437e-acf0-c60e879af239\") " pod="openstack/openstack-galera-0" Nov 29 08:02:23 crc kubenswrapper[4795]: I1129 08:02:23.157332 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qvds\" (UniqueName: \"kubernetes.io/projected/a2fd879b-6f46-437e-acf0-c60e879af239-kube-api-access-7qvds\") pod \"openstack-galera-0\" (UID: \"a2fd879b-6f46-437e-acf0-c60e879af239\") " pod="openstack/openstack-galera-0" Nov 29 08:02:23 crc kubenswrapper[4795]: I1129 08:02:23.187069 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"a2fd879b-6f46-437e-acf0-c60e879af239\") " pod="openstack/openstack-galera-0" Nov 29 08:02:23 crc kubenswrapper[4795]: I1129 08:02:23.262067 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 29 08:02:23 crc kubenswrapper[4795]: I1129 08:02:23.971978 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98nxw" event={"ID":"e3051c02-6211-46f9-a843-9ea284aa2b72","Type":"ContainerStarted","Data":"c99084b0cecf35106fde8985c9da78a0c45f4be50196f0996a8fb98372b7b630"} Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.025794 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-98nxw" podStartSLOduration=2.654126826 podStartE2EDuration="9.02576062s" podCreationTimestamp="2025-11-29 08:02:15 +0000 UTC" firstStartedPulling="2025-11-29 08:02:16.752291891 +0000 UTC m=+1382.727867681" lastFinishedPulling="2025-11-29 08:02:23.123925685 +0000 UTC m=+1389.099501475" observedRunningTime="2025-11-29 08:02:24.003368523 +0000 UTC m=+1389.978944333" watchObservedRunningTime="2025-11-29 08:02:24.02576062 +0000 UTC m=+1390.001336410" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.123782 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.126871 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.133272 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.133634 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-r47hl" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.133773 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.134577 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.156856 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.168789 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.208001 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsftz\" (UniqueName: \"kubernetes.io/projected/a9c19857-e09b-4c26-bf5a-a64655eaa024-kube-api-access-xsftz\") pod \"openstack-cell1-galera-0\" (UID: \"a9c19857-e09b-4c26-bf5a-a64655eaa024\") " pod="openstack/openstack-cell1-galera-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.208133 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9c19857-e09b-4c26-bf5a-a64655eaa024-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a9c19857-e09b-4c26-bf5a-a64655eaa024\") " pod="openstack/openstack-cell1-galera-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.208192 4795 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a9c19857-e09b-4c26-bf5a-a64655eaa024-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a9c19857-e09b-4c26-bf5a-a64655eaa024\") " pod="openstack/openstack-cell1-galera-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.208359 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9c19857-e09b-4c26-bf5a-a64655eaa024-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a9c19857-e09b-4c26-bf5a-a64655eaa024\") " pod="openstack/openstack-cell1-galera-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.208430 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a9c19857-e09b-4c26-bf5a-a64655eaa024-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a9c19857-e09b-4c26-bf5a-a64655eaa024\") " pod="openstack/openstack-cell1-galera-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.208619 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a9c19857-e09b-4c26-bf5a-a64655eaa024\") " pod="openstack/openstack-cell1-galera-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.208748 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9c19857-e09b-4c26-bf5a-a64655eaa024-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a9c19857-e09b-4c26-bf5a-a64655eaa024\") " pod="openstack/openstack-cell1-galera-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.208837 4795 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a9c19857-e09b-4c26-bf5a-a64655eaa024-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a9c19857-e09b-4c26-bf5a-a64655eaa024\") " pod="openstack/openstack-cell1-galera-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.312159 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9c19857-e09b-4c26-bf5a-a64655eaa024-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a9c19857-e09b-4c26-bf5a-a64655eaa024\") " pod="openstack/openstack-cell1-galera-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.312239 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a9c19857-e09b-4c26-bf5a-a64655eaa024-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a9c19857-e09b-4c26-bf5a-a64655eaa024\") " pod="openstack/openstack-cell1-galera-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.312274 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9c19857-e09b-4c26-bf5a-a64655eaa024-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a9c19857-e09b-4c26-bf5a-a64655eaa024\") " pod="openstack/openstack-cell1-galera-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.312303 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a9c19857-e09b-4c26-bf5a-a64655eaa024-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a9c19857-e09b-4c26-bf5a-a64655eaa024\") " pod="openstack/openstack-cell1-galera-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.312372 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a9c19857-e09b-4c26-bf5a-a64655eaa024\") " pod="openstack/openstack-cell1-galera-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.312405 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9c19857-e09b-4c26-bf5a-a64655eaa024-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a9c19857-e09b-4c26-bf5a-a64655eaa024\") " pod="openstack/openstack-cell1-galera-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.312434 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a9c19857-e09b-4c26-bf5a-a64655eaa024-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a9c19857-e09b-4c26-bf5a-a64655eaa024\") " pod="openstack/openstack-cell1-galera-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.312515 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsftz\" (UniqueName: \"kubernetes.io/projected/a9c19857-e09b-4c26-bf5a-a64655eaa024-kube-api-access-xsftz\") pod \"openstack-cell1-galera-0\" (UID: \"a9c19857-e09b-4c26-bf5a-a64655eaa024\") " pod="openstack/openstack-cell1-galera-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.313860 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a9c19857-e09b-4c26-bf5a-a64655eaa024-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a9c19857-e09b-4c26-bf5a-a64655eaa024\") " pod="openstack/openstack-cell1-galera-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.314433 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/a9c19857-e09b-4c26-bf5a-a64655eaa024-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a9c19857-e09b-4c26-bf5a-a64655eaa024\") " pod="openstack/openstack-cell1-galera-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.314546 4795 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a9c19857-e09b-4c26-bf5a-a64655eaa024\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-cell1-galera-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.325569 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9c19857-e09b-4c26-bf5a-a64655eaa024-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a9c19857-e09b-4c26-bf5a-a64655eaa024\") " pod="openstack/openstack-cell1-galera-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.329806 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a9c19857-e09b-4c26-bf5a-a64655eaa024-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a9c19857-e09b-4c26-bf5a-a64655eaa024\") " pod="openstack/openstack-cell1-galera-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.333568 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9c19857-e09b-4c26-bf5a-a64655eaa024-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a9c19857-e09b-4c26-bf5a-a64655eaa024\") " pod="openstack/openstack-cell1-galera-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.336242 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9c19857-e09b-4c26-bf5a-a64655eaa024-galera-tls-certs\") pod 
\"openstack-cell1-galera-0\" (UID: \"a9c19857-e09b-4c26-bf5a-a64655eaa024\") " pod="openstack/openstack-cell1-galera-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.354659 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.356650 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.358552 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsftz\" (UniqueName: \"kubernetes.io/projected/a9c19857-e09b-4c26-bf5a-a64655eaa024-kube-api-access-xsftz\") pod \"openstack-cell1-galera-0\" (UID: \"a9c19857-e09b-4c26-bf5a-a64655eaa024\") " pod="openstack/openstack-cell1-galera-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.367012 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-jp8bl" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.367395 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.370218 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.382146 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.400808 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a9c19857-e09b-4c26-bf5a-a64655eaa024\") " pod="openstack/openstack-cell1-galera-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.419359 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab20548b-7f96-4eb8-aa44-80425459c0ed-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ab20548b-7f96-4eb8-aa44-80425459c0ed\") " pod="openstack/memcached-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.419508 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ab20548b-7f96-4eb8-aa44-80425459c0ed-kolla-config\") pod \"memcached-0\" (UID: \"ab20548b-7f96-4eb8-aa44-80425459c0ed\") " pod="openstack/memcached-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.419662 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab20548b-7f96-4eb8-aa44-80425459c0ed-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ab20548b-7f96-4eb8-aa44-80425459c0ed\") " pod="openstack/memcached-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.419703 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab20548b-7f96-4eb8-aa44-80425459c0ed-config-data\") pod \"memcached-0\" (UID: \"ab20548b-7f96-4eb8-aa44-80425459c0ed\") " pod="openstack/memcached-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.419846 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cdvq\" (UniqueName: \"kubernetes.io/projected/ab20548b-7f96-4eb8-aa44-80425459c0ed-kube-api-access-7cdvq\") pod \"memcached-0\" (UID: \"ab20548b-7f96-4eb8-aa44-80425459c0ed\") " pod="openstack/memcached-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.520398 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.524881 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab20548b-7f96-4eb8-aa44-80425459c0ed-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ab20548b-7f96-4eb8-aa44-80425459c0ed\") " pod="openstack/memcached-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.524963 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab20548b-7f96-4eb8-aa44-80425459c0ed-config-data\") pod \"memcached-0\" (UID: \"ab20548b-7f96-4eb8-aa44-80425459c0ed\") " pod="openstack/memcached-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.524996 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cdvq\" (UniqueName: \"kubernetes.io/projected/ab20548b-7f96-4eb8-aa44-80425459c0ed-kube-api-access-7cdvq\") pod \"memcached-0\" (UID: \"ab20548b-7f96-4eb8-aa44-80425459c0ed\") " pod="openstack/memcached-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.525107 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab20548b-7f96-4eb8-aa44-80425459c0ed-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ab20548b-7f96-4eb8-aa44-80425459c0ed\") " pod="openstack/memcached-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.525136 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ab20548b-7f96-4eb8-aa44-80425459c0ed-kolla-config\") pod \"memcached-0\" (UID: \"ab20548b-7f96-4eb8-aa44-80425459c0ed\") " pod="openstack/memcached-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.528702 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ab20548b-7f96-4eb8-aa44-80425459c0ed-kolla-config\") pod \"memcached-0\" (UID: \"ab20548b-7f96-4eb8-aa44-80425459c0ed\") " pod="openstack/memcached-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.529554 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab20548b-7f96-4eb8-aa44-80425459c0ed-config-data\") pod \"memcached-0\" (UID: \"ab20548b-7f96-4eb8-aa44-80425459c0ed\") " pod="openstack/memcached-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.544473 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab20548b-7f96-4eb8-aa44-80425459c0ed-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ab20548b-7f96-4eb8-aa44-80425459c0ed\") " pod="openstack/memcached-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.548997 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab20548b-7f96-4eb8-aa44-80425459c0ed-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ab20548b-7f96-4eb8-aa44-80425459c0ed\") " pod="openstack/memcached-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.565475 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cdvq\" (UniqueName: \"kubernetes.io/projected/ab20548b-7f96-4eb8-aa44-80425459c0ed-kube-api-access-7cdvq\") pod \"memcached-0\" (UID: \"ab20548b-7f96-4eb8-aa44-80425459c0ed\") " pod="openstack/memcached-0" Nov 29 08:02:24 crc kubenswrapper[4795]: I1129 08:02:24.777782 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 29 08:02:25 crc kubenswrapper[4795]: I1129 08:02:25.020929 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a2fd879b-6f46-437e-acf0-c60e879af239","Type":"ContainerStarted","Data":"4c827ce78ca02ff1a635cc58da0ced3f7dc594e2742e9dfa3433587cca3fca5a"} Nov 29 08:02:25 crc kubenswrapper[4795]: I1129 08:02:25.572154 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-98nxw" Nov 29 08:02:25 crc kubenswrapper[4795]: I1129 08:02:25.573168 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-98nxw" Nov 29 08:02:25 crc kubenswrapper[4795]: I1129 08:02:25.737560 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 29 08:02:25 crc kubenswrapper[4795]: I1129 08:02:25.994023 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 29 08:02:26 crc kubenswrapper[4795]: I1129 08:02:26.056865 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ab20548b-7f96-4eb8-aa44-80425459c0ed","Type":"ContainerStarted","Data":"70dc312c3ac393f622ad3ddfa5cccf4bf79b8057208b768792b61ab3af28da5f"} Nov 29 08:02:26 crc kubenswrapper[4795]: W1129 08:02:26.070099 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9c19857_e09b_4c26_bf5a_a64655eaa024.slice/crio-72f7dda3dcd27fce30b865914e41c574f3f80997b9b93c24467170ea7103987b WatchSource:0}: Error finding container 72f7dda3dcd27fce30b865914e41c574f3f80997b9b93c24467170ea7103987b: Status 404 returned error can't find the container with id 72f7dda3dcd27fce30b865914e41c574f3f80997b9b93c24467170ea7103987b Nov 29 08:02:26 crc kubenswrapper[4795]: I1129 08:02:26.548115 4795 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/kube-state-metrics-0"] Nov 29 08:02:26 crc kubenswrapper[4795]: I1129 08:02:26.552170 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 29 08:02:26 crc kubenswrapper[4795]: I1129 08:02:26.559356 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-mp5kr" Nov 29 08:02:26 crc kubenswrapper[4795]: I1129 08:02:26.575342 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 29 08:02:26 crc kubenswrapper[4795]: I1129 08:02:26.636634 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv6vx\" (UniqueName: \"kubernetes.io/projected/1bbcaf05-b2f0-4958-b862-50bb9d1f62b5-kube-api-access-nv6vx\") pod \"kube-state-metrics-0\" (UID: \"1bbcaf05-b2f0-4958-b862-50bb9d1f62b5\") " pod="openstack/kube-state-metrics-0" Nov 29 08:02:26 crc kubenswrapper[4795]: I1129 08:02:26.714951 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-98nxw" podUID="e3051c02-6211-46f9-a843-9ea284aa2b72" containerName="registry-server" probeResult="failure" output=< Nov 29 08:02:26 crc kubenswrapper[4795]: timeout: failed to connect service ":50051" within 1s Nov 29 08:02:26 crc kubenswrapper[4795]: > Nov 29 08:02:26 crc kubenswrapper[4795]: I1129 08:02:26.740914 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv6vx\" (UniqueName: \"kubernetes.io/projected/1bbcaf05-b2f0-4958-b862-50bb9d1f62b5-kube-api-access-nv6vx\") pod \"kube-state-metrics-0\" (UID: \"1bbcaf05-b2f0-4958-b862-50bb9d1f62b5\") " pod="openstack/kube-state-metrics-0" Nov 29 08:02:26 crc kubenswrapper[4795]: I1129 08:02:26.786697 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv6vx\" (UniqueName: 
\"kubernetes.io/projected/1bbcaf05-b2f0-4958-b862-50bb9d1f62b5-kube-api-access-nv6vx\") pod \"kube-state-metrics-0\" (UID: \"1bbcaf05-b2f0-4958-b862-50bb9d1f62b5\") " pod="openstack/kube-state-metrics-0" Nov 29 08:02:26 crc kubenswrapper[4795]: I1129 08:02:26.900235 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 29 08:02:27 crc kubenswrapper[4795]: I1129 08:02:27.109918 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a9c19857-e09b-4c26-bf5a-a64655eaa024","Type":"ContainerStarted","Data":"72f7dda3dcd27fce30b865914e41c574f3f80997b9b93c24467170ea7103987b"} Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:27.645851 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-q86dd"] Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:27.649311 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-q86dd" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:27.692432 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-q86dd"] Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:27.693293 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5194485f-a306-493d-a1a3-f33030371413-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-q86dd\" (UID: \"5194485f-a306-493d-a1a3-f33030371413\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-q86dd" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:27.693453 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl7ls\" (UniqueName: 
\"kubernetes.io/projected/5194485f-a306-493d-a1a3-f33030371413-kube-api-access-zl7ls\") pod \"observability-ui-dashboards-7d5fb4cbfb-q86dd\" (UID: \"5194485f-a306-493d-a1a3-f33030371413\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-q86dd" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:27.707316 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-jvmvp" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:27.708229 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:27.806082 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5194485f-a306-493d-a1a3-f33030371413-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-q86dd\" (UID: \"5194485f-a306-493d-a1a3-f33030371413\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-q86dd" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:27.817743 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl7ls\" (UniqueName: \"kubernetes.io/projected/5194485f-a306-493d-a1a3-f33030371413-kube-api-access-zl7ls\") pod \"observability-ui-dashboards-7d5fb4cbfb-q86dd\" (UID: \"5194485f-a306-493d-a1a3-f33030371413\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-q86dd" Nov 29 08:02:32 crc kubenswrapper[4795]: E1129 08:02:27.818052 4795 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Nov 29 08:02:32 crc kubenswrapper[4795]: E1129 08:02:27.818137 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5194485f-a306-493d-a1a3-f33030371413-serving-cert podName:5194485f-a306-493d-a1a3-f33030371413 nodeName:}" failed. 
No retries permitted until 2025-11-29 08:02:28.31811566 +0000 UTC m=+1394.293691450 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5194485f-a306-493d-a1a3-f33030371413-serving-cert") pod "observability-ui-dashboards-7d5fb4cbfb-q86dd" (UID: "5194485f-a306-493d-a1a3-f33030371413") : secret "observability-ui-dashboards" not found Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:27.886732 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl7ls\" (UniqueName: \"kubernetes.io/projected/5194485f-a306-493d-a1a3-f33030371413-kube-api-access-zl7ls\") pod \"observability-ui-dashboards-7d5fb4cbfb-q86dd\" (UID: \"5194485f-a306-493d-a1a3-f33030371413\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-q86dd" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.076276 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7995bbc7cc-g5h8j"] Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.077519 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7995bbc7cc-g5h8j" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.089170 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7995bbc7cc-g5h8j"] Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.112638 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.115753 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.123913 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.124154 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.125908 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.126346 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.126621 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-w8tlv" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.136839 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.140023 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.248713 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c039262e-df43-4664-af38-f0645377e9f6-console-serving-cert\") pod \"console-7995bbc7cc-g5h8j\" (UID: \"c039262e-df43-4664-af38-f0645377e9f6\") " pod="openshift-console/console-7995bbc7cc-g5h8j" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.249147 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/063373b7-6898-409d-b792-d770a8f6f021-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"063373b7-6898-409d-b792-d770a8f6f021\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.249177 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c039262e-df43-4664-af38-f0645377e9f6-console-config\") pod \"console-7995bbc7cc-g5h8j\" (UID: \"c039262e-df43-4664-af38-f0645377e9f6\") " pod="openshift-console/console-7995bbc7cc-g5h8j" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.249244 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/063373b7-6898-409d-b792-d770a8f6f021-config\") pod \"prometheus-metric-storage-0\" (UID: \"063373b7-6898-409d-b792-d770a8f6f021\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.249266 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"prometheus-metric-storage-0\" (UID: \"063373b7-6898-409d-b792-d770a8f6f021\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.249290 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/063373b7-6898-409d-b792-d770a8f6f021-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"063373b7-6898-409d-b792-d770a8f6f021\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.249579 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/c039262e-df43-4664-af38-f0645377e9f6-console-oauth-config\") pod \"console-7995bbc7cc-g5h8j\" (UID: \"c039262e-df43-4664-af38-f0645377e9f6\") " pod="openshift-console/console-7995bbc7cc-g5h8j" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.249613 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c039262e-df43-4664-af38-f0645377e9f6-trusted-ca-bundle\") pod \"console-7995bbc7cc-g5h8j\" (UID: \"c039262e-df43-4664-af38-f0645377e9f6\") " pod="openshift-console/console-7995bbc7cc-g5h8j" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.249633 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/063373b7-6898-409d-b792-d770a8f6f021-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"063373b7-6898-409d-b792-d770a8f6f021\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.249669 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/063373b7-6898-409d-b792-d770a8f6f021-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"063373b7-6898-409d-b792-d770a8f6f021\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.249730 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcgdp\" (UniqueName: \"kubernetes.io/projected/c039262e-df43-4664-af38-f0645377e9f6-kube-api-access-wcgdp\") pod \"console-7995bbc7cc-g5h8j\" (UID: \"c039262e-df43-4664-af38-f0645377e9f6\") " pod="openshift-console/console-7995bbc7cc-g5h8j" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.249747 4795 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c039262e-df43-4664-af38-f0645377e9f6-oauth-serving-cert\") pod \"console-7995bbc7cc-g5h8j\" (UID: \"c039262e-df43-4664-af38-f0645377e9f6\") " pod="openshift-console/console-7995bbc7cc-g5h8j" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.249871 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c039262e-df43-4664-af38-f0645377e9f6-service-ca\") pod \"console-7995bbc7cc-g5h8j\" (UID: \"c039262e-df43-4664-af38-f0645377e9f6\") " pod="openshift-console/console-7995bbc7cc-g5h8j" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.250112 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/063373b7-6898-409d-b792-d770a8f6f021-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"063373b7-6898-409d-b792-d770a8f6f021\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.250159 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89ch9\" (UniqueName: \"kubernetes.io/projected/063373b7-6898-409d-b792-d770a8f6f021-kube-api-access-89ch9\") pod \"prometheus-metric-storage-0\" (UID: \"063373b7-6898-409d-b792-d770a8f6f021\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.351810 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c039262e-df43-4664-af38-f0645377e9f6-console-serving-cert\") pod \"console-7995bbc7cc-g5h8j\" (UID: \"c039262e-df43-4664-af38-f0645377e9f6\") " pod="openshift-console/console-7995bbc7cc-g5h8j" Nov 29 08:02:32 
crc kubenswrapper[4795]: I1129 08:02:28.351869 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/063373b7-6898-409d-b792-d770a8f6f021-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"063373b7-6898-409d-b792-d770a8f6f021\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.351892 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c039262e-df43-4664-af38-f0645377e9f6-console-config\") pod \"console-7995bbc7cc-g5h8j\" (UID: \"c039262e-df43-4664-af38-f0645377e9f6\") " pod="openshift-console/console-7995bbc7cc-g5h8j" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.351910 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/063373b7-6898-409d-b792-d770a8f6f021-config\") pod \"prometheus-metric-storage-0\" (UID: \"063373b7-6898-409d-b792-d770a8f6f021\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.351937 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"prometheus-metric-storage-0\" (UID: \"063373b7-6898-409d-b792-d770a8f6f021\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.351959 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/063373b7-6898-409d-b792-d770a8f6f021-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"063373b7-6898-409d-b792-d770a8f6f021\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.351981 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c039262e-df43-4664-af38-f0645377e9f6-console-oauth-config\") pod \"console-7995bbc7cc-g5h8j\" (UID: \"c039262e-df43-4664-af38-f0645377e9f6\") " pod="openshift-console/console-7995bbc7cc-g5h8j" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.351998 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c039262e-df43-4664-af38-f0645377e9f6-trusted-ca-bundle\") pod \"console-7995bbc7cc-g5h8j\" (UID: \"c039262e-df43-4664-af38-f0645377e9f6\") " pod="openshift-console/console-7995bbc7cc-g5h8j" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.352018 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/063373b7-6898-409d-b792-d770a8f6f021-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"063373b7-6898-409d-b792-d770a8f6f021\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.352055 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/063373b7-6898-409d-b792-d770a8f6f021-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"063373b7-6898-409d-b792-d770a8f6f021\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.352089 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcgdp\" (UniqueName: \"kubernetes.io/projected/c039262e-df43-4664-af38-f0645377e9f6-kube-api-access-wcgdp\") pod \"console-7995bbc7cc-g5h8j\" (UID: \"c039262e-df43-4664-af38-f0645377e9f6\") " pod="openshift-console/console-7995bbc7cc-g5h8j" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.352112 4795 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c039262e-df43-4664-af38-f0645377e9f6-oauth-serving-cert\") pod \"console-7995bbc7cc-g5h8j\" (UID: \"c039262e-df43-4664-af38-f0645377e9f6\") " pod="openshift-console/console-7995bbc7cc-g5h8j" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.352161 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c039262e-df43-4664-af38-f0645377e9f6-service-ca\") pod \"console-7995bbc7cc-g5h8j\" (UID: \"c039262e-df43-4664-af38-f0645377e9f6\") " pod="openshift-console/console-7995bbc7cc-g5h8j" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.352227 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/063373b7-6898-409d-b792-d770a8f6f021-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"063373b7-6898-409d-b792-d770a8f6f021\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.352251 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89ch9\" (UniqueName: \"kubernetes.io/projected/063373b7-6898-409d-b792-d770a8f6f021-kube-api-access-89ch9\") pod \"prometheus-metric-storage-0\" (UID: \"063373b7-6898-409d-b792-d770a8f6f021\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.352282 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5194485f-a306-493d-a1a3-f33030371413-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-q86dd\" (UID: \"5194485f-a306-493d-a1a3-f33030371413\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-q86dd" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.352614 4795 operation_generator.go:580] 
"MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"prometheus-metric-storage-0\" (UID: \"063373b7-6898-409d-b792-d770a8f6f021\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/prometheus-metric-storage-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.353398 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c039262e-df43-4664-af38-f0645377e9f6-console-config\") pod \"console-7995bbc7cc-g5h8j\" (UID: \"c039262e-df43-4664-af38-f0645377e9f6\") " pod="openshift-console/console-7995bbc7cc-g5h8j" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.357400 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/063373b7-6898-409d-b792-d770a8f6f021-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"063373b7-6898-409d-b792-d770a8f6f021\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.358984 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c039262e-df43-4664-af38-f0645377e9f6-oauth-serving-cert\") pod \"console-7995bbc7cc-g5h8j\" (UID: \"c039262e-df43-4664-af38-f0645377e9f6\") " pod="openshift-console/console-7995bbc7cc-g5h8j" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.360641 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c039262e-df43-4664-af38-f0645377e9f6-trusted-ca-bundle\") pod \"console-7995bbc7cc-g5h8j\" (UID: \"c039262e-df43-4664-af38-f0645377e9f6\") " pod="openshift-console/console-7995bbc7cc-g5h8j" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.360760 4795 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c039262e-df43-4664-af38-f0645377e9f6-service-ca\") pod \"console-7995bbc7cc-g5h8j\" (UID: \"c039262e-df43-4664-af38-f0645377e9f6\") " pod="openshift-console/console-7995bbc7cc-g5h8j" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.370779 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/063373b7-6898-409d-b792-d770a8f6f021-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"063373b7-6898-409d-b792-d770a8f6f021\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.371425 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/063373b7-6898-409d-b792-d770a8f6f021-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"063373b7-6898-409d-b792-d770a8f6f021\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.371428 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5194485f-a306-493d-a1a3-f33030371413-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-q86dd\" (UID: \"5194485f-a306-493d-a1a3-f33030371413\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-q86dd" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.371508 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/063373b7-6898-409d-b792-d770a8f6f021-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"063373b7-6898-409d-b792-d770a8f6f021\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.371750 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c039262e-df43-4664-af38-f0645377e9f6-console-serving-cert\") pod \"console-7995bbc7cc-g5h8j\" (UID: \"c039262e-df43-4664-af38-f0645377e9f6\") " pod="openshift-console/console-7995bbc7cc-g5h8j" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.375604 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/063373b7-6898-409d-b792-d770a8f6f021-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"063373b7-6898-409d-b792-d770a8f6f021\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.376128 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/063373b7-6898-409d-b792-d770a8f6f021-config\") pod \"prometheus-metric-storage-0\" (UID: \"063373b7-6898-409d-b792-d770a8f6f021\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.378224 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c039262e-df43-4664-af38-f0645377e9f6-console-oauth-config\") pod \"console-7995bbc7cc-g5h8j\" (UID: \"c039262e-df43-4664-af38-f0645377e9f6\") " pod="openshift-console/console-7995bbc7cc-g5h8j" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.383298 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcgdp\" (UniqueName: \"kubernetes.io/projected/c039262e-df43-4664-af38-f0645377e9f6-kube-api-access-wcgdp\") pod \"console-7995bbc7cc-g5h8j\" (UID: \"c039262e-df43-4664-af38-f0645377e9f6\") " pod="openshift-console/console-7995bbc7cc-g5h8j" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.388459 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89ch9\" (UniqueName: 
\"kubernetes.io/projected/063373b7-6898-409d-b792-d770a8f6f021-kube-api-access-89ch9\") pod \"prometheus-metric-storage-0\" (UID: \"063373b7-6898-409d-b792-d770a8f6f021\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.407920 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7995bbc7cc-g5h8j" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.409679 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"prometheus-metric-storage-0\" (UID: \"063373b7-6898-409d-b792-d770a8f6f021\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.449994 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.599660 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-q86dd" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:28.758115 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-f8648f98b-wc858" podUID="402aa6f5-7950-4290-ab83-bd5bafa2a8d7" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.95:29150/metrics\": dial tcp 10.217.0.95:29150: i/o timeout (Client.Timeout exceeded while awaiting headers)" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:30.347465 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:30.354319 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:30.354420 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:30.358582 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:30.359266 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:30.359683 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:30.359897 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-88l5q" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:30.360121 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:30.522784 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/acd08e2d-0e1b-473c-ae31-d63d742d2061-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"acd08e2d-0e1b-473c-ae31-d63d742d2061\") " pod="openstack/ovsdbserver-nb-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:30.522889 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kksln\" (UniqueName: \"kubernetes.io/projected/acd08e2d-0e1b-473c-ae31-d63d742d2061-kube-api-access-kksln\") pod \"ovsdbserver-nb-0\" (UID: \"acd08e2d-0e1b-473c-ae31-d63d742d2061\") " pod="openstack/ovsdbserver-nb-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:30.522929 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/acd08e2d-0e1b-473c-ae31-d63d742d2061-metrics-certs-tls-certs\") pod 
\"ovsdbserver-nb-0\" (UID: \"acd08e2d-0e1b-473c-ae31-d63d742d2061\") " pod="openstack/ovsdbserver-nb-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:30.522962 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/acd08e2d-0e1b-473c-ae31-d63d742d2061-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"acd08e2d-0e1b-473c-ae31-d63d742d2061\") " pod="openstack/ovsdbserver-nb-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:30.523049 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/acd08e2d-0e1b-473c-ae31-d63d742d2061-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"acd08e2d-0e1b-473c-ae31-d63d742d2061\") " pod="openstack/ovsdbserver-nb-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:30.523075 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acd08e2d-0e1b-473c-ae31-d63d742d2061-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"acd08e2d-0e1b-473c-ae31-d63d742d2061\") " pod="openstack/ovsdbserver-nb-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:30.523104 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acd08e2d-0e1b-473c-ae31-d63d742d2061-config\") pod \"ovsdbserver-nb-0\" (UID: \"acd08e2d-0e1b-473c-ae31-d63d742d2061\") " pod="openstack/ovsdbserver-nb-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:30.523154 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"acd08e2d-0e1b-473c-ae31-d63d742d2061\") " pod="openstack/ovsdbserver-nb-0" Nov 29 
08:02:32 crc kubenswrapper[4795]: I1129 08:02:30.632903 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"acd08e2d-0e1b-473c-ae31-d63d742d2061\") " pod="openstack/ovsdbserver-nb-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:30.633178 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/acd08e2d-0e1b-473c-ae31-d63d742d2061-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"acd08e2d-0e1b-473c-ae31-d63d742d2061\") " pod="openstack/ovsdbserver-nb-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:30.634361 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/acd08e2d-0e1b-473c-ae31-d63d742d2061-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"acd08e2d-0e1b-473c-ae31-d63d742d2061\") " pod="openstack/ovsdbserver-nb-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:30.634498 4795 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"acd08e2d-0e1b-473c-ae31-d63d742d2061\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-nb-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:30.634566 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kksln\" (UniqueName: \"kubernetes.io/projected/acd08e2d-0e1b-473c-ae31-d63d742d2061-kube-api-access-kksln\") pod \"ovsdbserver-nb-0\" (UID: \"acd08e2d-0e1b-473c-ae31-d63d742d2061\") " pod="openstack/ovsdbserver-nb-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:30.634625 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/acd08e2d-0e1b-473c-ae31-d63d742d2061-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"acd08e2d-0e1b-473c-ae31-d63d742d2061\") " pod="openstack/ovsdbserver-nb-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:30.634668 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/acd08e2d-0e1b-473c-ae31-d63d742d2061-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"acd08e2d-0e1b-473c-ae31-d63d742d2061\") " pod="openstack/ovsdbserver-nb-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:30.634777 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/acd08e2d-0e1b-473c-ae31-d63d742d2061-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"acd08e2d-0e1b-473c-ae31-d63d742d2061\") " pod="openstack/ovsdbserver-nb-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:30.634805 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acd08e2d-0e1b-473c-ae31-d63d742d2061-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"acd08e2d-0e1b-473c-ae31-d63d742d2061\") " pod="openstack/ovsdbserver-nb-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:30.634836 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acd08e2d-0e1b-473c-ae31-d63d742d2061-config\") pod \"ovsdbserver-nb-0\" (UID: \"acd08e2d-0e1b-473c-ae31-d63d742d2061\") " pod="openstack/ovsdbserver-nb-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:30.636369 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acd08e2d-0e1b-473c-ae31-d63d742d2061-config\") pod \"ovsdbserver-nb-0\" (UID: \"acd08e2d-0e1b-473c-ae31-d63d742d2061\") " pod="openstack/ovsdbserver-nb-0" Nov 29 
08:02:32 crc kubenswrapper[4795]: I1129 08:02:30.636367 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/acd08e2d-0e1b-473c-ae31-d63d742d2061-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"acd08e2d-0e1b-473c-ae31-d63d742d2061\") " pod="openstack/ovsdbserver-nb-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:30.647638 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/acd08e2d-0e1b-473c-ae31-d63d742d2061-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"acd08e2d-0e1b-473c-ae31-d63d742d2061\") " pod="openstack/ovsdbserver-nb-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:30.648495 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acd08e2d-0e1b-473c-ae31-d63d742d2061-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"acd08e2d-0e1b-473c-ae31-d63d742d2061\") " pod="openstack/ovsdbserver-nb-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:30.659833 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kksln\" (UniqueName: \"kubernetes.io/projected/acd08e2d-0e1b-473c-ae31-d63d742d2061-kube-api-access-kksln\") pod \"ovsdbserver-nb-0\" (UID: \"acd08e2d-0e1b-473c-ae31-d63d742d2061\") " pod="openstack/ovsdbserver-nb-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:30.683693 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"acd08e2d-0e1b-473c-ae31-d63d742d2061\") " pod="openstack/ovsdbserver-nb-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:30.689370 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/acd08e2d-0e1b-473c-ae31-d63d742d2061-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"acd08e2d-0e1b-473c-ae31-d63d742d2061\") " pod="openstack/ovsdbserver-nb-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:30.987905 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.194383 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-r5w67"] Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.202866 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-r5w67" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.205333 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.208480 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-jpmkg" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.210379 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.227214 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-r5w67"] Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.277876 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-fjltq"] Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.287716 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-fjltq" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.295840 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-fjltq"] Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.369090 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnlz8\" (UniqueName: \"kubernetes.io/projected/77e980be-cb41-448f-96d7-0c99fec4d400-kube-api-access-fnlz8\") pod \"ovn-controller-r5w67\" (UID: \"77e980be-cb41-448f-96d7-0c99fec4d400\") " pod="openstack/ovn-controller-r5w67" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.369211 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/77e980be-cb41-448f-96d7-0c99fec4d400-var-log-ovn\") pod \"ovn-controller-r5w67\" (UID: \"77e980be-cb41-448f-96d7-0c99fec4d400\") " pod="openstack/ovn-controller-r5w67" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.369296 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/77e980be-cb41-448f-96d7-0c99fec4d400-ovn-controller-tls-certs\") pod \"ovn-controller-r5w67\" (UID: \"77e980be-cb41-448f-96d7-0c99fec4d400\") " pod="openstack/ovn-controller-r5w67" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.369358 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/77e980be-cb41-448f-96d7-0c99fec4d400-var-run\") pod \"ovn-controller-r5w67\" (UID: \"77e980be-cb41-448f-96d7-0c99fec4d400\") " pod="openstack/ovn-controller-r5w67" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.369405 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/77e980be-cb41-448f-96d7-0c99fec4d400-combined-ca-bundle\") pod \"ovn-controller-r5w67\" (UID: \"77e980be-cb41-448f-96d7-0c99fec4d400\") " pod="openstack/ovn-controller-r5w67" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.369441 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/77e980be-cb41-448f-96d7-0c99fec4d400-var-run-ovn\") pod \"ovn-controller-r5w67\" (UID: \"77e980be-cb41-448f-96d7-0c99fec4d400\") " pod="openstack/ovn-controller-r5w67" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.369491 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/77e980be-cb41-448f-96d7-0c99fec4d400-scripts\") pod \"ovn-controller-r5w67\" (UID: \"77e980be-cb41-448f-96d7-0c99fec4d400\") " pod="openstack/ovn-controller-r5w67" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.471962 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/77e980be-cb41-448f-96d7-0c99fec4d400-var-run\") pod \"ovn-controller-r5w67\" (UID: \"77e980be-cb41-448f-96d7-0c99fec4d400\") " pod="openstack/ovn-controller-r5w67" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.472035 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md7ch\" (UniqueName: \"kubernetes.io/projected/e351bdf6-6e04-4bbd-bfae-e28c7bf2179f-kube-api-access-md7ch\") pod \"ovn-controller-ovs-fjltq\" (UID: \"e351bdf6-6e04-4bbd-bfae-e28c7bf2179f\") " pod="openstack/ovn-controller-ovs-fjltq" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.472093 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/e351bdf6-6e04-4bbd-bfae-e28c7bf2179f-var-run\") pod \"ovn-controller-ovs-fjltq\" (UID: \"e351bdf6-6e04-4bbd-bfae-e28c7bf2179f\") " pod="openstack/ovn-controller-ovs-fjltq" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.472164 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77e980be-cb41-448f-96d7-0c99fec4d400-combined-ca-bundle\") pod \"ovn-controller-r5w67\" (UID: \"77e980be-cb41-448f-96d7-0c99fec4d400\") " pod="openstack/ovn-controller-r5w67" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.472216 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/e351bdf6-6e04-4bbd-bfae-e28c7bf2179f-var-lib\") pod \"ovn-controller-ovs-fjltq\" (UID: \"e351bdf6-6e04-4bbd-bfae-e28c7bf2179f\") " pod="openstack/ovn-controller-ovs-fjltq" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.472244 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/77e980be-cb41-448f-96d7-0c99fec4d400-var-run-ovn\") pod \"ovn-controller-r5w67\" (UID: \"77e980be-cb41-448f-96d7-0c99fec4d400\") " pod="openstack/ovn-controller-r5w67" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.473739 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/77e980be-cb41-448f-96d7-0c99fec4d400-var-run-ovn\") pod \"ovn-controller-r5w67\" (UID: \"77e980be-cb41-448f-96d7-0c99fec4d400\") " pod="openstack/ovn-controller-r5w67" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.474101 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/77e980be-cb41-448f-96d7-0c99fec4d400-var-run\") pod \"ovn-controller-r5w67\" (UID: \"77e980be-cb41-448f-96d7-0c99fec4d400\") 
" pod="openstack/ovn-controller-r5w67" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.476771 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/77e980be-cb41-448f-96d7-0c99fec4d400-scripts\") pod \"ovn-controller-r5w67\" (UID: \"77e980be-cb41-448f-96d7-0c99fec4d400\") " pod="openstack/ovn-controller-r5w67" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.476814 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/77e980be-cb41-448f-96d7-0c99fec4d400-scripts\") pod \"ovn-controller-r5w67\" (UID: \"77e980be-cb41-448f-96d7-0c99fec4d400\") " pod="openstack/ovn-controller-r5w67" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.477019 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnlz8\" (UniqueName: \"kubernetes.io/projected/77e980be-cb41-448f-96d7-0c99fec4d400-kube-api-access-fnlz8\") pod \"ovn-controller-r5w67\" (UID: \"77e980be-cb41-448f-96d7-0c99fec4d400\") " pod="openstack/ovn-controller-r5w67" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.477952 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e351bdf6-6e04-4bbd-bfae-e28c7bf2179f-scripts\") pod \"ovn-controller-ovs-fjltq\" (UID: \"e351bdf6-6e04-4bbd-bfae-e28c7bf2179f\") " pod="openstack/ovn-controller-ovs-fjltq" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.479372 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/77e980be-cb41-448f-96d7-0c99fec4d400-var-log-ovn\") pod \"ovn-controller-r5w67\" (UID: \"77e980be-cb41-448f-96d7-0c99fec4d400\") " pod="openstack/ovn-controller-r5w67" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.479555 4795 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/77e980be-cb41-448f-96d7-0c99fec4d400-var-log-ovn\") pod \"ovn-controller-r5w67\" (UID: \"77e980be-cb41-448f-96d7-0c99fec4d400\") " pod="openstack/ovn-controller-r5w67" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.479725 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/77e980be-cb41-448f-96d7-0c99fec4d400-ovn-controller-tls-certs\") pod \"ovn-controller-r5w67\" (UID: \"77e980be-cb41-448f-96d7-0c99fec4d400\") " pod="openstack/ovn-controller-r5w67" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.480210 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/e351bdf6-6e04-4bbd-bfae-e28c7bf2179f-var-log\") pod \"ovn-controller-ovs-fjltq\" (UID: \"e351bdf6-6e04-4bbd-bfae-e28c7bf2179f\") " pod="openstack/ovn-controller-ovs-fjltq" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.480240 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/e351bdf6-6e04-4bbd-bfae-e28c7bf2179f-etc-ovs\") pod \"ovn-controller-ovs-fjltq\" (UID: \"e351bdf6-6e04-4bbd-bfae-e28c7bf2179f\") " pod="openstack/ovn-controller-ovs-fjltq" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.485271 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/77e980be-cb41-448f-96d7-0c99fec4d400-ovn-controller-tls-certs\") pod \"ovn-controller-r5w67\" (UID: \"77e980be-cb41-448f-96d7-0c99fec4d400\") " pod="openstack/ovn-controller-r5w67" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.497272 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/77e980be-cb41-448f-96d7-0c99fec4d400-combined-ca-bundle\") pod \"ovn-controller-r5w67\" (UID: \"77e980be-cb41-448f-96d7-0c99fec4d400\") " pod="openstack/ovn-controller-r5w67" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.497609 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnlz8\" (UniqueName: \"kubernetes.io/projected/77e980be-cb41-448f-96d7-0c99fec4d400-kube-api-access-fnlz8\") pod \"ovn-controller-r5w67\" (UID: \"77e980be-cb41-448f-96d7-0c99fec4d400\") " pod="openstack/ovn-controller-r5w67" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.584739 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md7ch\" (UniqueName: \"kubernetes.io/projected/e351bdf6-6e04-4bbd-bfae-e28c7bf2179f-kube-api-access-md7ch\") pod \"ovn-controller-ovs-fjltq\" (UID: \"e351bdf6-6e04-4bbd-bfae-e28c7bf2179f\") " pod="openstack/ovn-controller-ovs-fjltq" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.584798 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e351bdf6-6e04-4bbd-bfae-e28c7bf2179f-var-run\") pod \"ovn-controller-ovs-fjltq\" (UID: \"e351bdf6-6e04-4bbd-bfae-e28c7bf2179f\") " pod="openstack/ovn-controller-ovs-fjltq" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.584830 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/e351bdf6-6e04-4bbd-bfae-e28c7bf2179f-var-lib\") pod \"ovn-controller-ovs-fjltq\" (UID: \"e351bdf6-6e04-4bbd-bfae-e28c7bf2179f\") " pod="openstack/ovn-controller-ovs-fjltq" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.584919 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e351bdf6-6e04-4bbd-bfae-e28c7bf2179f-scripts\") pod \"ovn-controller-ovs-fjltq\" (UID: 
\"e351bdf6-6e04-4bbd-bfae-e28c7bf2179f\") " pod="openstack/ovn-controller-ovs-fjltq" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.585002 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/e351bdf6-6e04-4bbd-bfae-e28c7bf2179f-var-log\") pod \"ovn-controller-ovs-fjltq\" (UID: \"e351bdf6-6e04-4bbd-bfae-e28c7bf2179f\") " pod="openstack/ovn-controller-ovs-fjltq" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.585020 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/e351bdf6-6e04-4bbd-bfae-e28c7bf2179f-etc-ovs\") pod \"ovn-controller-ovs-fjltq\" (UID: \"e351bdf6-6e04-4bbd-bfae-e28c7bf2179f\") " pod="openstack/ovn-controller-ovs-fjltq" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.585067 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e351bdf6-6e04-4bbd-bfae-e28c7bf2179f-var-run\") pod \"ovn-controller-ovs-fjltq\" (UID: \"e351bdf6-6e04-4bbd-bfae-e28c7bf2179f\") " pod="openstack/ovn-controller-ovs-fjltq" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.585311 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/e351bdf6-6e04-4bbd-bfae-e28c7bf2179f-etc-ovs\") pod \"ovn-controller-ovs-fjltq\" (UID: \"e351bdf6-6e04-4bbd-bfae-e28c7bf2179f\") " pod="openstack/ovn-controller-ovs-fjltq" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.585492 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/e351bdf6-6e04-4bbd-bfae-e28c7bf2179f-var-lib\") pod \"ovn-controller-ovs-fjltq\" (UID: \"e351bdf6-6e04-4bbd-bfae-e28c7bf2179f\") " pod="openstack/ovn-controller-ovs-fjltq" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.585581 4795 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/e351bdf6-6e04-4bbd-bfae-e28c7bf2179f-var-log\") pod \"ovn-controller-ovs-fjltq\" (UID: \"e351bdf6-6e04-4bbd-bfae-e28c7bf2179f\") " pod="openstack/ovn-controller-ovs-fjltq" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.585844 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-r5w67" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.587202 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e351bdf6-6e04-4bbd-bfae-e28c7bf2179f-scripts\") pod \"ovn-controller-ovs-fjltq\" (UID: \"e351bdf6-6e04-4bbd-bfae-e28c7bf2179f\") " pod="openstack/ovn-controller-ovs-fjltq" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.603281 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md7ch\" (UniqueName: \"kubernetes.io/projected/e351bdf6-6e04-4bbd-bfae-e28c7bf2179f-kube-api-access-md7ch\") pod \"ovn-controller-ovs-fjltq\" (UID: \"e351bdf6-6e04-4bbd-bfae-e28c7bf2179f\") " pod="openstack/ovn-controller-ovs-fjltq" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:31.623583 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-fjltq" Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:32.585906 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 29 08:02:32 crc kubenswrapper[4795]: I1129 08:02:32.982882 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-q86dd"] Nov 29 08:02:33 crc kubenswrapper[4795]: I1129 08:02:33.165202 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7995bbc7cc-g5h8j"] Nov 29 08:02:33 crc kubenswrapper[4795]: I1129 08:02:33.187062 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 29 08:02:34 crc kubenswrapper[4795]: I1129 08:02:34.134640 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 29 08:02:34 crc kubenswrapper[4795]: I1129 08:02:34.137382 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 29 08:02:34 crc kubenswrapper[4795]: I1129 08:02:34.142545 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 29 08:02:34 crc kubenswrapper[4795]: I1129 08:02:34.143037 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-xqqr9" Nov 29 08:02:34 crc kubenswrapper[4795]: I1129 08:02:34.143279 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 29 08:02:34 crc kubenswrapper[4795]: I1129 08:02:34.143417 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 29 08:02:34 crc kubenswrapper[4795]: I1129 08:02:34.151192 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 29 08:02:34 crc kubenswrapper[4795]: I1129 08:02:34.303618 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75918c04-f960-4321-8894-582921ced50d-config\") pod \"ovsdbserver-sb-0\" (UID: \"75918c04-f960-4321-8894-582921ced50d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 08:02:34 crc kubenswrapper[4795]: I1129 08:02:34.303720 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/75918c04-f960-4321-8894-582921ced50d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"75918c04-f960-4321-8894-582921ced50d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 08:02:34 crc kubenswrapper[4795]: I1129 08:02:34.303878 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/75918c04-f960-4321-8894-582921ced50d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"75918c04-f960-4321-8894-582921ced50d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 08:02:34 crc kubenswrapper[4795]: I1129 08:02:34.303988 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2f64\" (UniqueName: \"kubernetes.io/projected/75918c04-f960-4321-8894-582921ced50d-kube-api-access-b2f64\") pod \"ovsdbserver-sb-0\" (UID: \"75918c04-f960-4321-8894-582921ced50d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 08:02:34 crc kubenswrapper[4795]: I1129 08:02:34.304102 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75918c04-f960-4321-8894-582921ced50d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"75918c04-f960-4321-8894-582921ced50d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 08:02:34 crc kubenswrapper[4795]: I1129 08:02:34.304137 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"75918c04-f960-4321-8894-582921ced50d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 08:02:34 crc kubenswrapper[4795]: I1129 08:02:34.304235 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/75918c04-f960-4321-8894-582921ced50d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"75918c04-f960-4321-8894-582921ced50d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 08:02:34 crc kubenswrapper[4795]: I1129 08:02:34.304338 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/75918c04-f960-4321-8894-582921ced50d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"75918c04-f960-4321-8894-582921ced50d\") " 
pod="openstack/ovsdbserver-sb-0" Nov 29 08:02:34 crc kubenswrapper[4795]: I1129 08:02:34.406890 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/75918c04-f960-4321-8894-582921ced50d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"75918c04-f960-4321-8894-582921ced50d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 08:02:34 crc kubenswrapper[4795]: I1129 08:02:34.406979 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/75918c04-f960-4321-8894-582921ced50d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"75918c04-f960-4321-8894-582921ced50d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 08:02:34 crc kubenswrapper[4795]: I1129 08:02:34.407021 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2f64\" (UniqueName: \"kubernetes.io/projected/75918c04-f960-4321-8894-582921ced50d-kube-api-access-b2f64\") pod \"ovsdbserver-sb-0\" (UID: \"75918c04-f960-4321-8894-582921ced50d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 08:02:34 crc kubenswrapper[4795]: I1129 08:02:34.407080 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75918c04-f960-4321-8894-582921ced50d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"75918c04-f960-4321-8894-582921ced50d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 08:02:34 crc kubenswrapper[4795]: I1129 08:02:34.407115 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"75918c04-f960-4321-8894-582921ced50d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 08:02:34 crc kubenswrapper[4795]: I1129 08:02:34.407167 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/75918c04-f960-4321-8894-582921ced50d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"75918c04-f960-4321-8894-582921ced50d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 08:02:34 crc kubenswrapper[4795]: I1129 08:02:34.407229 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/75918c04-f960-4321-8894-582921ced50d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"75918c04-f960-4321-8894-582921ced50d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 08:02:34 crc kubenswrapper[4795]: I1129 08:02:34.407284 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75918c04-f960-4321-8894-582921ced50d-config\") pod \"ovsdbserver-sb-0\" (UID: \"75918c04-f960-4321-8894-582921ced50d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 08:02:34 crc kubenswrapper[4795]: I1129 08:02:34.407508 4795 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"75918c04-f960-4321-8894-582921ced50d\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-sb-0" Nov 29 08:02:34 crc kubenswrapper[4795]: I1129 08:02:34.409452 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75918c04-f960-4321-8894-582921ced50d-config\") pod \"ovsdbserver-sb-0\" (UID: \"75918c04-f960-4321-8894-582921ced50d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 08:02:34 crc kubenswrapper[4795]: I1129 08:02:34.410225 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/75918c04-f960-4321-8894-582921ced50d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"75918c04-f960-4321-8894-582921ced50d\") " 
pod="openstack/ovsdbserver-sb-0" Nov 29 08:02:34 crc kubenswrapper[4795]: I1129 08:02:34.412526 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/75918c04-f960-4321-8894-582921ced50d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"75918c04-f960-4321-8894-582921ced50d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 08:02:34 crc kubenswrapper[4795]: I1129 08:02:34.415936 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75918c04-f960-4321-8894-582921ced50d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"75918c04-f960-4321-8894-582921ced50d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 08:02:34 crc kubenswrapper[4795]: I1129 08:02:34.418121 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/75918c04-f960-4321-8894-582921ced50d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"75918c04-f960-4321-8894-582921ced50d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 08:02:34 crc kubenswrapper[4795]: I1129 08:02:34.427579 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/75918c04-f960-4321-8894-582921ced50d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"75918c04-f960-4321-8894-582921ced50d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 08:02:34 crc kubenswrapper[4795]: I1129 08:02:34.430011 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2f64\" (UniqueName: \"kubernetes.io/projected/75918c04-f960-4321-8894-582921ced50d-kube-api-access-b2f64\") pod \"ovsdbserver-sb-0\" (UID: \"75918c04-f960-4321-8894-582921ced50d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 08:02:34 crc kubenswrapper[4795]: I1129 08:02:34.465000 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"75918c04-f960-4321-8894-582921ced50d\") " pod="openstack/ovsdbserver-sb-0" Nov 29 08:02:34 crc kubenswrapper[4795]: I1129 08:02:34.471297 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 29 08:02:36 crc kubenswrapper[4795]: I1129 08:02:36.648968 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-98nxw" podUID="e3051c02-6211-46f9-a843-9ea284aa2b72" containerName="registry-server" probeResult="failure" output=< Nov 29 08:02:36 crc kubenswrapper[4795]: timeout: failed to connect service ":50051" within 1s Nov 29 08:02:36 crc kubenswrapper[4795]: > Nov 29 08:02:36 crc kubenswrapper[4795]: W1129 08:02:36.933280 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5194485f_a306_493d_a1a3_f33030371413.slice/crio-bf40ca8225a79a6864dd76552199690428377c46981aecdf6d11d7a2b3704b96 WatchSource:0}: Error finding container bf40ca8225a79a6864dd76552199690428377c46981aecdf6d11d7a2b3704b96: Status 404 returned error can't find the container with id bf40ca8225a79a6864dd76552199690428377c46981aecdf6d11d7a2b3704b96 Nov 29 08:02:36 crc kubenswrapper[4795]: W1129 08:02:36.944101 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bbcaf05_b2f0_4958_b862_50bb9d1f62b5.slice/crio-1e9b15ef8efeb88244e9fa6f20a0447fd7be575328209675f927989d98f26205 WatchSource:0}: Error finding container 1e9b15ef8efeb88244e9fa6f20a0447fd7be575328209675f927989d98f26205: Status 404 returned error can't find the container with id 1e9b15ef8efeb88244e9fa6f20a0447fd7be575328209675f927989d98f26205 Nov 29 08:02:36 crc kubenswrapper[4795]: W1129 08:02:36.946201 4795 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc039262e_df43_4664_af38_f0645377e9f6.slice/crio-1771ad2f5eb1b71435f26a8e79a5ae3bb9e4c186e639e8611c6f59e941698bbd WatchSource:0}: Error finding container 1771ad2f5eb1b71435f26a8e79a5ae3bb9e4c186e639e8611c6f59e941698bbd: Status 404 returned error can't find the container with id 1771ad2f5eb1b71435f26a8e79a5ae3bb9e4c186e639e8611c6f59e941698bbd Nov 29 08:02:37 crc kubenswrapper[4795]: I1129 08:02:37.607268 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"063373b7-6898-409d-b792-d770a8f6f021","Type":"ContainerStarted","Data":"05207f7e389e76235bf61effb8cd38b5460dff57b356f4a7773b9132b187506e"} Nov 29 08:02:37 crc kubenswrapper[4795]: I1129 08:02:37.609965 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-q86dd" event={"ID":"5194485f-a306-493d-a1a3-f33030371413","Type":"ContainerStarted","Data":"bf40ca8225a79a6864dd76552199690428377c46981aecdf6d11d7a2b3704b96"} Nov 29 08:02:37 crc kubenswrapper[4795]: I1129 08:02:37.612924 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1bbcaf05-b2f0-4958-b862-50bb9d1f62b5","Type":"ContainerStarted","Data":"1e9b15ef8efeb88244e9fa6f20a0447fd7be575328209675f927989d98f26205"} Nov 29 08:02:37 crc kubenswrapper[4795]: I1129 08:02:37.614576 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7995bbc7cc-g5h8j" event={"ID":"c039262e-df43-4664-af38-f0645377e9f6","Type":"ContainerStarted","Data":"1771ad2f5eb1b71435f26a8e79a5ae3bb9e4c186e639e8611c6f59e941698bbd"} Nov 29 08:02:45 crc kubenswrapper[4795]: I1129 08:02:45.619915 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-98nxw" Nov 29 08:02:45 crc kubenswrapper[4795]: I1129 08:02:45.681518 4795 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-98nxw" Nov 29 08:02:45 crc kubenswrapper[4795]: I1129 08:02:45.860904 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-98nxw"] Nov 29 08:02:46 crc kubenswrapper[4795]: E1129 08:02:46.196955 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Nov 29 08:02:46 crc kubenswrapper[4795]: E1129 08:02:46.197157 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mlllp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(28c13f65-78c4-4d4a-8960-7ef17a4c93e8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 08:02:46 crc 
kubenswrapper[4795]: E1129 08:02:46.198364 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="28c13f65-78c4-4d4a-8960-7ef17a4c93e8" Nov 29 08:02:46 crc kubenswrapper[4795]: E1129 08:02:46.215988 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Nov 29 08:02:46 crc kubenswrapper[4795]: E1129 08:02:46.216203 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tkc5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(74169c45-99e0-4179-a18b-07a1c2cade8b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 08:02:46 crc 
kubenswrapper[4795]: E1129 08:02:46.217540 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="74169c45-99e0-4179-a18b-07a1c2cade8b" Nov 29 08:02:46 crc kubenswrapper[4795]: I1129 08:02:46.725863 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-98nxw" podUID="e3051c02-6211-46f9-a843-9ea284aa2b72" containerName="registry-server" containerID="cri-o://c99084b0cecf35106fde8985c9da78a0c45f4be50196f0996a8fb98372b7b630" gracePeriod=2 Nov 29 08:02:46 crc kubenswrapper[4795]: E1129 08:02:46.727607 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="28c13f65-78c4-4d4a-8960-7ef17a4c93e8" Nov 29 08:02:46 crc kubenswrapper[4795]: E1129 08:02:46.727721 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="74169c45-99e0-4179-a18b-07a1c2cade8b" Nov 29 08:02:46 crc kubenswrapper[4795]: E1129 08:02:46.876988 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Nov 29 08:02:46 crc kubenswrapper[4795]: E1129 08:02:46.877219 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:nc4h659h584h5d9h65bh666hbdhd4h67fh5b7h594h5fbh599h67fh8bh65dh577hc9hfchc6h5b7h587h5b5h587h65fh5b6h664h5dchf6h5c6h88h578q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7cdvq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{Prob
eHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(ab20548b-7f96-4eb8-aa44-80425459c0ed): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 08:02:46 crc kubenswrapper[4795]: E1129 08:02:46.878409 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="ab20548b-7f96-4eb8-aa44-80425459c0ed" Nov 29 08:02:47 crc kubenswrapper[4795]: I1129 08:02:47.743076 4795 generic.go:334] "Generic (PLEG): container finished" podID="e3051c02-6211-46f9-a843-9ea284aa2b72" containerID="c99084b0cecf35106fde8985c9da78a0c45f4be50196f0996a8fb98372b7b630" exitCode=0 Nov 29 08:02:47 crc kubenswrapper[4795]: I1129 08:02:47.743146 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-98nxw" event={"ID":"e3051c02-6211-46f9-a843-9ea284aa2b72","Type":"ContainerDied","Data":"c99084b0cecf35106fde8985c9da78a0c45f4be50196f0996a8fb98372b7b630"} Nov 29 08:02:47 crc kubenswrapper[4795]: E1129 08:02:47.746638 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="ab20548b-7f96-4eb8-aa44-80425459c0ed" Nov 29 08:02:53 crc kubenswrapper[4795]: E1129 08:02:53.492733 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Nov 29 08:02:53 crc kubenswrapper[4795]: E1129 08:02:53.493454 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xsftz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
openstack-cell1-galera-0_openstack(a9c19857-e09b-4c26-bf5a-a64655eaa024): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 08:02:53 crc kubenswrapper[4795]: E1129 08:02:53.494975 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="a9c19857-e09b-4c26-bf5a-a64655eaa024" Nov 29 08:02:53 crc kubenswrapper[4795]: E1129 08:02:53.575238 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Nov 29 08:02:53 crc kubenswrapper[4795]: E1129 08:02:53.575382 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7qvds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(a2fd879b-6f46-437e-acf0-c60e879af239): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 08:02:53 crc kubenswrapper[4795]: E1129 08:02:53.576983 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="a2fd879b-6f46-437e-acf0-c60e879af239" Nov 29 08:02:53 crc kubenswrapper[4795]: E1129 08:02:53.805881 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="a9c19857-e09b-4c26-bf5a-a64655eaa024" Nov 29 08:02:53 crc kubenswrapper[4795]: E1129 08:02:53.806349 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="a2fd879b-6f46-437e-acf0-c60e879af239" Nov 29 08:02:54 crc kubenswrapper[4795]: I1129 08:02:54.017909 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-r5w67"] Nov 29 08:02:55 crc kubenswrapper[4795]: E1129 08:02:55.574535 4795 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c99084b0cecf35106fde8985c9da78a0c45f4be50196f0996a8fb98372b7b630 is running failed: container process not found" containerID="c99084b0cecf35106fde8985c9da78a0c45f4be50196f0996a8fb98372b7b630" cmd=["grpc_health_probe","-addr=:50051"] Nov 29 08:02:55 crc kubenswrapper[4795]: E1129 08:02:55.575340 4795 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = 
container is not created or running: checking if PID of c99084b0cecf35106fde8985c9da78a0c45f4be50196f0996a8fb98372b7b630 is running failed: container process not found" containerID="c99084b0cecf35106fde8985c9da78a0c45f4be50196f0996a8fb98372b7b630" cmd=["grpc_health_probe","-addr=:50051"] Nov 29 08:02:55 crc kubenswrapper[4795]: E1129 08:02:55.575739 4795 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c99084b0cecf35106fde8985c9da78a0c45f4be50196f0996a8fb98372b7b630 is running failed: container process not found" containerID="c99084b0cecf35106fde8985c9da78a0c45f4be50196f0996a8fb98372b7b630" cmd=["grpc_health_probe","-addr=:50051"] Nov 29 08:02:55 crc kubenswrapper[4795]: E1129 08:02:55.575808 4795 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c99084b0cecf35106fde8985c9da78a0c45f4be50196f0996a8fb98372b7b630 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-98nxw" podUID="e3051c02-6211-46f9-a843-9ea284aa2b72" containerName="registry-server" Nov 29 08:02:55 crc kubenswrapper[4795]: E1129 08:02:55.639667 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 29 08:02:55 crc kubenswrapper[4795]: E1129 08:02:55.639855 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lkwbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-52w68_openstack(8159fb0f-c9c2-40f0-9851-ed3b69d07813): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 08:02:55 crc kubenswrapper[4795]: E1129 08:02:55.641248 4795 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-52w68" podUID="8159fb0f-c9c2-40f0-9851-ed3b69d07813" Nov 29 08:02:55 crc kubenswrapper[4795]: E1129 08:02:55.644797 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 29 08:02:55 crc kubenswrapper[4795]: E1129 08:02:55.644956 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9jd9s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-2dl4b_openstack(d4434e71-afe0-404d-aeec-6c080ff6a767): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 08:02:55 crc kubenswrapper[4795]: E1129 08:02:55.646172 4795 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-2dl4b" podUID="d4434e71-afe0-404d-aeec-6c080ff6a767" Nov 29 08:02:55 crc kubenswrapper[4795]: E1129 08:02:55.822616 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-2dl4b" podUID="d4434e71-afe0-404d-aeec-6c080ff6a767" Nov 29 08:02:55 crc kubenswrapper[4795]: E1129 08:02:55.896829 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:a69da8bbca8a28dd2925f864d51cc31cf761b10532c553095ba40b242ef701cb" Nov 29 08:02:55 crc kubenswrapper[4795]: E1129 08:02:55.897017 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:observability-ui-dashboards,Image:registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:a69da8bbca8a28dd2925f864d51cc31cf761b10532c553095ba40b242ef701cb,Command:[],Args:[-port=9443 -cert=/var/serving-cert/tls.crt 
-key=/var/serving-cert/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:web,HostPort:0,ContainerPort:9443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serving-cert,ReadOnly:true,MountPath:/var/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zl7ls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod observability-ui-dashboards-7d5fb4cbfb-q86dd_openshift-operators(5194485f-a306-493d-a1a3-f33030371413): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 29 08:02:55 crc kubenswrapper[4795]: E1129 08:02:55.898298 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"observability-ui-dashboards\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-q86dd" podUID="5194485f-a306-493d-a1a3-f33030371413" Nov 29 08:02:56 crc 
kubenswrapper[4795]: E1129 08:02:56.330562 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:1133c973c7472c665f910a722e19c8e2e27accb34b90fab67f14548627ce9c62" Nov 29 08:02:56 crc kubenswrapper[4795]: E1129 08:02:56.330818 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init-config-reloader,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:1133c973c7472c665f910a722e19c8e2e27accb34b90fab67f14548627ce9c62,Command:[/bin/prometheus-config-reloader],Args:[--watch-interval=0 --listen-address=:8081 --config-file=/etc/prometheus/config/prometheus.yaml.gz --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:reloader-init,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:SHARD,Value:0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/prometheus/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-out,ReadOnly:false,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-0,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-89ch9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(063373b7-6898-409d-b792-d770a8f6f021): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 29 08:02:56 crc kubenswrapper[4795]: E1129 08:02:56.332107 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/prometheus-metric-storage-0" 
podUID="063373b7-6898-409d-b792-d770a8f6f021" Nov 29 08:02:56 crc kubenswrapper[4795]: E1129 08:02:56.371066 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 29 08:02:56 crc kubenswrapper[4795]: E1129 08:02:56.371748 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-94t9h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil
,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-rpwls_openstack(d8e7e29e-48b1-4cd6-bdb7-81e790e6d06b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 08:02:56 crc kubenswrapper[4795]: E1129 08:02:56.373015 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-rpwls" podUID="d8e7e29e-48b1-4cd6-bdb7-81e790e6d06b" Nov 29 08:02:56 crc kubenswrapper[4795]: E1129 08:02:56.398274 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 29 08:02:56 crc kubenswrapper[4795]: E1129 08:02:56.398550 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2dckt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-8qck9_openstack(3632abbe-bcd4-4af0-a0b4-05a09b26ccd0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 08:02:56 crc kubenswrapper[4795]: E1129 08:02:56.400463 4795 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-8qck9" podUID="3632abbe-bcd4-4af0-a0b4-05a09b26ccd0" Nov 29 08:02:56 crc kubenswrapper[4795]: I1129 08:02:56.561059 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-52w68" Nov 29 08:02:56 crc kubenswrapper[4795]: I1129 08:02:56.723911 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkwbz\" (UniqueName: \"kubernetes.io/projected/8159fb0f-c9c2-40f0-9851-ed3b69d07813-kube-api-access-lkwbz\") pod \"8159fb0f-c9c2-40f0-9851-ed3b69d07813\" (UID: \"8159fb0f-c9c2-40f0-9851-ed3b69d07813\") " Nov 29 08:02:56 crc kubenswrapper[4795]: I1129 08:02:56.724000 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8159fb0f-c9c2-40f0-9851-ed3b69d07813-dns-svc\") pod \"8159fb0f-c9c2-40f0-9851-ed3b69d07813\" (UID: \"8159fb0f-c9c2-40f0-9851-ed3b69d07813\") " Nov 29 08:02:56 crc kubenswrapper[4795]: I1129 08:02:56.724141 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8159fb0f-c9c2-40f0-9851-ed3b69d07813-config\") pod \"8159fb0f-c9c2-40f0-9851-ed3b69d07813\" (UID: \"8159fb0f-c9c2-40f0-9851-ed3b69d07813\") " Nov 29 08:02:56 crc kubenswrapper[4795]: I1129 08:02:56.724562 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8159fb0f-c9c2-40f0-9851-ed3b69d07813-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8159fb0f-c9c2-40f0-9851-ed3b69d07813" (UID: "8159fb0f-c9c2-40f0-9851-ed3b69d07813"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:02:56 crc kubenswrapper[4795]: I1129 08:02:56.724576 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8159fb0f-c9c2-40f0-9851-ed3b69d07813-config" (OuterVolumeSpecName: "config") pod "8159fb0f-c9c2-40f0-9851-ed3b69d07813" (UID: "8159fb0f-c9c2-40f0-9851-ed3b69d07813"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:02:56 crc kubenswrapper[4795]: I1129 08:02:56.734314 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8159fb0f-c9c2-40f0-9851-ed3b69d07813-kube-api-access-lkwbz" (OuterVolumeSpecName: "kube-api-access-lkwbz") pod "8159fb0f-c9c2-40f0-9851-ed3b69d07813" (UID: "8159fb0f-c9c2-40f0-9851-ed3b69d07813"). InnerVolumeSpecName "kube-api-access-lkwbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:02:56 crc kubenswrapper[4795]: I1129 08:02:56.951946 4795 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8159fb0f-c9c2-40f0-9851-ed3b69d07813-config\") on node \"crc\" DevicePath \"\"" Nov 29 08:02:56 crc kubenswrapper[4795]: I1129 08:02:56.952239 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkwbz\" (UniqueName: \"kubernetes.io/projected/8159fb0f-c9c2-40f0-9851-ed3b69d07813-kube-api-access-lkwbz\") on node \"crc\" DevicePath \"\"" Nov 29 08:02:56 crc kubenswrapper[4795]: I1129 08:02:56.952253 4795 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8159fb0f-c9c2-40f0-9851-ed3b69d07813-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 08:02:56 crc kubenswrapper[4795]: I1129 08:02:56.962469 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-98nxw" Nov 29 08:02:56 crc kubenswrapper[4795]: I1129 08:02:56.963183 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-r5w67" event={"ID":"77e980be-cb41-448f-96d7-0c99fec4d400","Type":"ContainerStarted","Data":"78e02b4b0d03af07b2f289eaefef9b7dcf5234eaef83a437b235a1734218b39d"} Nov 29 08:02:56 crc kubenswrapper[4795]: I1129 08:02:56.964170 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-52w68" event={"ID":"8159fb0f-c9c2-40f0-9851-ed3b69d07813","Type":"ContainerDied","Data":"be13660f62b7cea3e7011c3e0998e3facaeb3b3161723ddbe615f3dcb7af6619"} Nov 29 08:02:56 crc kubenswrapper[4795]: I1129 08:02:56.964179 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-52w68" Nov 29 08:02:56 crc kubenswrapper[4795]: I1129 08:02:56.977099 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98nxw" event={"ID":"e3051c02-6211-46f9-a843-9ea284aa2b72","Type":"ContainerDied","Data":"89ed45f1b8ed81f1038620f3741cf59d347d80396d14af72035ad631c97f82cf"} Nov 29 08:02:56 crc kubenswrapper[4795]: I1129 08:02:56.977247 4795 scope.go:117] "RemoveContainer" containerID="c99084b0cecf35106fde8985c9da78a0c45f4be50196f0996a8fb98372b7b630" Nov 29 08:02:56 crc kubenswrapper[4795]: I1129 08:02:56.977491 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-98nxw" Nov 29 08:02:56 crc kubenswrapper[4795]: I1129 08:02:56.983620 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7995bbc7cc-g5h8j" event={"ID":"c039262e-df43-4664-af38-f0645377e9f6","Type":"ContainerStarted","Data":"111c5cb5bce96cad026cbe2d66fefd5cdfd3900581a76dda9ced625b352c14f9"} Nov 29 08:02:56 crc kubenswrapper[4795]: E1129 08:02:56.987227 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-8qck9" podUID="3632abbe-bcd4-4af0-a0b4-05a09b26ccd0" Nov 29 08:02:56 crc kubenswrapper[4795]: E1129 08:02:56.987395 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"observability-ui-dashboards\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:a69da8bbca8a28dd2925f864d51cc31cf761b10532c553095ba40b242ef701cb\\\"\"" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-q86dd" podUID="5194485f-a306-493d-a1a3-f33030371413" Nov 29 08:02:56 crc kubenswrapper[4795]: E1129 08:02:56.999102 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:1133c973c7472c665f910a722e19c8e2e27accb34b90fab67f14548627ce9c62\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="063373b7-6898-409d-b792-d770a8f6f021" Nov 29 08:02:57 crc kubenswrapper[4795]: I1129 08:02:57.111022 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-console/console-7995bbc7cc-g5h8j" podStartSLOduration=29.110990253 podStartE2EDuration="29.110990253s" podCreationTimestamp="2025-11-29 08:02:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:02:57.095164603 +0000 UTC m=+1423.070740393" watchObservedRunningTime="2025-11-29 08:02:57.110990253 +0000 UTC m=+1423.086566043" Nov 29 08:02:57 crc kubenswrapper[4795]: I1129 08:02:57.154908 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p25vt\" (UniqueName: \"kubernetes.io/projected/e3051c02-6211-46f9-a843-9ea284aa2b72-kube-api-access-p25vt\") pod \"e3051c02-6211-46f9-a843-9ea284aa2b72\" (UID: \"e3051c02-6211-46f9-a843-9ea284aa2b72\") " Nov 29 08:02:57 crc kubenswrapper[4795]: I1129 08:02:57.155074 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3051c02-6211-46f9-a843-9ea284aa2b72-catalog-content\") pod \"e3051c02-6211-46f9-a843-9ea284aa2b72\" (UID: \"e3051c02-6211-46f9-a843-9ea284aa2b72\") " Nov 29 08:02:57 crc kubenswrapper[4795]: I1129 08:02:57.155195 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3051c02-6211-46f9-a843-9ea284aa2b72-utilities\") pod \"e3051c02-6211-46f9-a843-9ea284aa2b72\" (UID: \"e3051c02-6211-46f9-a843-9ea284aa2b72\") " Nov 29 08:02:57 crc kubenswrapper[4795]: I1129 08:02:57.167894 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3051c02-6211-46f9-a843-9ea284aa2b72-kube-api-access-p25vt" (OuterVolumeSpecName: "kube-api-access-p25vt") pod "e3051c02-6211-46f9-a843-9ea284aa2b72" (UID: "e3051c02-6211-46f9-a843-9ea284aa2b72"). InnerVolumeSpecName "kube-api-access-p25vt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:02:57 crc kubenswrapper[4795]: I1129 08:02:57.182311 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3051c02-6211-46f9-a843-9ea284aa2b72-utilities" (OuterVolumeSpecName: "utilities") pod "e3051c02-6211-46f9-a843-9ea284aa2b72" (UID: "e3051c02-6211-46f9-a843-9ea284aa2b72"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:02:57 crc kubenswrapper[4795]: I1129 08:02:57.185680 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p25vt\" (UniqueName: \"kubernetes.io/projected/e3051c02-6211-46f9-a843-9ea284aa2b72-kube-api-access-p25vt\") on node \"crc\" DevicePath \"\"" Nov 29 08:02:57 crc kubenswrapper[4795]: I1129 08:02:57.219698 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-52w68"] Nov 29 08:02:57 crc kubenswrapper[4795]: I1129 08:02:57.229075 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-52w68"] Nov 29 08:02:57 crc kubenswrapper[4795]: I1129 08:02:57.293954 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3051c02-6211-46f9-a843-9ea284aa2b72-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:02:57 crc kubenswrapper[4795]: I1129 08:02:57.355733 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3051c02-6211-46f9-a843-9ea284aa2b72-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3051c02-6211-46f9-a843-9ea284aa2b72" (UID: "e3051c02-6211-46f9-a843-9ea284aa2b72"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:02:57 crc kubenswrapper[4795]: I1129 08:02:57.396234 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3051c02-6211-46f9-a843-9ea284aa2b72-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:02:57 crc kubenswrapper[4795]: I1129 08:02:57.490010 4795 scope.go:117] "RemoveContainer" containerID="61a9341113703b0ae330375c227bb721dda5ee9717950107916c6085012626a6" Nov 29 08:02:57 crc kubenswrapper[4795]: I1129 08:02:57.571144 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-rpwls" Nov 29 08:02:57 crc kubenswrapper[4795]: I1129 08:02:57.599652 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94t9h\" (UniqueName: \"kubernetes.io/projected/d8e7e29e-48b1-4cd6-bdb7-81e790e6d06b-kube-api-access-94t9h\") pod \"d8e7e29e-48b1-4cd6-bdb7-81e790e6d06b\" (UID: \"d8e7e29e-48b1-4cd6-bdb7-81e790e6d06b\") " Nov 29 08:02:57 crc kubenswrapper[4795]: I1129 08:02:57.599849 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8e7e29e-48b1-4cd6-bdb7-81e790e6d06b-config\") pod \"d8e7e29e-48b1-4cd6-bdb7-81e790e6d06b\" (UID: \"d8e7e29e-48b1-4cd6-bdb7-81e790e6d06b\") " Nov 29 08:02:57 crc kubenswrapper[4795]: I1129 08:02:57.600507 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8e7e29e-48b1-4cd6-bdb7-81e790e6d06b-config" (OuterVolumeSpecName: "config") pod "d8e7e29e-48b1-4cd6-bdb7-81e790e6d06b" (UID: "d8e7e29e-48b1-4cd6-bdb7-81e790e6d06b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:02:57 crc kubenswrapper[4795]: I1129 08:02:57.600836 4795 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8e7e29e-48b1-4cd6-bdb7-81e790e6d06b-config\") on node \"crc\" DevicePath \"\"" Nov 29 08:02:57 crc kubenswrapper[4795]: I1129 08:02:57.605024 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8e7e29e-48b1-4cd6-bdb7-81e790e6d06b-kube-api-access-94t9h" (OuterVolumeSpecName: "kube-api-access-94t9h") pod "d8e7e29e-48b1-4cd6-bdb7-81e790e6d06b" (UID: "d8e7e29e-48b1-4cd6-bdb7-81e790e6d06b"). InnerVolumeSpecName "kube-api-access-94t9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:02:57 crc kubenswrapper[4795]: I1129 08:02:57.625382 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-98nxw"] Nov 29 08:02:57 crc kubenswrapper[4795]: I1129 08:02:57.636359 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-98nxw"] Nov 29 08:02:57 crc kubenswrapper[4795]: I1129 08:02:57.703006 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94t9h\" (UniqueName: \"kubernetes.io/projected/d8e7e29e-48b1-4cd6-bdb7-81e790e6d06b-kube-api-access-94t9h\") on node \"crc\" DevicePath \"\"" Nov 29 08:02:57 crc kubenswrapper[4795]: I1129 08:02:57.732407 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-fjltq"] Nov 29 08:02:57 crc kubenswrapper[4795]: I1129 08:02:57.944493 4795 scope.go:117] "RemoveContainer" containerID="e4d7dae02db0f29b4fddd52f88248351b3ae5896234552592165aae941a5f18a" Nov 29 08:02:57 crc kubenswrapper[4795]: I1129 08:02:57.994191 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-fjltq" 
event={"ID":"e351bdf6-6e04-4bbd-bfae-e28c7bf2179f","Type":"ContainerStarted","Data":"c610460980294e0bf4c1b27b705df3deaec75d8a94f12454074208cfa8ae4a87"} Nov 29 08:02:57 crc kubenswrapper[4795]: I1129 08:02:57.995325 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-rpwls" event={"ID":"d8e7e29e-48b1-4cd6-bdb7-81e790e6d06b","Type":"ContainerDied","Data":"14bae4e53e3667f619bb4ea185678b3ec3aa757d8b8df76036de113020aec22c"} Nov 29 08:02:57 crc kubenswrapper[4795]: I1129 08:02:57.995524 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-rpwls" Nov 29 08:02:58 crc kubenswrapper[4795]: I1129 08:02:58.074224 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rpwls"] Nov 29 08:02:58 crc kubenswrapper[4795]: I1129 08:02:58.087526 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rpwls"] Nov 29 08:02:58 crc kubenswrapper[4795]: I1129 08:02:58.240857 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 29 08:02:58 crc kubenswrapper[4795]: I1129 08:02:58.289147 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8159fb0f-c9c2-40f0-9851-ed3b69d07813" path="/var/lib/kubelet/pods/8159fb0f-c9c2-40f0-9851-ed3b69d07813/volumes" Nov 29 08:02:58 crc kubenswrapper[4795]: I1129 08:02:58.289930 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8e7e29e-48b1-4cd6-bdb7-81e790e6d06b" path="/var/lib/kubelet/pods/d8e7e29e-48b1-4cd6-bdb7-81e790e6d06b/volumes" Nov 29 08:02:58 crc kubenswrapper[4795]: I1129 08:02:58.290580 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3051c02-6211-46f9-a843-9ea284aa2b72" path="/var/lib/kubelet/pods/e3051c02-6211-46f9-a843-9ea284aa2b72/volumes" Nov 29 08:02:58 crc kubenswrapper[4795]: W1129 08:02:58.342985 4795 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75918c04_f960_4321_8894_582921ced50d.slice/crio-29f87a58675ea1290272f2990568f5b2ddaf3b595f9916b4c4d0cf512097e215 WatchSource:0}: Error finding container 29f87a58675ea1290272f2990568f5b2ddaf3b595f9916b4c4d0cf512097e215: Status 404 returned error can't find the container with id 29f87a58675ea1290272f2990568f5b2ddaf3b595f9916b4c4d0cf512097e215 Nov 29 08:02:58 crc kubenswrapper[4795]: I1129 08:02:58.408583 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7995bbc7cc-g5h8j" Nov 29 08:02:58 crc kubenswrapper[4795]: I1129 08:02:58.408636 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7995bbc7cc-g5h8j" Nov 29 08:02:58 crc kubenswrapper[4795]: I1129 08:02:58.413501 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7995bbc7cc-g5h8j" Nov 29 08:02:58 crc kubenswrapper[4795]: I1129 08:02:58.684145 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 29 08:02:58 crc kubenswrapper[4795]: W1129 08:02:58.776868 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podacd08e2d_0e1b_473c_ae31_d63d742d2061.slice/crio-e9ee4eac4f003c75973cbea24ad1b6c3ce279eab899aff42685cb2381f7b3deb WatchSource:0}: Error finding container e9ee4eac4f003c75973cbea24ad1b6c3ce279eab899aff42685cb2381f7b3deb: Status 404 returned error can't find the container with id e9ee4eac4f003c75973cbea24ad1b6c3ce279eab899aff42685cb2381f7b3deb Nov 29 08:02:59 crc kubenswrapper[4795]: I1129 08:02:59.006830 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"acd08e2d-0e1b-473c-ae31-d63d742d2061","Type":"ContainerStarted","Data":"e9ee4eac4f003c75973cbea24ad1b6c3ce279eab899aff42685cb2381f7b3deb"} Nov 29 08:02:59 crc 
kubenswrapper[4795]: I1129 08:02:59.009621 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"75918c04-f960-4321-8894-582921ced50d","Type":"ContainerStarted","Data":"29f87a58675ea1290272f2990568f5b2ddaf3b595f9916b4c4d0cf512097e215"}
Nov 29 08:02:59 crc kubenswrapper[4795]: I1129 08:02:59.013991 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7995bbc7cc-g5h8j"
Nov 29 08:02:59 crc kubenswrapper[4795]: I1129 08:02:59.085984 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f5d67bfff-wl4rm"]
Nov 29 08:03:02 crc kubenswrapper[4795]: I1129 08:03:02.037217 4795 generic.go:334] "Generic (PLEG): container finished" podID="e351bdf6-6e04-4bbd-bfae-e28c7bf2179f" containerID="5cc3b0e1972e5ade768cbe8551b234840ab744d65f4045e54281d9a84c752017" exitCode=0
Nov 29 08:03:02 crc kubenswrapper[4795]: I1129 08:03:02.037431 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-fjltq" event={"ID":"e351bdf6-6e04-4bbd-bfae-e28c7bf2179f","Type":"ContainerDied","Data":"5cc3b0e1972e5ade768cbe8551b234840ab744d65f4045e54281d9a84c752017"}
Nov 29 08:03:02 crc kubenswrapper[4795]: I1129 08:03:02.040775 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-r5w67" event={"ID":"77e980be-cb41-448f-96d7-0c99fec4d400","Type":"ContainerStarted","Data":"66a2798efa79fe7e028969cf1553f6641d664e9d8f8d03404d28161b9d56f4af"}
Nov 29 08:03:02 crc kubenswrapper[4795]: I1129 08:03:02.041378 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-r5w67"
Nov 29 08:03:02 crc kubenswrapper[4795]: I1129 08:03:02.043359 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1bbcaf05-b2f0-4958-b862-50bb9d1f62b5","Type":"ContainerStarted","Data":"168e5b9f85111702a49a66146f29355f7dbb91a35c317f7e56c85dff748d0bf6"}
Nov 29 08:03:02 crc kubenswrapper[4795]: I1129 08:03:02.043526 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Nov 29 08:03:02 crc kubenswrapper[4795]: I1129 08:03:02.045640 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"acd08e2d-0e1b-473c-ae31-d63d742d2061","Type":"ContainerStarted","Data":"c30af32e1d9d9dbd1ecde3fdacdca39951cac5e39c3228936a621dbb53d6a9e9"}
Nov 29 08:03:02 crc kubenswrapper[4795]: I1129 08:03:02.048159 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"75918c04-f960-4321-8894-582921ced50d","Type":"ContainerStarted","Data":"6dd304f406f49c3344db5c77a82b010993fe5bd39dcbcacf102025a0a23f7884"}
Nov 29 08:03:02 crc kubenswrapper[4795]: I1129 08:03:02.049873 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ab20548b-7f96-4eb8-aa44-80425459c0ed","Type":"ContainerStarted","Data":"12a291948537acf71b7866714cca95db98985ecfef9c2695b00dee71b45cd09f"}
Nov 29 08:03:02 crc kubenswrapper[4795]: I1129 08:03:02.050453 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Nov 29 08:03:02 crc kubenswrapper[4795]: I1129 08:03:02.077500 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-r5w67" podStartSLOduration=26.628374184 podStartE2EDuration="31.07748442s" podCreationTimestamp="2025-11-29 08:02:31 +0000 UTC" firstStartedPulling="2025-11-29 08:02:56.357876447 +0000 UTC m=+1422.333452237" lastFinishedPulling="2025-11-29 08:03:00.806986683 +0000 UTC m=+1426.782562473" observedRunningTime="2025-11-29 08:03:02.076194563 +0000 UTC m=+1428.051770353" watchObservedRunningTime="2025-11-29 08:03:02.07748442 +0000 UTC m=+1428.053060210"
Nov 29 08:03:02 crc kubenswrapper[4795]: I1129 08:03:02.093049 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=3.177131702 podStartE2EDuration="38.093033222s" podCreationTimestamp="2025-11-29 08:02:24 +0000 UTC" firstStartedPulling="2025-11-29 08:02:25.890974309 +0000 UTC m=+1391.866550099" lastFinishedPulling="2025-11-29 08:03:00.806875829 +0000 UTC m=+1426.782451619" observedRunningTime="2025-11-29 08:03:02.091055136 +0000 UTC m=+1428.066630916" watchObservedRunningTime="2025-11-29 08:03:02.093033222 +0000 UTC m=+1428.068609012"
Nov 29 08:03:02 crc kubenswrapper[4795]: I1129 08:03:02.113883 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=12.7560362 podStartE2EDuration="36.113864975s" podCreationTimestamp="2025-11-29 08:02:26 +0000 UTC" firstStartedPulling="2025-11-29 08:02:36.947966836 +0000 UTC m=+1402.923542626" lastFinishedPulling="2025-11-29 08:03:00.305795611 +0000 UTC m=+1426.281371401" observedRunningTime="2025-11-29 08:03:02.112037753 +0000 UTC m=+1428.087613543" watchObservedRunningTime="2025-11-29 08:03:02.113864975 +0000 UTC m=+1428.089440775"
Nov 29 08:03:03 crc kubenswrapper[4795]: I1129 08:03:03.062569 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-fjltq" event={"ID":"e351bdf6-6e04-4bbd-bfae-e28c7bf2179f","Type":"ContainerStarted","Data":"26657f201fac849596a2f1ca0c790e592a4a1e6b70cf766d2e54c1987886b66c"}
Nov 29 08:03:03 crc kubenswrapper[4795]: I1129 08:03:03.062916 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-fjltq" event={"ID":"e351bdf6-6e04-4bbd-bfae-e28c7bf2179f","Type":"ContainerStarted","Data":"18d54416faa0b40f45220a6f9d945b5a8627e07958ac3349c00a187049f83075"}
Nov 29 08:03:03 crc kubenswrapper[4795]: I1129 08:03:03.062966 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-fjltq"
Nov 29 08:03:03 crc kubenswrapper[4795]: I1129 08:03:03.063015 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-fjltq"
Nov 29 08:03:03 crc kubenswrapper[4795]: I1129 08:03:03.066565 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"74169c45-99e0-4179-a18b-07a1c2cade8b","Type":"ContainerStarted","Data":"bf11edd6d9ce5ac04733b56bab1255c59a0c0bb035e4e46f3a8d314f0c4f8633"}
Nov 29 08:03:03 crc kubenswrapper[4795]: I1129 08:03:03.087320 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-fjltq" podStartSLOduration=29.125598625 podStartE2EDuration="32.087271435s" podCreationTimestamp="2025-11-29 08:02:31 +0000 UTC" firstStartedPulling="2025-11-29 08:02:57.84520942 +0000 UTC m=+1423.820785220" lastFinishedPulling="2025-11-29 08:03:00.80688224 +0000 UTC m=+1426.782458030" observedRunningTime="2025-11-29 08:03:03.084198087 +0000 UTC m=+1429.059773887" watchObservedRunningTime="2025-11-29 08:03:03.087271435 +0000 UTC m=+1429.062847225"
Nov 29 08:03:04 crc kubenswrapper[4795]: I1129 08:03:04.103392 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"28c13f65-78c4-4d4a-8960-7ef17a4c93e8","Type":"ContainerStarted","Data":"66c2994154c169b7826b57d071eca33a6effd691ef0e54ad3484e4595c873484"}
Nov 29 08:03:06 crc kubenswrapper[4795]: I1129 08:03:06.132996 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"acd08e2d-0e1b-473c-ae31-d63d742d2061","Type":"ContainerStarted","Data":"173dc160df284137cfe35c48e3e6b10e5f2e6b3557c41e49e530b8fbe4486f7e"}
Nov 29 08:03:06 crc kubenswrapper[4795]: I1129 08:03:06.141639 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"75918c04-f960-4321-8894-582921ced50d","Type":"ContainerStarted","Data":"0e3f1a3a1a1716d7cfe9456958dfd05b91294b6749f084c35385db86e1c2d140"}
Nov 29 08:03:06 crc kubenswrapper[4795]: I1129 08:03:06.167083 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=30.853811068 podStartE2EDuration="37.167057382s" podCreationTimestamp="2025-11-29 08:02:29 +0000 UTC" firstStartedPulling="2025-11-29 08:02:58.780388284 +0000 UTC m=+1424.755964074" lastFinishedPulling="2025-11-29 08:03:05.093634598 +0000 UTC m=+1431.069210388" observedRunningTime="2025-11-29 08:03:06.16136171 +0000 UTC m=+1432.136937510" watchObservedRunningTime="2025-11-29 08:03:06.167057382 +0000 UTC m=+1432.142633172"
Nov 29 08:03:06 crc kubenswrapper[4795]: I1129 08:03:06.185824 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=26.437286063 podStartE2EDuration="33.185802005s" podCreationTimestamp="2025-11-29 08:02:33 +0000 UTC" firstStartedPulling="2025-11-29 08:02:58.351762845 +0000 UTC m=+1424.327338635" lastFinishedPulling="2025-11-29 08:03:05.100278796 +0000 UTC m=+1431.075854577" observedRunningTime="2025-11-29 08:03:06.181911324 +0000 UTC m=+1432.157487114" watchObservedRunningTime="2025-11-29 08:03:06.185802005 +0000 UTC m=+1432.161377795"
Nov 29 08:03:06 crc kubenswrapper[4795]: I1129 08:03:06.907491 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Nov 29 08:03:06 crc kubenswrapper[4795]: I1129 08:03:06.989923 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.039229 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.151618 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a9c19857-e09b-4c26-bf5a-a64655eaa024","Type":"ContainerStarted","Data":"56e898a703e03ff2fade00fe18d84618fbb706c63bca621693c090cad3c3c4c3"}
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.152116 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.208335 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.444695 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8qck9"]
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.474525 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.504999 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-j2gnj"]
Nov 29 08:03:07 crc kubenswrapper[4795]: E1129 08:03:07.505481 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3051c02-6211-46f9-a843-9ea284aa2b72" containerName="registry-server"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.505498 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3051c02-6211-46f9-a843-9ea284aa2b72" containerName="registry-server"
Nov 29 08:03:07 crc kubenswrapper[4795]: E1129 08:03:07.505516 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3051c02-6211-46f9-a843-9ea284aa2b72" containerName="extract-utilities"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.505522 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3051c02-6211-46f9-a843-9ea284aa2b72" containerName="extract-utilities"
Nov 29 08:03:07 crc kubenswrapper[4795]: E1129 08:03:07.505534 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3051c02-6211-46f9-a843-9ea284aa2b72" containerName="extract-content"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.505540 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3051c02-6211-46f9-a843-9ea284aa2b72" containerName="extract-content"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.505762 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3051c02-6211-46f9-a843-9ea284aa2b72" containerName="registry-server"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.506467 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-j2gnj"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.511994 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.519873 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-wsrxw"]
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.523470 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-wsrxw"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.527081 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.541075 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-j2gnj"]
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.559566 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb13c276-73ed-4b9b-90fd-58d6ae6e4169-combined-ca-bundle\") pod \"ovn-controller-metrics-j2gnj\" (UID: \"fb13c276-73ed-4b9b-90fd-58d6ae6e4169\") " pod="openstack/ovn-controller-metrics-j2gnj"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.560253 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fp8f\" (UniqueName: \"kubernetes.io/projected/fb13c276-73ed-4b9b-90fd-58d6ae6e4169-kube-api-access-6fp8f\") pod \"ovn-controller-metrics-j2gnj\" (UID: \"fb13c276-73ed-4b9b-90fd-58d6ae6e4169\") " pod="openstack/ovn-controller-metrics-j2gnj"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.560619 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/fb13c276-73ed-4b9b-90fd-58d6ae6e4169-ovn-rundir\") pod \"ovn-controller-metrics-j2gnj\" (UID: \"fb13c276-73ed-4b9b-90fd-58d6ae6e4169\") " pod="openstack/ovn-controller-metrics-j2gnj"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.563958 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/fb13c276-73ed-4b9b-90fd-58d6ae6e4169-ovs-rundir\") pod \"ovn-controller-metrics-j2gnj\" (UID: \"fb13c276-73ed-4b9b-90fd-58d6ae6e4169\") " pod="openstack/ovn-controller-metrics-j2gnj"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.564417 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb13c276-73ed-4b9b-90fd-58d6ae6e4169-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-j2gnj\" (UID: \"fb13c276-73ed-4b9b-90fd-58d6ae6e4169\") " pod="openstack/ovn-controller-metrics-j2gnj"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.564814 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb13c276-73ed-4b9b-90fd-58d6ae6e4169-config\") pod \"ovn-controller-metrics-j2gnj\" (UID: \"fb13c276-73ed-4b9b-90fd-58d6ae6e4169\") " pod="openstack/ovn-controller-metrics-j2gnj"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.569661 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-wsrxw"]
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.590126 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.667938 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb13c276-73ed-4b9b-90fd-58d6ae6e4169-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-j2gnj\" (UID: \"fb13c276-73ed-4b9b-90fd-58d6ae6e4169\") " pod="openstack/ovn-controller-metrics-j2gnj"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.668123 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb13c276-73ed-4b9b-90fd-58d6ae6e4169-config\") pod \"ovn-controller-metrics-j2gnj\" (UID: \"fb13c276-73ed-4b9b-90fd-58d6ae6e4169\") " pod="openstack/ovn-controller-metrics-j2gnj"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.671361 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb13c276-73ed-4b9b-90fd-58d6ae6e4169-config\") pod \"ovn-controller-metrics-j2gnj\" (UID: \"fb13c276-73ed-4b9b-90fd-58d6ae6e4169\") " pod="openstack/ovn-controller-metrics-j2gnj"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.671520 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb13c276-73ed-4b9b-90fd-58d6ae6e4169-combined-ca-bundle\") pod \"ovn-controller-metrics-j2gnj\" (UID: \"fb13c276-73ed-4b9b-90fd-58d6ae6e4169\") " pod="openstack/ovn-controller-metrics-j2gnj"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.671711 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/335e92c9-b508-4368-9af8-55dc11bad481-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-wsrxw\" (UID: \"335e92c9-b508-4368-9af8-55dc11bad481\") " pod="openstack/dnsmasq-dns-5bf47b49b7-wsrxw"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.671805 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/335e92c9-b508-4368-9af8-55dc11bad481-config\") pod \"dnsmasq-dns-5bf47b49b7-wsrxw\" (UID: \"335e92c9-b508-4368-9af8-55dc11bad481\") " pod="openstack/dnsmasq-dns-5bf47b49b7-wsrxw"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.674256 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fp8f\" (UniqueName: \"kubernetes.io/projected/fb13c276-73ed-4b9b-90fd-58d6ae6e4169-kube-api-access-6fp8f\") pod \"ovn-controller-metrics-j2gnj\" (UID: \"fb13c276-73ed-4b9b-90fd-58d6ae6e4169\") " pod="openstack/ovn-controller-metrics-j2gnj"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.674543 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj4jm\" (UniqueName: \"kubernetes.io/projected/335e92c9-b508-4368-9af8-55dc11bad481-kube-api-access-jj4jm\") pod \"dnsmasq-dns-5bf47b49b7-wsrxw\" (UID: \"335e92c9-b508-4368-9af8-55dc11bad481\") " pod="openstack/dnsmasq-dns-5bf47b49b7-wsrxw"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.674575 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/fb13c276-73ed-4b9b-90fd-58d6ae6e4169-ovn-rundir\") pod \"ovn-controller-metrics-j2gnj\" (UID: \"fb13c276-73ed-4b9b-90fd-58d6ae6e4169\") " pod="openstack/ovn-controller-metrics-j2gnj"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.676175 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/fb13c276-73ed-4b9b-90fd-58d6ae6e4169-ovn-rundir\") pod \"ovn-controller-metrics-j2gnj\" (UID: \"fb13c276-73ed-4b9b-90fd-58d6ae6e4169\") " pod="openstack/ovn-controller-metrics-j2gnj"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.676311 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/fb13c276-73ed-4b9b-90fd-58d6ae6e4169-ovs-rundir\") pod \"ovn-controller-metrics-j2gnj\" (UID: \"fb13c276-73ed-4b9b-90fd-58d6ae6e4169\") " pod="openstack/ovn-controller-metrics-j2gnj"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.676372 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/fb13c276-73ed-4b9b-90fd-58d6ae6e4169-ovs-rundir\") pod \"ovn-controller-metrics-j2gnj\" (UID: \"fb13c276-73ed-4b9b-90fd-58d6ae6e4169\") " pod="openstack/ovn-controller-metrics-j2gnj"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.676511 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/335e92c9-b508-4368-9af8-55dc11bad481-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-wsrxw\" (UID: \"335e92c9-b508-4368-9af8-55dc11bad481\") " pod="openstack/dnsmasq-dns-5bf47b49b7-wsrxw"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.680242 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb13c276-73ed-4b9b-90fd-58d6ae6e4169-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-j2gnj\" (UID: \"fb13c276-73ed-4b9b-90fd-58d6ae6e4169\") " pod="openstack/ovn-controller-metrics-j2gnj"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.681241 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb13c276-73ed-4b9b-90fd-58d6ae6e4169-combined-ca-bundle\") pod \"ovn-controller-metrics-j2gnj\" (UID: \"fb13c276-73ed-4b9b-90fd-58d6ae6e4169\") " pod="openstack/ovn-controller-metrics-j2gnj"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.700268 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fp8f\" (UniqueName: \"kubernetes.io/projected/fb13c276-73ed-4b9b-90fd-58d6ae6e4169-kube-api-access-6fp8f\") pod \"ovn-controller-metrics-j2gnj\" (UID: \"fb13c276-73ed-4b9b-90fd-58d6ae6e4169\") " pod="openstack/ovn-controller-metrics-j2gnj"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.717955 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2dl4b"]
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.777660 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-w6vqh"]
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.778009 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/335e92c9-b508-4368-9af8-55dc11bad481-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-wsrxw\" (UID: \"335e92c9-b508-4368-9af8-55dc11bad481\") " pod="openstack/dnsmasq-dns-5bf47b49b7-wsrxw"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.778153 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/335e92c9-b508-4368-9af8-55dc11bad481-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-wsrxw\" (UID: \"335e92c9-b508-4368-9af8-55dc11bad481\") " pod="openstack/dnsmasq-dns-5bf47b49b7-wsrxw"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.778193 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/335e92c9-b508-4368-9af8-55dc11bad481-config\") pod \"dnsmasq-dns-5bf47b49b7-wsrxw\" (UID: \"335e92c9-b508-4368-9af8-55dc11bad481\") " pod="openstack/dnsmasq-dns-5bf47b49b7-wsrxw"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.778253 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj4jm\" (UniqueName: \"kubernetes.io/projected/335e92c9-b508-4368-9af8-55dc11bad481-kube-api-access-jj4jm\") pod \"dnsmasq-dns-5bf47b49b7-wsrxw\" (UID: \"335e92c9-b508-4368-9af8-55dc11bad481\") " pod="openstack/dnsmasq-dns-5bf47b49b7-wsrxw"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.779247 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/335e92c9-b508-4368-9af8-55dc11bad481-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-wsrxw\" (UID: \"335e92c9-b508-4368-9af8-55dc11bad481\") " pod="openstack/dnsmasq-dns-5bf47b49b7-wsrxw"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.779356 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-w6vqh"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.791617 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/335e92c9-b508-4368-9af8-55dc11bad481-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-wsrxw\" (UID: \"335e92c9-b508-4368-9af8-55dc11bad481\") " pod="openstack/dnsmasq-dns-5bf47b49b7-wsrxw"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.792437 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/335e92c9-b508-4368-9af8-55dc11bad481-config\") pod \"dnsmasq-dns-5bf47b49b7-wsrxw\" (UID: \"335e92c9-b508-4368-9af8-55dc11bad481\") " pod="openstack/dnsmasq-dns-5bf47b49b7-wsrxw"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.792924 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.809540 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-w6vqh"]
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.848429 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj4jm\" (UniqueName: \"kubernetes.io/projected/335e92c9-b508-4368-9af8-55dc11bad481-kube-api-access-jj4jm\") pod \"dnsmasq-dns-5bf47b49b7-wsrxw\" (UID: \"335e92c9-b508-4368-9af8-55dc11bad481\") " pod="openstack/dnsmasq-dns-5bf47b49b7-wsrxw"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.855202 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-j2gnj"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.881045 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft8jj\" (UniqueName: \"kubernetes.io/projected/f0f88b89-3636-4ee0-aa86-efde063b436c-kube-api-access-ft8jj\") pod \"dnsmasq-dns-8554648995-w6vqh\" (UID: \"f0f88b89-3636-4ee0-aa86-efde063b436c\") " pod="openstack/dnsmasq-dns-8554648995-w6vqh"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.881249 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0f88b89-3636-4ee0-aa86-efde063b436c-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-w6vqh\" (UID: \"f0f88b89-3636-4ee0-aa86-efde063b436c\") " pod="openstack/dnsmasq-dns-8554648995-w6vqh"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.881347 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0f88b89-3636-4ee0-aa86-efde063b436c-dns-svc\") pod \"dnsmasq-dns-8554648995-w6vqh\" (UID: \"f0f88b89-3636-4ee0-aa86-efde063b436c\") " pod="openstack/dnsmasq-dns-8554648995-w6vqh"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.881475 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0f88b89-3636-4ee0-aa86-efde063b436c-config\") pod \"dnsmasq-dns-8554648995-w6vqh\" (UID: \"f0f88b89-3636-4ee0-aa86-efde063b436c\") " pod="openstack/dnsmasq-dns-8554648995-w6vqh"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.881608 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0f88b89-3636-4ee0-aa86-efde063b436c-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-w6vqh\" (UID: \"f0f88b89-3636-4ee0-aa86-efde063b436c\") " pod="openstack/dnsmasq-dns-8554648995-w6vqh"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.892635 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-wsrxw"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.964227 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-8qck9"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.983746 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0f88b89-3636-4ee0-aa86-efde063b436c-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-w6vqh\" (UID: \"f0f88b89-3636-4ee0-aa86-efde063b436c\") " pod="openstack/dnsmasq-dns-8554648995-w6vqh"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.984976 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0f88b89-3636-4ee0-aa86-efde063b436c-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-w6vqh\" (UID: \"f0f88b89-3636-4ee0-aa86-efde063b436c\") " pod="openstack/dnsmasq-dns-8554648995-w6vqh"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.986118 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft8jj\" (UniqueName: \"kubernetes.io/projected/f0f88b89-3636-4ee0-aa86-efde063b436c-kube-api-access-ft8jj\") pod \"dnsmasq-dns-8554648995-w6vqh\" (UID: \"f0f88b89-3636-4ee0-aa86-efde063b436c\") " pod="openstack/dnsmasq-dns-8554648995-w6vqh"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.986366 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0f88b89-3636-4ee0-aa86-efde063b436c-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-w6vqh\" (UID: \"f0f88b89-3636-4ee0-aa86-efde063b436c\") " pod="openstack/dnsmasq-dns-8554648995-w6vqh"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.987795 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0f88b89-3636-4ee0-aa86-efde063b436c-dns-svc\") pod \"dnsmasq-dns-8554648995-w6vqh\" (UID: \"f0f88b89-3636-4ee0-aa86-efde063b436c\") " pod="openstack/dnsmasq-dns-8554648995-w6vqh"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.988866 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0f88b89-3636-4ee0-aa86-efde063b436c-config\") pod \"dnsmasq-dns-8554648995-w6vqh\" (UID: \"f0f88b89-3636-4ee0-aa86-efde063b436c\") " pod="openstack/dnsmasq-dns-8554648995-w6vqh"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.987698 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0f88b89-3636-4ee0-aa86-efde063b436c-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-w6vqh\" (UID: \"f0f88b89-3636-4ee0-aa86-efde063b436c\") " pod="openstack/dnsmasq-dns-8554648995-w6vqh"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.989889 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0f88b89-3636-4ee0-aa86-efde063b436c-config\") pod \"dnsmasq-dns-8554648995-w6vqh\" (UID: \"f0f88b89-3636-4ee0-aa86-efde063b436c\") " pod="openstack/dnsmasq-dns-8554648995-w6vqh"
Nov 29 08:03:07 crc kubenswrapper[4795]: I1129 08:03:07.990054 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0f88b89-3636-4ee0-aa86-efde063b436c-dns-svc\") pod \"dnsmasq-dns-8554648995-w6vqh\" (UID: \"f0f88b89-3636-4ee0-aa86-efde063b436c\") " pod="openstack/dnsmasq-dns-8554648995-w6vqh"
Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.019520 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft8jj\" (UniqueName: \"kubernetes.io/projected/f0f88b89-3636-4ee0-aa86-efde063b436c-kube-api-access-ft8jj\") pod \"dnsmasq-dns-8554648995-w6vqh\" (UID: \"f0f88b89-3636-4ee0-aa86-efde063b436c\") " pod="openstack/dnsmasq-dns-8554648995-w6vqh"
Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.090531 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dckt\" (UniqueName: \"kubernetes.io/projected/3632abbe-bcd4-4af0-a0b4-05a09b26ccd0-kube-api-access-2dckt\") pod \"3632abbe-bcd4-4af0-a0b4-05a09b26ccd0\" (UID: \"3632abbe-bcd4-4af0-a0b4-05a09b26ccd0\") "
Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.090651 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3632abbe-bcd4-4af0-a0b4-05a09b26ccd0-config\") pod \"3632abbe-bcd4-4af0-a0b4-05a09b26ccd0\" (UID: \"3632abbe-bcd4-4af0-a0b4-05a09b26ccd0\") "
Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.090840 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3632abbe-bcd4-4af0-a0b4-05a09b26ccd0-dns-svc\") pod \"3632abbe-bcd4-4af0-a0b4-05a09b26ccd0\" (UID: \"3632abbe-bcd4-4af0-a0b4-05a09b26ccd0\") "
Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.091229 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3632abbe-bcd4-4af0-a0b4-05a09b26ccd0-config" (OuterVolumeSpecName: "config") pod "3632abbe-bcd4-4af0-a0b4-05a09b26ccd0" (UID: "3632abbe-bcd4-4af0-a0b4-05a09b26ccd0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.094503 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3632abbe-bcd4-4af0-a0b4-05a09b26ccd0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3632abbe-bcd4-4af0-a0b4-05a09b26ccd0" (UID: "3632abbe-bcd4-4af0-a0b4-05a09b26ccd0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.097475 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3632abbe-bcd4-4af0-a0b4-05a09b26ccd0-kube-api-access-2dckt" (OuterVolumeSpecName: "kube-api-access-2dckt") pod "3632abbe-bcd4-4af0-a0b4-05a09b26ccd0" (UID: "3632abbe-bcd4-4af0-a0b4-05a09b26ccd0"). InnerVolumeSpecName "kube-api-access-2dckt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.144110 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-w6vqh"
Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.170626 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-8qck9"
Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.170808 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-8qck9" event={"ID":"3632abbe-bcd4-4af0-a0b4-05a09b26ccd0","Type":"ContainerDied","Data":"5dd44097ed8b8bb42b31f7e975699df50ab893433863ce491cb084dac3608ecc"}
Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.172034 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.196935 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dckt\" (UniqueName: \"kubernetes.io/projected/3632abbe-bcd4-4af0-a0b4-05a09b26ccd0-kube-api-access-2dckt\") on node \"crc\" DevicePath \"\""
Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.197001 4795 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3632abbe-bcd4-4af0-a0b4-05a09b26ccd0-config\") on node \"crc\" DevicePath \"\""
Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.197017 4795 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3632abbe-bcd4-4af0-a0b4-05a09b26ccd0-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.267880 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.296576 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8qck9"]
Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.326741 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8qck9"]
Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.446149 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-j2gnj"]
Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.487183 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.494616 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.497149 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.499951 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.500222 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-p4nmz"
Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.500383 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.514813 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.602244 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-wsrxw"]
Nov 29 08:03:08 crc kubenswrapper[4795]: W1129 08:03:08.605970 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod335e92c9_b508_4368_9af8_55dc11bad481.slice/crio-29bfd4b54d167e622cac75a36b9c83438af8082f09530d927ca4283da4ef2454 WatchSource:0}: Error finding container 29bfd4b54d167e622cac75a36b9c83438af8082f09530d927ca4283da4ef2454: Status 404 returned error can't find the container with id 29bfd4b54d167e622cac75a36b9c83438af8082f09530d927ca4283da4ef2454
Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.613207 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for
volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f2bc988-6251-4a6d-95ab-8610dc2a2650-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3f2bc988-6251-4a6d-95ab-8610dc2a2650\") " pod="openstack/ovn-northd-0" Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.613278 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-626q6\" (UniqueName: \"kubernetes.io/projected/3f2bc988-6251-4a6d-95ab-8610dc2a2650-kube-api-access-626q6\") pod \"ovn-northd-0\" (UID: \"3f2bc988-6251-4a6d-95ab-8610dc2a2650\") " pod="openstack/ovn-northd-0" Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.613347 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f2bc988-6251-4a6d-95ab-8610dc2a2650-scripts\") pod \"ovn-northd-0\" (UID: \"3f2bc988-6251-4a6d-95ab-8610dc2a2650\") " pod="openstack/ovn-northd-0" Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.613376 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3f2bc988-6251-4a6d-95ab-8610dc2a2650-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3f2bc988-6251-4a6d-95ab-8610dc2a2650\") " pod="openstack/ovn-northd-0" Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.613448 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f2bc988-6251-4a6d-95ab-8610dc2a2650-config\") pod \"ovn-northd-0\" (UID: \"3f2bc988-6251-4a6d-95ab-8610dc2a2650\") " pod="openstack/ovn-northd-0" Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.613487 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f2bc988-6251-4a6d-95ab-8610dc2a2650-combined-ca-bundle\") pod 
\"ovn-northd-0\" (UID: \"3f2bc988-6251-4a6d-95ab-8610dc2a2650\") " pod="openstack/ovn-northd-0" Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.613511 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f2bc988-6251-4a6d-95ab-8610dc2a2650-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3f2bc988-6251-4a6d-95ab-8610dc2a2650\") " pod="openstack/ovn-northd-0" Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.715244 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f2bc988-6251-4a6d-95ab-8610dc2a2650-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3f2bc988-6251-4a6d-95ab-8610dc2a2650\") " pod="openstack/ovn-northd-0" Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.715338 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-626q6\" (UniqueName: \"kubernetes.io/projected/3f2bc988-6251-4a6d-95ab-8610dc2a2650-kube-api-access-626q6\") pod \"ovn-northd-0\" (UID: \"3f2bc988-6251-4a6d-95ab-8610dc2a2650\") " pod="openstack/ovn-northd-0" Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.715407 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f2bc988-6251-4a6d-95ab-8610dc2a2650-scripts\") pod \"ovn-northd-0\" (UID: \"3f2bc988-6251-4a6d-95ab-8610dc2a2650\") " pod="openstack/ovn-northd-0" Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.715440 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3f2bc988-6251-4a6d-95ab-8610dc2a2650-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3f2bc988-6251-4a6d-95ab-8610dc2a2650\") " pod="openstack/ovn-northd-0" Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.715491 4795 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f2bc988-6251-4a6d-95ab-8610dc2a2650-config\") pod \"ovn-northd-0\" (UID: \"3f2bc988-6251-4a6d-95ab-8610dc2a2650\") " pod="openstack/ovn-northd-0" Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.715532 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f2bc988-6251-4a6d-95ab-8610dc2a2650-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3f2bc988-6251-4a6d-95ab-8610dc2a2650\") " pod="openstack/ovn-northd-0" Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.715555 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f2bc988-6251-4a6d-95ab-8610dc2a2650-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3f2bc988-6251-4a6d-95ab-8610dc2a2650\") " pod="openstack/ovn-northd-0" Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.716381 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f2bc988-6251-4a6d-95ab-8610dc2a2650-scripts\") pod \"ovn-northd-0\" (UID: \"3f2bc988-6251-4a6d-95ab-8610dc2a2650\") " pod="openstack/ovn-northd-0" Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.716712 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f2bc988-6251-4a6d-95ab-8610dc2a2650-config\") pod \"ovn-northd-0\" (UID: \"3f2bc988-6251-4a6d-95ab-8610dc2a2650\") " pod="openstack/ovn-northd-0" Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.716839 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3f2bc988-6251-4a6d-95ab-8610dc2a2650-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3f2bc988-6251-4a6d-95ab-8610dc2a2650\") " 
pod="openstack/ovn-northd-0" Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.721954 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f2bc988-6251-4a6d-95ab-8610dc2a2650-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3f2bc988-6251-4a6d-95ab-8610dc2a2650\") " pod="openstack/ovn-northd-0" Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.722580 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f2bc988-6251-4a6d-95ab-8610dc2a2650-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3f2bc988-6251-4a6d-95ab-8610dc2a2650\") " pod="openstack/ovn-northd-0" Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.727043 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f2bc988-6251-4a6d-95ab-8610dc2a2650-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3f2bc988-6251-4a6d-95ab-8610dc2a2650\") " pod="openstack/ovn-northd-0" Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.739814 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-626q6\" (UniqueName: \"kubernetes.io/projected/3f2bc988-6251-4a6d-95ab-8610dc2a2650-kube-api-access-626q6\") pod \"ovn-northd-0\" (UID: \"3f2bc988-6251-4a6d-95ab-8610dc2a2650\") " pod="openstack/ovn-northd-0" Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.821886 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 29 08:03:08 crc kubenswrapper[4795]: I1129 08:03:08.852259 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-w6vqh"] Nov 29 08:03:08 crc kubenswrapper[4795]: W1129 08:03:08.854170 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0f88b89_3636_4ee0_aa86_efde063b436c.slice/crio-a89bdfb4df9d950a0df410e2bbde2be8a8bfd93242b165ad6151d40e8cb4cf5b WatchSource:0}: Error finding container a89bdfb4df9d950a0df410e2bbde2be8a8bfd93242b165ad6151d40e8cb4cf5b: Status 404 returned error can't find the container with id a89bdfb4df9d950a0df410e2bbde2be8a8bfd93242b165ad6151d40e8cb4cf5b Nov 29 08:03:09 crc kubenswrapper[4795]: I1129 08:03:09.181157 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-w6vqh" event={"ID":"f0f88b89-3636-4ee0-aa86-efde063b436c","Type":"ContainerStarted","Data":"a89bdfb4df9d950a0df410e2bbde2be8a8bfd93242b165ad6151d40e8cb4cf5b"} Nov 29 08:03:09 crc kubenswrapper[4795]: I1129 08:03:09.182578 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-wsrxw" event={"ID":"335e92c9-b508-4368-9af8-55dc11bad481","Type":"ContainerStarted","Data":"29bfd4b54d167e622cac75a36b9c83438af8082f09530d927ca4283da4ef2454"} Nov 29 08:03:09 crc kubenswrapper[4795]: I1129 08:03:09.183524 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-j2gnj" event={"ID":"fb13c276-73ed-4b9b-90fd-58d6ae6e4169","Type":"ContainerStarted","Data":"b0385afc3f6de5c3653c86e0ef34ef1118900cd457823564055c5a0008aa29d8"} Nov 29 08:03:09 crc kubenswrapper[4795]: I1129 08:03:09.781892 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 29 08:03:10 crc kubenswrapper[4795]: I1129 08:03:10.196756 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovn-northd-0"] Nov 29 08:03:10 crc kubenswrapper[4795]: I1129 08:03:10.288092 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3632abbe-bcd4-4af0-a0b4-05a09b26ccd0" path="/var/lib/kubelet/pods/3632abbe-bcd4-4af0-a0b4-05a09b26ccd0/volumes" Nov 29 08:03:11 crc kubenswrapper[4795]: I1129 08:03:11.202680 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3f2bc988-6251-4a6d-95ab-8610dc2a2650","Type":"ContainerStarted","Data":"2bc96d8d1c130844f0316137afe3ef0f99623fa0d80ff9f2a78131a73a7023c6"} Nov 29 08:03:11 crc kubenswrapper[4795]: E1129 08:03:11.857211 4795 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0f88b89_3636_4ee0_aa86_efde063b436c.slice/crio-conmon-b55780aa8cf39df5e025a188977dd1e51851207bd7065b6abcbe109a9c70a483.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod335e92c9_b508_4368_9af8_55dc11bad481.slice/crio-conmon-d01f260afa20d9fa7953b4dd13f99bf7a1f827d4bc2aac9549a7d4d3d7c262eb.scope\": RecentStats: unable to find data in memory cache]" Nov 29 08:03:12 crc kubenswrapper[4795]: I1129 08:03:12.212715 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-j2gnj" event={"ID":"fb13c276-73ed-4b9b-90fd-58d6ae6e4169","Type":"ContainerStarted","Data":"b9915a599b71e56367548a8c8ac45b292737b86216c21090b7ec881abd1d5a2f"} Nov 29 08:03:12 crc kubenswrapper[4795]: I1129 08:03:12.214681 4795 generic.go:334] "Generic (PLEG): container finished" podID="d4434e71-afe0-404d-aeec-6c080ff6a767" containerID="d8454a1c0cfd35dbfbbec4418938e7484dce37bbc91294d902962b94ee2e40c8" exitCode=0 Nov 29 08:03:12 crc kubenswrapper[4795]: I1129 08:03:12.214719 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-2dl4b" 
event={"ID":"d4434e71-afe0-404d-aeec-6c080ff6a767","Type":"ContainerDied","Data":"d8454a1c0cfd35dbfbbec4418938e7484dce37bbc91294d902962b94ee2e40c8"} Nov 29 08:03:12 crc kubenswrapper[4795]: I1129 08:03:12.217227 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a2fd879b-6f46-437e-acf0-c60e879af239","Type":"ContainerStarted","Data":"93e30860a937742005cfd2320198062ed8b6279529facfd5831eaa6b6f348e24"} Nov 29 08:03:12 crc kubenswrapper[4795]: I1129 08:03:12.220201 4795 generic.go:334] "Generic (PLEG): container finished" podID="f0f88b89-3636-4ee0-aa86-efde063b436c" containerID="b55780aa8cf39df5e025a188977dd1e51851207bd7065b6abcbe109a9c70a483" exitCode=0 Nov 29 08:03:12 crc kubenswrapper[4795]: I1129 08:03:12.220236 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-w6vqh" event={"ID":"f0f88b89-3636-4ee0-aa86-efde063b436c","Type":"ContainerDied","Data":"b55780aa8cf39df5e025a188977dd1e51851207bd7065b6abcbe109a9c70a483"} Nov 29 08:03:12 crc kubenswrapper[4795]: I1129 08:03:12.222278 4795 generic.go:334] "Generic (PLEG): container finished" podID="335e92c9-b508-4368-9af8-55dc11bad481" containerID="d01f260afa20d9fa7953b4dd13f99bf7a1f827d4bc2aac9549a7d4d3d7c262eb" exitCode=0 Nov 29 08:03:12 crc kubenswrapper[4795]: I1129 08:03:12.222321 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-wsrxw" event={"ID":"335e92c9-b508-4368-9af8-55dc11bad481","Type":"ContainerDied","Data":"d01f260afa20d9fa7953b4dd13f99bf7a1f827d4bc2aac9549a7d4d3d7c262eb"} Nov 29 08:03:12 crc kubenswrapper[4795]: I1129 08:03:12.247182 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-j2gnj" podStartSLOduration=5.247161956 podStartE2EDuration="5.247161956s" podCreationTimestamp="2025-11-29 08:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-29 08:03:12.231202822 +0000 UTC m=+1438.206778612" watchObservedRunningTime="2025-11-29 08:03:12.247161956 +0000 UTC m=+1438.222737756" Nov 29 08:03:12 crc kubenswrapper[4795]: I1129 08:03:12.684207 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2dl4b" Nov 29 08:03:12 crc kubenswrapper[4795]: I1129 08:03:12.816949 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4434e71-afe0-404d-aeec-6c080ff6a767-config\") pod \"d4434e71-afe0-404d-aeec-6c080ff6a767\" (UID: \"d4434e71-afe0-404d-aeec-6c080ff6a767\") " Nov 29 08:03:12 crc kubenswrapper[4795]: I1129 08:03:12.817012 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4434e71-afe0-404d-aeec-6c080ff6a767-dns-svc\") pod \"d4434e71-afe0-404d-aeec-6c080ff6a767\" (UID: \"d4434e71-afe0-404d-aeec-6c080ff6a767\") " Nov 29 08:03:12 crc kubenswrapper[4795]: I1129 08:03:12.817095 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jd9s\" (UniqueName: \"kubernetes.io/projected/d4434e71-afe0-404d-aeec-6c080ff6a767-kube-api-access-9jd9s\") pod \"d4434e71-afe0-404d-aeec-6c080ff6a767\" (UID: \"d4434e71-afe0-404d-aeec-6c080ff6a767\") " Nov 29 08:03:12 crc kubenswrapper[4795]: I1129 08:03:12.828923 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4434e71-afe0-404d-aeec-6c080ff6a767-kube-api-access-9jd9s" (OuterVolumeSpecName: "kube-api-access-9jd9s") pod "d4434e71-afe0-404d-aeec-6c080ff6a767" (UID: "d4434e71-afe0-404d-aeec-6c080ff6a767"). InnerVolumeSpecName "kube-api-access-9jd9s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:03:12 crc kubenswrapper[4795]: I1129 08:03:12.846474 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4434e71-afe0-404d-aeec-6c080ff6a767-config" (OuterVolumeSpecName: "config") pod "d4434e71-afe0-404d-aeec-6c080ff6a767" (UID: "d4434e71-afe0-404d-aeec-6c080ff6a767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:03:12 crc kubenswrapper[4795]: I1129 08:03:12.850379 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4434e71-afe0-404d-aeec-6c080ff6a767-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d4434e71-afe0-404d-aeec-6c080ff6a767" (UID: "d4434e71-afe0-404d-aeec-6c080ff6a767"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:03:12 crc kubenswrapper[4795]: I1129 08:03:12.920486 4795 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4434e71-afe0-404d-aeec-6c080ff6a767-config\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:12 crc kubenswrapper[4795]: I1129 08:03:12.920789 4795 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4434e71-afe0-404d-aeec-6c080ff6a767-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:12 crc kubenswrapper[4795]: I1129 08:03:12.920881 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jd9s\" (UniqueName: \"kubernetes.io/projected/d4434e71-afe0-404d-aeec-6c080ff6a767-kube-api-access-9jd9s\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:13 crc kubenswrapper[4795]: I1129 08:03:13.246017 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-w6vqh" event={"ID":"f0f88b89-3636-4ee0-aa86-efde063b436c","Type":"ContainerStarted","Data":"98b41368382e1e2ce54e940b7643a4327e50b8549c81d039d4ab306d0878c119"} 
Nov 29 08:03:13 crc kubenswrapper[4795]: I1129 08:03:13.246326 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-w6vqh" Nov 29 08:03:13 crc kubenswrapper[4795]: I1129 08:03:13.248093 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-wsrxw" event={"ID":"335e92c9-b508-4368-9af8-55dc11bad481","Type":"ContainerStarted","Data":"3098c8cf78cc59518136772df729db3d4678cb6e158ec7c196a090461cfb7ae6"} Nov 29 08:03:13 crc kubenswrapper[4795]: I1129 08:03:13.248575 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-wsrxw" Nov 29 08:03:13 crc kubenswrapper[4795]: I1129 08:03:13.251759 4795 generic.go:334] "Generic (PLEG): container finished" podID="a9c19857-e09b-4c26-bf5a-a64655eaa024" containerID="56e898a703e03ff2fade00fe18d84618fbb706c63bca621693c090cad3c3c4c3" exitCode=0 Nov 29 08:03:13 crc kubenswrapper[4795]: I1129 08:03:13.251829 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a9c19857-e09b-4c26-bf5a-a64655eaa024","Type":"ContainerDied","Data":"56e898a703e03ff2fade00fe18d84618fbb706c63bca621693c090cad3c3c4c3"} Nov 29 08:03:13 crc kubenswrapper[4795]: I1129 08:03:13.254946 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2dl4b" Nov 29 08:03:13 crc kubenswrapper[4795]: I1129 08:03:13.255139 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-2dl4b" event={"ID":"d4434e71-afe0-404d-aeec-6c080ff6a767","Type":"ContainerDied","Data":"65b06c8127f9e1fed8aa26de0e17e9ea3223119c5a50737b0c13dd0d6cbf3e42"} Nov 29 08:03:13 crc kubenswrapper[4795]: I1129 08:03:13.255188 4795 scope.go:117] "RemoveContainer" containerID="d8454a1c0cfd35dbfbbec4418938e7484dce37bbc91294d902962b94ee2e40c8" Nov 29 08:03:13 crc kubenswrapper[4795]: I1129 08:03:13.260533 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3f2bc988-6251-4a6d-95ab-8610dc2a2650","Type":"ContainerStarted","Data":"05c678e1fecaa21f3cd753cac2793af727cef8ea3546efcd098ac604749691fc"} Nov 29 08:03:13 crc kubenswrapper[4795]: I1129 08:03:13.282461 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-w6vqh" podStartSLOduration=6.282439295 podStartE2EDuration="6.282439295s" podCreationTimestamp="2025-11-29 08:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:03:13.269141587 +0000 UTC m=+1439.244717377" watchObservedRunningTime="2025-11-29 08:03:13.282439295 +0000 UTC m=+1439.258015085" Nov 29 08:03:13 crc kubenswrapper[4795]: I1129 08:03:13.320954 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-wsrxw" podStartSLOduration=6.32093661 podStartE2EDuration="6.32093661s" podCreationTimestamp="2025-11-29 08:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:03:13.315003972 +0000 UTC m=+1439.290579762" watchObservedRunningTime="2025-11-29 08:03:13.32093661 +0000 UTC m=+1439.296512400" 
Nov 29 08:03:13 crc kubenswrapper[4795]: I1129 08:03:13.360151 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2dl4b"] Nov 29 08:03:13 crc kubenswrapper[4795]: I1129 08:03:13.370740 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2dl4b"] Nov 29 08:03:14 crc kubenswrapper[4795]: I1129 08:03:14.270938 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a9c19857-e09b-4c26-bf5a-a64655eaa024","Type":"ContainerStarted","Data":"90b63281f5d3cc0f16f3c3dc549dbc3d7e4faad131014430ad1395081cc1b52d"} Nov 29 08:03:14 crc kubenswrapper[4795]: I1129 08:03:14.274547 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3f2bc988-6251-4a6d-95ab-8610dc2a2650","Type":"ContainerStarted","Data":"f5bd0b7bce2334da4c36dcaf34a8b4dd0dd76033fbb476659d1e0a781b5ce3d2"} Nov 29 08:03:14 crc kubenswrapper[4795]: I1129 08:03:14.292675 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4434e71-afe0-404d-aeec-6c080ff6a767" path="/var/lib/kubelet/pods/d4434e71-afe0-404d-aeec-6c080ff6a767/volumes" Nov 29 08:03:14 crc kubenswrapper[4795]: I1129 08:03:14.294097 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=11.365822401 podStartE2EDuration="51.294080903s" podCreationTimestamp="2025-11-29 08:02:23 +0000 UTC" firstStartedPulling="2025-11-29 08:02:26.08300443 +0000 UTC m=+1392.058580220" lastFinishedPulling="2025-11-29 08:03:06.011262932 +0000 UTC m=+1431.986838722" observedRunningTime="2025-11-29 08:03:14.292266161 +0000 UTC m=+1440.267841951" watchObservedRunningTime="2025-11-29 08:03:14.294080903 +0000 UTC m=+1440.269656693" Nov 29 08:03:14 crc kubenswrapper[4795]: I1129 08:03:14.295621 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-q86dd" 
event={"ID":"5194485f-a306-493d-a1a3-f33030371413","Type":"ContainerStarted","Data":"e440ee280db5a2502e05803b5f8d21c0cfc7ae06248d13e0095caac1d63af3bc"} Nov 29 08:03:14 crc kubenswrapper[4795]: I1129 08:03:14.295742 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 29 08:03:14 crc kubenswrapper[4795]: I1129 08:03:14.324792 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.548093578 podStartE2EDuration="6.324763516s" podCreationTimestamp="2025-11-29 08:03:08 +0000 UTC" firstStartedPulling="2025-11-29 08:03:10.202248927 +0000 UTC m=+1436.177824717" lastFinishedPulling="2025-11-29 08:03:12.978918865 +0000 UTC m=+1438.954494655" observedRunningTime="2025-11-29 08:03:14.312315971 +0000 UTC m=+1440.287891761" watchObservedRunningTime="2025-11-29 08:03:14.324763516 +0000 UTC m=+1440.300339306" Nov 29 08:03:14 crc kubenswrapper[4795]: I1129 08:03:14.346338 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-q86dd" podStartSLOduration=10.742073048 podStartE2EDuration="47.346309498s" podCreationTimestamp="2025-11-29 08:02:27 +0000 UTC" firstStartedPulling="2025-11-29 08:02:36.937339784 +0000 UTC m=+1402.912915574" lastFinishedPulling="2025-11-29 08:03:13.541576234 +0000 UTC m=+1439.517152024" observedRunningTime="2025-11-29 08:03:14.335873951 +0000 UTC m=+1440.311449741" watchObservedRunningTime="2025-11-29 08:03:14.346309498 +0000 UTC m=+1440.321885288" Nov 29 08:03:14 crc kubenswrapper[4795]: I1129 08:03:14.522027 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 29 08:03:14 crc kubenswrapper[4795]: I1129 08:03:14.522091 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 29 08:03:16 crc kubenswrapper[4795]: I1129 08:03:16.896974 4795 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-wsrxw"] Nov 29 08:03:16 crc kubenswrapper[4795]: I1129 08:03:16.897522 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-wsrxw" podUID="335e92c9-b508-4368-9af8-55dc11bad481" containerName="dnsmasq-dns" containerID="cri-o://3098c8cf78cc59518136772df729db3d4678cb6e158ec7c196a090461cfb7ae6" gracePeriod=10 Nov 29 08:03:16 crc kubenswrapper[4795]: I1129 08:03:16.979809 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-24d2f"] Nov 29 08:03:16 crc kubenswrapper[4795]: E1129 08:03:16.980315 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4434e71-afe0-404d-aeec-6c080ff6a767" containerName="init" Nov 29 08:03:16 crc kubenswrapper[4795]: I1129 08:03:16.980351 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4434e71-afe0-404d-aeec-6c080ff6a767" containerName="init" Nov 29 08:03:16 crc kubenswrapper[4795]: I1129 08:03:16.980642 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4434e71-afe0-404d-aeec-6c080ff6a767" containerName="init" Nov 29 08:03:16 crc kubenswrapper[4795]: I1129 08:03:16.982072 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-24d2f" Nov 29 08:03:17 crc kubenswrapper[4795]: I1129 08:03:17.001524 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-24d2f"] Nov 29 08:03:17 crc kubenswrapper[4795]: I1129 08:03:17.011491 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/439904f7-bd39-4def-96c8-8975d412574f-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-24d2f\" (UID: \"439904f7-bd39-4def-96c8-8975d412574f\") " pod="openstack/dnsmasq-dns-b8fbc5445-24d2f" Nov 29 08:03:17 crc kubenswrapper[4795]: I1129 08:03:17.011686 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/439904f7-bd39-4def-96c8-8975d412574f-config\") pod \"dnsmasq-dns-b8fbc5445-24d2f\" (UID: \"439904f7-bd39-4def-96c8-8975d412574f\") " pod="openstack/dnsmasq-dns-b8fbc5445-24d2f" Nov 29 08:03:17 crc kubenswrapper[4795]: I1129 08:03:17.011765 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bjzj\" (UniqueName: \"kubernetes.io/projected/439904f7-bd39-4def-96c8-8975d412574f-kube-api-access-2bjzj\") pod \"dnsmasq-dns-b8fbc5445-24d2f\" (UID: \"439904f7-bd39-4def-96c8-8975d412574f\") " pod="openstack/dnsmasq-dns-b8fbc5445-24d2f" Nov 29 08:03:17 crc kubenswrapper[4795]: I1129 08:03:17.011799 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/439904f7-bd39-4def-96c8-8975d412574f-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-24d2f\" (UID: \"439904f7-bd39-4def-96c8-8975d412574f\") " pod="openstack/dnsmasq-dns-b8fbc5445-24d2f" Nov 29 08:03:17 crc kubenswrapper[4795]: I1129 08:03:17.011868 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/439904f7-bd39-4def-96c8-8975d412574f-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-24d2f\" (UID: \"439904f7-bd39-4def-96c8-8975d412574f\") " pod="openstack/dnsmasq-dns-b8fbc5445-24d2f" Nov 29 08:03:17 crc kubenswrapper[4795]: I1129 08:03:17.113335 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/439904f7-bd39-4def-96c8-8975d412574f-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-24d2f\" (UID: \"439904f7-bd39-4def-96c8-8975d412574f\") " pod="openstack/dnsmasq-dns-b8fbc5445-24d2f" Nov 29 08:03:17 crc kubenswrapper[4795]: I1129 08:03:17.113804 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/439904f7-bd39-4def-96c8-8975d412574f-config\") pod \"dnsmasq-dns-b8fbc5445-24d2f\" (UID: \"439904f7-bd39-4def-96c8-8975d412574f\") " pod="openstack/dnsmasq-dns-b8fbc5445-24d2f" Nov 29 08:03:17 crc kubenswrapper[4795]: I1129 08:03:17.113878 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bjzj\" (UniqueName: \"kubernetes.io/projected/439904f7-bd39-4def-96c8-8975d412574f-kube-api-access-2bjzj\") pod \"dnsmasq-dns-b8fbc5445-24d2f\" (UID: \"439904f7-bd39-4def-96c8-8975d412574f\") " pod="openstack/dnsmasq-dns-b8fbc5445-24d2f" Nov 29 08:03:17 crc kubenswrapper[4795]: I1129 08:03:17.113912 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/439904f7-bd39-4def-96c8-8975d412574f-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-24d2f\" (UID: \"439904f7-bd39-4def-96c8-8975d412574f\") " pod="openstack/dnsmasq-dns-b8fbc5445-24d2f" Nov 29 08:03:17 crc kubenswrapper[4795]: I1129 08:03:17.113975 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/439904f7-bd39-4def-96c8-8975d412574f-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-24d2f\" (UID: \"439904f7-bd39-4def-96c8-8975d412574f\") " pod="openstack/dnsmasq-dns-b8fbc5445-24d2f" Nov 29 08:03:17 crc kubenswrapper[4795]: I1129 08:03:17.114268 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/439904f7-bd39-4def-96c8-8975d412574f-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-24d2f\" (UID: \"439904f7-bd39-4def-96c8-8975d412574f\") " pod="openstack/dnsmasq-dns-b8fbc5445-24d2f" Nov 29 08:03:17 crc kubenswrapper[4795]: I1129 08:03:17.114811 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/439904f7-bd39-4def-96c8-8975d412574f-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-24d2f\" (UID: \"439904f7-bd39-4def-96c8-8975d412574f\") " pod="openstack/dnsmasq-dns-b8fbc5445-24d2f" Nov 29 08:03:17 crc kubenswrapper[4795]: I1129 08:03:17.115103 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/439904f7-bd39-4def-96c8-8975d412574f-config\") pod \"dnsmasq-dns-b8fbc5445-24d2f\" (UID: \"439904f7-bd39-4def-96c8-8975d412574f\") " pod="openstack/dnsmasq-dns-b8fbc5445-24d2f" Nov 29 08:03:17 crc kubenswrapper[4795]: I1129 08:03:17.115441 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/439904f7-bd39-4def-96c8-8975d412574f-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-24d2f\" (UID: \"439904f7-bd39-4def-96c8-8975d412574f\") " pod="openstack/dnsmasq-dns-b8fbc5445-24d2f" Nov 29 08:03:17 crc kubenswrapper[4795]: I1129 08:03:17.142896 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bjzj\" (UniqueName: \"kubernetes.io/projected/439904f7-bd39-4def-96c8-8975d412574f-kube-api-access-2bjzj\") pod \"dnsmasq-dns-b8fbc5445-24d2f\" (UID: 
\"439904f7-bd39-4def-96c8-8975d412574f\") " pod="openstack/dnsmasq-dns-b8fbc5445-24d2f" Nov 29 08:03:17 crc kubenswrapper[4795]: I1129 08:03:17.357520 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-24d2f" Nov 29 08:03:17 crc kubenswrapper[4795]: W1129 08:03:17.840278 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod439904f7_bd39_4def_96c8_8975d412574f.slice/crio-fe1329861aa0dea5c27e5c6dace70b6403e45ca290477f1ea8363138ba5e6674 WatchSource:0}: Error finding container fe1329861aa0dea5c27e5c6dace70b6403e45ca290477f1ea8363138ba5e6674: Status 404 returned error can't find the container with id fe1329861aa0dea5c27e5c6dace70b6403e45ca290477f1ea8363138ba5e6674 Nov 29 08:03:17 crc kubenswrapper[4795]: I1129 08:03:17.847757 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-24d2f"] Nov 29 08:03:17 crc kubenswrapper[4795]: I1129 08:03:17.896022 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5bf47b49b7-wsrxw" podUID="335e92c9-b508-4368-9af8-55dc11bad481" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.143:5353: connect: connection refused" Nov 29 08:03:17 crc kubenswrapper[4795]: I1129 08:03:17.996005 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.007997 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.013608 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.013905 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-kpc24" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.013976 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.016819 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.035142 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.137978 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/28437f9f-e92e-46d7-9ffb-fcdda5dea25e-lock\") pod \"swift-storage-0\" (UID: \"28437f9f-e92e-46d7-9ffb-fcdda5dea25e\") " pod="openstack/swift-storage-0" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.139261 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"28437f9f-e92e-46d7-9ffb-fcdda5dea25e\") " pod="openstack/swift-storage-0" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.139414 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssvh9\" (UniqueName: \"kubernetes.io/projected/28437f9f-e92e-46d7-9ffb-fcdda5dea25e-kube-api-access-ssvh9\") pod \"swift-storage-0\" (UID: \"28437f9f-e92e-46d7-9ffb-fcdda5dea25e\") " pod="openstack/swift-storage-0" Nov 29 
08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.139491 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/28437f9f-e92e-46d7-9ffb-fcdda5dea25e-cache\") pod \"swift-storage-0\" (UID: \"28437f9f-e92e-46d7-9ffb-fcdda5dea25e\") " pod="openstack/swift-storage-0" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.139617 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/28437f9f-e92e-46d7-9ffb-fcdda5dea25e-etc-swift\") pod \"swift-storage-0\" (UID: \"28437f9f-e92e-46d7-9ffb-fcdda5dea25e\") " pod="openstack/swift-storage-0" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.146882 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-w6vqh" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.241334 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/28437f9f-e92e-46d7-9ffb-fcdda5dea25e-etc-swift\") pod \"swift-storage-0\" (UID: \"28437f9f-e92e-46d7-9ffb-fcdda5dea25e\") " pod="openstack/swift-storage-0" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.241569 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/28437f9f-e92e-46d7-9ffb-fcdda5dea25e-lock\") pod \"swift-storage-0\" (UID: \"28437f9f-e92e-46d7-9ffb-fcdda5dea25e\") " pod="openstack/swift-storage-0" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.241841 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"28437f9f-e92e-46d7-9ffb-fcdda5dea25e\") " pod="openstack/swift-storage-0" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 
08:03:18.242014 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssvh9\" (UniqueName: \"kubernetes.io/projected/28437f9f-e92e-46d7-9ffb-fcdda5dea25e-kube-api-access-ssvh9\") pod \"swift-storage-0\" (UID: \"28437f9f-e92e-46d7-9ffb-fcdda5dea25e\") " pod="openstack/swift-storage-0" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.242117 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/28437f9f-e92e-46d7-9ffb-fcdda5dea25e-cache\") pod \"swift-storage-0\" (UID: \"28437f9f-e92e-46d7-9ffb-fcdda5dea25e\") " pod="openstack/swift-storage-0" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.242928 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/28437f9f-e92e-46d7-9ffb-fcdda5dea25e-cache\") pod \"swift-storage-0\" (UID: \"28437f9f-e92e-46d7-9ffb-fcdda5dea25e\") " pod="openstack/swift-storage-0" Nov 29 08:03:18 crc kubenswrapper[4795]: E1129 08:03:18.243810 4795 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 29 08:03:18 crc kubenswrapper[4795]: E1129 08:03:18.243887 4795 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 29 08:03:18 crc kubenswrapper[4795]: E1129 08:03:18.243973 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/28437f9f-e92e-46d7-9ffb-fcdda5dea25e-etc-swift podName:28437f9f-e92e-46d7-9ffb-fcdda5dea25e nodeName:}" failed. No retries permitted until 2025-11-29 08:03:18.743956563 +0000 UTC m=+1444.719532353 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/28437f9f-e92e-46d7-9ffb-fcdda5dea25e-etc-swift") pod "swift-storage-0" (UID: "28437f9f-e92e-46d7-9ffb-fcdda5dea25e") : configmap "swift-ring-files" not found Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.244441 4795 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"28437f9f-e92e-46d7-9ffb-fcdda5dea25e\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/swift-storage-0" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.245000 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/28437f9f-e92e-46d7-9ffb-fcdda5dea25e-lock\") pod \"swift-storage-0\" (UID: \"28437f9f-e92e-46d7-9ffb-fcdda5dea25e\") " pod="openstack/swift-storage-0" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.275866 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssvh9\" (UniqueName: \"kubernetes.io/projected/28437f9f-e92e-46d7-9ffb-fcdda5dea25e-kube-api-access-ssvh9\") pod \"swift-storage-0\" (UID: \"28437f9f-e92e-46d7-9ffb-fcdda5dea25e\") " pod="openstack/swift-storage-0" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.287239 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"28437f9f-e92e-46d7-9ffb-fcdda5dea25e\") " pod="openstack/swift-storage-0" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.320518 4795 generic.go:334] "Generic (PLEG): container finished" podID="335e92c9-b508-4368-9af8-55dc11bad481" containerID="3098c8cf78cc59518136772df729db3d4678cb6e158ec7c196a090461cfb7ae6" exitCode=0 Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.320643 4795 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-wsrxw" event={"ID":"335e92c9-b508-4368-9af8-55dc11bad481","Type":"ContainerDied","Data":"3098c8cf78cc59518136772df729db3d4678cb6e158ec7c196a090461cfb7ae6"} Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.322123 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-24d2f" event={"ID":"439904f7-bd39-4def-96c8-8975d412574f","Type":"ContainerStarted","Data":"fe1329861aa0dea5c27e5c6dace70b6403e45ca290477f1ea8363138ba5e6674"} Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.615923 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-ww42q"] Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.617638 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-ww42q" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.622057 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.622288 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.622409 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.625778 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-ww42q"] Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.754554 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60a004a5-f226-49aa-b9e7-12a384ddece6-combined-ca-bundle\") pod \"swift-ring-rebalance-ww42q\" (UID: \"60a004a5-f226-49aa-b9e7-12a384ddece6\") " pod="openstack/swift-ring-rebalance-ww42q" Nov 29 08:03:18 crc 
kubenswrapper[4795]: I1129 08:03:18.755295 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/60a004a5-f226-49aa-b9e7-12a384ddece6-etc-swift\") pod \"swift-ring-rebalance-ww42q\" (UID: \"60a004a5-f226-49aa-b9e7-12a384ddece6\") " pod="openstack/swift-ring-rebalance-ww42q" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.755391 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/60a004a5-f226-49aa-b9e7-12a384ddece6-scripts\") pod \"swift-ring-rebalance-ww42q\" (UID: \"60a004a5-f226-49aa-b9e7-12a384ddece6\") " pod="openstack/swift-ring-rebalance-ww42q" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.755462 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/60a004a5-f226-49aa-b9e7-12a384ddece6-ring-data-devices\") pod \"swift-ring-rebalance-ww42q\" (UID: \"60a004a5-f226-49aa-b9e7-12a384ddece6\") " pod="openstack/swift-ring-rebalance-ww42q" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.755556 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/28437f9f-e92e-46d7-9ffb-fcdda5dea25e-etc-swift\") pod \"swift-storage-0\" (UID: \"28437f9f-e92e-46d7-9ffb-fcdda5dea25e\") " pod="openstack/swift-storage-0" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.755668 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/60a004a5-f226-49aa-b9e7-12a384ddece6-dispersionconf\") pod \"swift-ring-rebalance-ww42q\" (UID: \"60a004a5-f226-49aa-b9e7-12a384ddece6\") " pod="openstack/swift-ring-rebalance-ww42q" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.755782 4795 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx942\" (UniqueName: \"kubernetes.io/projected/60a004a5-f226-49aa-b9e7-12a384ddece6-kube-api-access-fx942\") pod \"swift-ring-rebalance-ww42q\" (UID: \"60a004a5-f226-49aa-b9e7-12a384ddece6\") " pod="openstack/swift-ring-rebalance-ww42q" Nov 29 08:03:18 crc kubenswrapper[4795]: E1129 08:03:18.755806 4795 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 29 08:03:18 crc kubenswrapper[4795]: E1129 08:03:18.755917 4795 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 29 08:03:18 crc kubenswrapper[4795]: E1129 08:03:18.756014 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/28437f9f-e92e-46d7-9ffb-fcdda5dea25e-etc-swift podName:28437f9f-e92e-46d7-9ffb-fcdda5dea25e nodeName:}" failed. No retries permitted until 2025-11-29 08:03:19.755997333 +0000 UTC m=+1445.731573113 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/28437f9f-e92e-46d7-9ffb-fcdda5dea25e-etc-swift") pod "swift-storage-0" (UID: "28437f9f-e92e-46d7-9ffb-fcdda5dea25e") : configmap "swift-ring-files" not found Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.755939 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/60a004a5-f226-49aa-b9e7-12a384ddece6-swiftconf\") pod \"swift-ring-rebalance-ww42q\" (UID: \"60a004a5-f226-49aa-b9e7-12a384ddece6\") " pod="openstack/swift-ring-rebalance-ww42q" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.857711 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/60a004a5-f226-49aa-b9e7-12a384ddece6-dispersionconf\") pod \"swift-ring-rebalance-ww42q\" (UID: \"60a004a5-f226-49aa-b9e7-12a384ddece6\") " pod="openstack/swift-ring-rebalance-ww42q" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.859184 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx942\" (UniqueName: \"kubernetes.io/projected/60a004a5-f226-49aa-b9e7-12a384ddece6-kube-api-access-fx942\") pod \"swift-ring-rebalance-ww42q\" (UID: \"60a004a5-f226-49aa-b9e7-12a384ddece6\") " pod="openstack/swift-ring-rebalance-ww42q" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.859319 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/60a004a5-f226-49aa-b9e7-12a384ddece6-swiftconf\") pod \"swift-ring-rebalance-ww42q\" (UID: \"60a004a5-f226-49aa-b9e7-12a384ddece6\") " pod="openstack/swift-ring-rebalance-ww42q" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.859459 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/60a004a5-f226-49aa-b9e7-12a384ddece6-combined-ca-bundle\") pod \"swift-ring-rebalance-ww42q\" (UID: \"60a004a5-f226-49aa-b9e7-12a384ddece6\") " pod="openstack/swift-ring-rebalance-ww42q" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.859580 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/60a004a5-f226-49aa-b9e7-12a384ddece6-etc-swift\") pod \"swift-ring-rebalance-ww42q\" (UID: \"60a004a5-f226-49aa-b9e7-12a384ddece6\") " pod="openstack/swift-ring-rebalance-ww42q" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.859735 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/60a004a5-f226-49aa-b9e7-12a384ddece6-scripts\") pod \"swift-ring-rebalance-ww42q\" (UID: \"60a004a5-f226-49aa-b9e7-12a384ddece6\") " pod="openstack/swift-ring-rebalance-ww42q" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.859810 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/60a004a5-f226-49aa-b9e7-12a384ddece6-ring-data-devices\") pod \"swift-ring-rebalance-ww42q\" (UID: \"60a004a5-f226-49aa-b9e7-12a384ddece6\") " pod="openstack/swift-ring-rebalance-ww42q" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.860156 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/60a004a5-f226-49aa-b9e7-12a384ddece6-etc-swift\") pod \"swift-ring-rebalance-ww42q\" (UID: \"60a004a5-f226-49aa-b9e7-12a384ddece6\") " pod="openstack/swift-ring-rebalance-ww42q" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.860626 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/60a004a5-f226-49aa-b9e7-12a384ddece6-ring-data-devices\") pod 
\"swift-ring-rebalance-ww42q\" (UID: \"60a004a5-f226-49aa-b9e7-12a384ddece6\") " pod="openstack/swift-ring-rebalance-ww42q" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.860873 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/60a004a5-f226-49aa-b9e7-12a384ddece6-scripts\") pod \"swift-ring-rebalance-ww42q\" (UID: \"60a004a5-f226-49aa-b9e7-12a384ddece6\") " pod="openstack/swift-ring-rebalance-ww42q" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.863477 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/60a004a5-f226-49aa-b9e7-12a384ddece6-dispersionconf\") pod \"swift-ring-rebalance-ww42q\" (UID: \"60a004a5-f226-49aa-b9e7-12a384ddece6\") " pod="openstack/swift-ring-rebalance-ww42q" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.863969 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/60a004a5-f226-49aa-b9e7-12a384ddece6-swiftconf\") pod \"swift-ring-rebalance-ww42q\" (UID: \"60a004a5-f226-49aa-b9e7-12a384ddece6\") " pod="openstack/swift-ring-rebalance-ww42q" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.878024 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx942\" (UniqueName: \"kubernetes.io/projected/60a004a5-f226-49aa-b9e7-12a384ddece6-kube-api-access-fx942\") pod \"swift-ring-rebalance-ww42q\" (UID: \"60a004a5-f226-49aa-b9e7-12a384ddece6\") " pod="openstack/swift-ring-rebalance-ww42q" Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.884621 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60a004a5-f226-49aa-b9e7-12a384ddece6-combined-ca-bundle\") pod \"swift-ring-rebalance-ww42q\" (UID: \"60a004a5-f226-49aa-b9e7-12a384ddece6\") " pod="openstack/swift-ring-rebalance-ww42q" 
Nov 29 08:03:18 crc kubenswrapper[4795]: I1129 08:03:18.946768 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-ww42q" Nov 29 08:03:19 crc kubenswrapper[4795]: I1129 08:03:19.741254 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-ww42q"] Nov 29 08:03:19 crc kubenswrapper[4795]: W1129 08:03:19.765070 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60a004a5_f226_49aa_b9e7_12a384ddece6.slice/crio-ed607f7b23c23e897c279fc02739ca1b906fffbad01dd58568d7b970cc531703 WatchSource:0}: Error finding container ed607f7b23c23e897c279fc02739ca1b906fffbad01dd58568d7b970cc531703: Status 404 returned error can't find the container with id ed607f7b23c23e897c279fc02739ca1b906fffbad01dd58568d7b970cc531703 Nov 29 08:03:19 crc kubenswrapper[4795]: I1129 08:03:19.777883 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/28437f9f-e92e-46d7-9ffb-fcdda5dea25e-etc-swift\") pod \"swift-storage-0\" (UID: \"28437f9f-e92e-46d7-9ffb-fcdda5dea25e\") " pod="openstack/swift-storage-0" Nov 29 08:03:19 crc kubenswrapper[4795]: E1129 08:03:19.778430 4795 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 29 08:03:19 crc kubenswrapper[4795]: E1129 08:03:19.778458 4795 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 29 08:03:19 crc kubenswrapper[4795]: E1129 08:03:19.778533 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/28437f9f-e92e-46d7-9ffb-fcdda5dea25e-etc-swift podName:28437f9f-e92e-46d7-9ffb-fcdda5dea25e nodeName:}" failed. No retries permitted until 2025-11-29 08:03:21.778509139 +0000 UTC m=+1447.754084929 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/28437f9f-e92e-46d7-9ffb-fcdda5dea25e-etc-swift") pod "swift-storage-0" (UID: "28437f9f-e92e-46d7-9ffb-fcdda5dea25e") : configmap "swift-ring-files" not found Nov 29 08:03:20 crc kubenswrapper[4795]: I1129 08:03:20.118242 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-wsrxw" Nov 29 08:03:20 crc kubenswrapper[4795]: I1129 08:03:20.189342 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/335e92c9-b508-4368-9af8-55dc11bad481-dns-svc\") pod \"335e92c9-b508-4368-9af8-55dc11bad481\" (UID: \"335e92c9-b508-4368-9af8-55dc11bad481\") " Nov 29 08:03:20 crc kubenswrapper[4795]: I1129 08:03:20.189958 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj4jm\" (UniqueName: \"kubernetes.io/projected/335e92c9-b508-4368-9af8-55dc11bad481-kube-api-access-jj4jm\") pod \"335e92c9-b508-4368-9af8-55dc11bad481\" (UID: \"335e92c9-b508-4368-9af8-55dc11bad481\") " Nov 29 08:03:20 crc kubenswrapper[4795]: I1129 08:03:20.190176 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/335e92c9-b508-4368-9af8-55dc11bad481-ovsdbserver-nb\") pod \"335e92c9-b508-4368-9af8-55dc11bad481\" (UID: \"335e92c9-b508-4368-9af8-55dc11bad481\") " Nov 29 08:03:20 crc kubenswrapper[4795]: I1129 08:03:20.190480 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/335e92c9-b508-4368-9af8-55dc11bad481-config\") pod \"335e92c9-b508-4368-9af8-55dc11bad481\" (UID: \"335e92c9-b508-4368-9af8-55dc11bad481\") " Nov 29 08:03:20 crc kubenswrapper[4795]: I1129 08:03:20.203424 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/335e92c9-b508-4368-9af8-55dc11bad481-kube-api-access-jj4jm" (OuterVolumeSpecName: "kube-api-access-jj4jm") pod "335e92c9-b508-4368-9af8-55dc11bad481" (UID: "335e92c9-b508-4368-9af8-55dc11bad481"). InnerVolumeSpecName "kube-api-access-jj4jm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:03:20 crc kubenswrapper[4795]: I1129 08:03:20.256898 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/335e92c9-b508-4368-9af8-55dc11bad481-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "335e92c9-b508-4368-9af8-55dc11bad481" (UID: "335e92c9-b508-4368-9af8-55dc11bad481"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:03:20 crc kubenswrapper[4795]: I1129 08:03:20.271054 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/335e92c9-b508-4368-9af8-55dc11bad481-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "335e92c9-b508-4368-9af8-55dc11bad481" (UID: "335e92c9-b508-4368-9af8-55dc11bad481"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:03:20 crc kubenswrapper[4795]: I1129 08:03:20.293901 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj4jm\" (UniqueName: \"kubernetes.io/projected/335e92c9-b508-4368-9af8-55dc11bad481-kube-api-access-jj4jm\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:20 crc kubenswrapper[4795]: I1129 08:03:20.293948 4795 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/335e92c9-b508-4368-9af8-55dc11bad481-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:20 crc kubenswrapper[4795]: I1129 08:03:20.293959 4795 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/335e92c9-b508-4368-9af8-55dc11bad481-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:20 crc kubenswrapper[4795]: I1129 08:03:20.330774 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/335e92c9-b508-4368-9af8-55dc11bad481-config" (OuterVolumeSpecName: "config") pod "335e92c9-b508-4368-9af8-55dc11bad481" (UID: "335e92c9-b508-4368-9af8-55dc11bad481"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:03:20 crc kubenswrapper[4795]: I1129 08:03:20.364041 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-24d2f" event={"ID":"439904f7-bd39-4def-96c8-8975d412574f","Type":"ContainerStarted","Data":"73895a78e65661359e4b10fde34c42b0170bd599ae2dd740e531f5090b1742de"} Nov 29 08:03:20 crc kubenswrapper[4795]: I1129 08:03:20.365267 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-wsrxw" event={"ID":"335e92c9-b508-4368-9af8-55dc11bad481","Type":"ContainerDied","Data":"29bfd4b54d167e622cac75a36b9c83438af8082f09530d927ca4283da4ef2454"} Nov 29 08:03:20 crc kubenswrapper[4795]: I1129 08:03:20.365321 4795 scope.go:117] "RemoveContainer" containerID="3098c8cf78cc59518136772df729db3d4678cb6e158ec7c196a090461cfb7ae6" Nov 29 08:03:20 crc kubenswrapper[4795]: I1129 08:03:20.365649 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-wsrxw" Nov 29 08:03:20 crc kubenswrapper[4795]: I1129 08:03:20.372728 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ww42q" event={"ID":"60a004a5-f226-49aa-b9e7-12a384ddece6","Type":"ContainerStarted","Data":"ed607f7b23c23e897c279fc02739ca1b906fffbad01dd58568d7b970cc531703"} Nov 29 08:03:20 crc kubenswrapper[4795]: I1129 08:03:20.399614 4795 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/335e92c9-b508-4368-9af8-55dc11bad481-config\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:20 crc kubenswrapper[4795]: I1129 08:03:20.416677 4795 scope.go:117] "RemoveContainer" containerID="d01f260afa20d9fa7953b4dd13f99bf7a1f827d4bc2aac9549a7d4d3d7c262eb" Nov 29 08:03:20 crc kubenswrapper[4795]: I1129 08:03:20.420955 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-wsrxw"] Nov 29 08:03:20 crc kubenswrapper[4795]: 
I1129 08:03:20.428224 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-wsrxw"] Nov 29 08:03:20 crc kubenswrapper[4795]: E1129 08:03:20.543813 4795 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod335e92c9_b508_4368_9af8_55dc11bad481.slice/crio-29bfd4b54d167e622cac75a36b9c83438af8082f09530d927ca4283da4ef2454\": RecentStats: unable to find data in memory cache]" Nov 29 08:03:21 crc kubenswrapper[4795]: I1129 08:03:21.385180 4795 generic.go:334] "Generic (PLEG): container finished" podID="439904f7-bd39-4def-96c8-8975d412574f" containerID="73895a78e65661359e4b10fde34c42b0170bd599ae2dd740e531f5090b1742de" exitCode=0 Nov 29 08:03:21 crc kubenswrapper[4795]: I1129 08:03:21.385280 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-24d2f" event={"ID":"439904f7-bd39-4def-96c8-8975d412574f","Type":"ContainerDied","Data":"73895a78e65661359e4b10fde34c42b0170bd599ae2dd740e531f5090b1742de"} Nov 29 08:03:21 crc kubenswrapper[4795]: I1129 08:03:21.832954 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/28437f9f-e92e-46d7-9ffb-fcdda5dea25e-etc-swift\") pod \"swift-storage-0\" (UID: \"28437f9f-e92e-46d7-9ffb-fcdda5dea25e\") " pod="openstack/swift-storage-0" Nov 29 08:03:21 crc kubenswrapper[4795]: E1129 08:03:21.833292 4795 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 29 08:03:21 crc kubenswrapper[4795]: E1129 08:03:21.833566 4795 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 29 08:03:21 crc kubenswrapper[4795]: E1129 08:03:21.833698 4795 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/28437f9f-e92e-46d7-9ffb-fcdda5dea25e-etc-swift podName:28437f9f-e92e-46d7-9ffb-fcdda5dea25e nodeName:}" failed. No retries permitted until 2025-11-29 08:03:25.83366691 +0000 UTC m=+1451.809242710 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/28437f9f-e92e-46d7-9ffb-fcdda5dea25e-etc-swift") pod "swift-storage-0" (UID: "28437f9f-e92e-46d7-9ffb-fcdda5dea25e") : configmap "swift-ring-files" not found Nov 29 08:03:22 crc kubenswrapper[4795]: I1129 08:03:22.286337 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="335e92c9-b508-4368-9af8-55dc11bad481" path="/var/lib/kubelet/pods/335e92c9-b508-4368-9af8-55dc11bad481/volumes" Nov 29 08:03:22 crc kubenswrapper[4795]: I1129 08:03:22.398490 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-24d2f" event={"ID":"439904f7-bd39-4def-96c8-8975d412574f","Type":"ContainerStarted","Data":"d472010b37d590e430d767a87d8ac19b69bda8f43512edc03908fb14c169e8a8"} Nov 29 08:03:22 crc kubenswrapper[4795]: I1129 08:03:22.399826 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-24d2f" Nov 29 08:03:22 crc kubenswrapper[4795]: I1129 08:03:22.401687 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"063373b7-6898-409d-b792-d770a8f6f021","Type":"ContainerStarted","Data":"10ac0e6c9dea581ab810775fab5fa9d47c5e63e062410134dcdb0d1e701ee7d8"} Nov 29 08:03:22 crc kubenswrapper[4795]: I1129 08:03:22.421342 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-24d2f" podStartSLOduration=6.421324071 podStartE2EDuration="6.421324071s" podCreationTimestamp="2025-11-29 08:03:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 
08:03:22.414755184 +0000 UTC m=+1448.390330994" watchObservedRunningTime="2025-11-29 08:03:22.421324071 +0000 UTC m=+1448.396899861" Nov 29 08:03:23 crc kubenswrapper[4795]: I1129 08:03:23.416259 4795 generic.go:334] "Generic (PLEG): container finished" podID="a2fd879b-6f46-437e-acf0-c60e879af239" containerID="93e30860a937742005cfd2320198062ed8b6279529facfd5831eaa6b6f348e24" exitCode=0 Nov 29 08:03:23 crc kubenswrapper[4795]: I1129 08:03:23.416535 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a2fd879b-6f46-437e-acf0-c60e879af239","Type":"ContainerDied","Data":"93e30860a937742005cfd2320198062ed8b6279529facfd5831eaa6b6f348e24"} Nov 29 08:03:23 crc kubenswrapper[4795]: I1129 08:03:23.892840 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 29 08:03:24 crc kubenswrapper[4795]: I1129 08:03:24.148264 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f5d67bfff-wl4rm" podUID="8385f150-5088-461e-b9a7-05eb8990b8ca" containerName="console" containerID="cri-o://ffcaa5f34d5d0231400dd2ee0d11e99540dddc52064abbbcfb8afbd420993e01" gracePeriod=15 Nov 29 08:03:24 crc kubenswrapper[4795]: I1129 08:03:24.429245 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f5d67bfff-wl4rm_8385f150-5088-461e-b9a7-05eb8990b8ca/console/0.log" Nov 29 08:03:24 crc kubenswrapper[4795]: I1129 08:03:24.429300 4795 generic.go:334] "Generic (PLEG): container finished" podID="8385f150-5088-461e-b9a7-05eb8990b8ca" containerID="ffcaa5f34d5d0231400dd2ee0d11e99540dddc52064abbbcfb8afbd420993e01" exitCode=2 Nov 29 08:03:24 crc kubenswrapper[4795]: I1129 08:03:24.430307 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f5d67bfff-wl4rm" 
event={"ID":"8385f150-5088-461e-b9a7-05eb8990b8ca","Type":"ContainerDied","Data":"ffcaa5f34d5d0231400dd2ee0d11e99540dddc52064abbbcfb8afbd420993e01"} Nov 29 08:03:24 crc kubenswrapper[4795]: I1129 08:03:24.694106 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 29 08:03:24 crc kubenswrapper[4795]: I1129 08:03:24.772929 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 29 08:03:25 crc kubenswrapper[4795]: I1129 08:03:25.929213 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/28437f9f-e92e-46d7-9ffb-fcdda5dea25e-etc-swift\") pod \"swift-storage-0\" (UID: \"28437f9f-e92e-46d7-9ffb-fcdda5dea25e\") " pod="openstack/swift-storage-0" Nov 29 08:03:25 crc kubenswrapper[4795]: E1129 08:03:25.929425 4795 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 29 08:03:25 crc kubenswrapper[4795]: E1129 08:03:25.929633 4795 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 29 08:03:25 crc kubenswrapper[4795]: E1129 08:03:25.929731 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/28437f9f-e92e-46d7-9ffb-fcdda5dea25e-etc-swift podName:28437f9f-e92e-46d7-9ffb-fcdda5dea25e nodeName:}" failed. No retries permitted until 2025-11-29 08:03:33.929712216 +0000 UTC m=+1459.905287996 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/28437f9f-e92e-46d7-9ffb-fcdda5dea25e-etc-swift") pod "swift-storage-0" (UID: "28437f9f-e92e-46d7-9ffb-fcdda5dea25e") : configmap "swift-ring-files" not found Nov 29 08:03:26 crc kubenswrapper[4795]: I1129 08:03:26.339334 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f5d67bfff-wl4rm_8385f150-5088-461e-b9a7-05eb8990b8ca/console/0.log" Nov 29 08:03:26 crc kubenswrapper[4795]: I1129 08:03:26.339993 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f5d67bfff-wl4rm" Nov 29 08:03:26 crc kubenswrapper[4795]: I1129 08:03:26.439426 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8385f150-5088-461e-b9a7-05eb8990b8ca-console-oauth-config\") pod \"8385f150-5088-461e-b9a7-05eb8990b8ca\" (UID: \"8385f150-5088-461e-b9a7-05eb8990b8ca\") " Nov 29 08:03:26 crc kubenswrapper[4795]: I1129 08:03:26.439637 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8385f150-5088-461e-b9a7-05eb8990b8ca-console-serving-cert\") pod \"8385f150-5088-461e-b9a7-05eb8990b8ca\" (UID: \"8385f150-5088-461e-b9a7-05eb8990b8ca\") " Nov 29 08:03:26 crc kubenswrapper[4795]: I1129 08:03:26.439795 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqsq4\" (UniqueName: \"kubernetes.io/projected/8385f150-5088-461e-b9a7-05eb8990b8ca-kube-api-access-mqsq4\") pod \"8385f150-5088-461e-b9a7-05eb8990b8ca\" (UID: \"8385f150-5088-461e-b9a7-05eb8990b8ca\") " Nov 29 08:03:26 crc kubenswrapper[4795]: I1129 08:03:26.439832 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/8385f150-5088-461e-b9a7-05eb8990b8ca-oauth-serving-cert\") pod \"8385f150-5088-461e-b9a7-05eb8990b8ca\" (UID: \"8385f150-5088-461e-b9a7-05eb8990b8ca\") " Nov 29 08:03:26 crc kubenswrapper[4795]: I1129 08:03:26.439960 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8385f150-5088-461e-b9a7-05eb8990b8ca-service-ca\") pod \"8385f150-5088-461e-b9a7-05eb8990b8ca\" (UID: \"8385f150-5088-461e-b9a7-05eb8990b8ca\") " Nov 29 08:03:26 crc kubenswrapper[4795]: I1129 08:03:26.440029 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8385f150-5088-461e-b9a7-05eb8990b8ca-console-config\") pod \"8385f150-5088-461e-b9a7-05eb8990b8ca\" (UID: \"8385f150-5088-461e-b9a7-05eb8990b8ca\") " Nov 29 08:03:26 crc kubenswrapper[4795]: I1129 08:03:26.440126 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8385f150-5088-461e-b9a7-05eb8990b8ca-trusted-ca-bundle\") pod \"8385f150-5088-461e-b9a7-05eb8990b8ca\" (UID: \"8385f150-5088-461e-b9a7-05eb8990b8ca\") " Nov 29 08:03:26 crc kubenswrapper[4795]: I1129 08:03:26.440935 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8385f150-5088-461e-b9a7-05eb8990b8ca-console-config" (OuterVolumeSpecName: "console-config") pod "8385f150-5088-461e-b9a7-05eb8990b8ca" (UID: "8385f150-5088-461e-b9a7-05eb8990b8ca"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:03:26 crc kubenswrapper[4795]: I1129 08:03:26.440971 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8385f150-5088-461e-b9a7-05eb8990b8ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "8385f150-5088-461e-b9a7-05eb8990b8ca" (UID: "8385f150-5088-461e-b9a7-05eb8990b8ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:03:26 crc kubenswrapper[4795]: I1129 08:03:26.441892 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8385f150-5088-461e-b9a7-05eb8990b8ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "8385f150-5088-461e-b9a7-05eb8990b8ca" (UID: "8385f150-5088-461e-b9a7-05eb8990b8ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:03:26 crc kubenswrapper[4795]: I1129 08:03:26.442500 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8385f150-5088-461e-b9a7-05eb8990b8ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "8385f150-5088-461e-b9a7-05eb8990b8ca" (UID: "8385f150-5088-461e-b9a7-05eb8990b8ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:03:26 crc kubenswrapper[4795]: I1129 08:03:26.445153 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8385f150-5088-461e-b9a7-05eb8990b8ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "8385f150-5088-461e-b9a7-05eb8990b8ca" (UID: "8385f150-5088-461e-b9a7-05eb8990b8ca"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:03:26 crc kubenswrapper[4795]: I1129 08:03:26.445346 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8385f150-5088-461e-b9a7-05eb8990b8ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "8385f150-5088-461e-b9a7-05eb8990b8ca" (UID: "8385f150-5088-461e-b9a7-05eb8990b8ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:03:26 crc kubenswrapper[4795]: I1129 08:03:26.445626 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8385f150-5088-461e-b9a7-05eb8990b8ca-kube-api-access-mqsq4" (OuterVolumeSpecName: "kube-api-access-mqsq4") pod "8385f150-5088-461e-b9a7-05eb8990b8ca" (UID: "8385f150-5088-461e-b9a7-05eb8990b8ca"). InnerVolumeSpecName "kube-api-access-mqsq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:03:26 crc kubenswrapper[4795]: I1129 08:03:26.452326 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a2fd879b-6f46-437e-acf0-c60e879af239","Type":"ContainerStarted","Data":"fb449b16c2ad59d076aeb737610095994330cd5aa85eb55c37174b5333ee10af"} Nov 29 08:03:26 crc kubenswrapper[4795]: I1129 08:03:26.455204 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ww42q" event={"ID":"60a004a5-f226-49aa-b9e7-12a384ddece6","Type":"ContainerStarted","Data":"6daad0efe68604bf3164e74b8a7b68ea18664e063af9657742a01a3eab4a05b6"} Nov 29 08:03:26 crc kubenswrapper[4795]: I1129 08:03:26.457967 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f5d67bfff-wl4rm_8385f150-5088-461e-b9a7-05eb8990b8ca/console/0.log" Nov 29 08:03:26 crc kubenswrapper[4795]: I1129 08:03:26.458071 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f5d67bfff-wl4rm" 
event={"ID":"8385f150-5088-461e-b9a7-05eb8990b8ca","Type":"ContainerDied","Data":"a501e1dc4ab408658b15b94a0dc0be17b862193c5a680407dfe31b95e3f4f500"} Nov 29 08:03:26 crc kubenswrapper[4795]: I1129 08:03:26.458116 4795 scope.go:117] "RemoveContainer" containerID="ffcaa5f34d5d0231400dd2ee0d11e99540dddc52064abbbcfb8afbd420993e01" Nov 29 08:03:26 crc kubenswrapper[4795]: I1129 08:03:26.458117 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f5d67bfff-wl4rm" Nov 29 08:03:26 crc kubenswrapper[4795]: I1129 08:03:26.481884 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371971.372915 podStartE2EDuration="1m5.481859866s" podCreationTimestamp="2025-11-29 08:02:21 +0000 UTC" firstStartedPulling="2025-11-29 08:02:24.188061495 +0000 UTC m=+1390.163637285" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:03:26.472790328 +0000 UTC m=+1452.448366118" watchObservedRunningTime="2025-11-29 08:03:26.481859866 +0000 UTC m=+1452.457435656" Nov 29 08:03:26 crc kubenswrapper[4795]: I1129 08:03:26.507926 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-ww42q" podStartSLOduration=2.26458723 podStartE2EDuration="8.507900086s" podCreationTimestamp="2025-11-29 08:03:18 +0000 UTC" firstStartedPulling="2025-11-29 08:03:19.767029792 +0000 UTC m=+1445.742605582" lastFinishedPulling="2025-11-29 08:03:26.010342648 +0000 UTC m=+1451.985918438" observedRunningTime="2025-11-29 08:03:26.497015247 +0000 UTC m=+1452.472591037" watchObservedRunningTime="2025-11-29 08:03:26.507900086 +0000 UTC m=+1452.483475876" Nov 29 08:03:26 crc kubenswrapper[4795]: I1129 08:03:26.518910 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f5d67bfff-wl4rm"] Nov 29 08:03:26 crc kubenswrapper[4795]: I1129 08:03:26.529009 4795 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-console/console-f5d67bfff-wl4rm"] Nov 29 08:03:26 crc kubenswrapper[4795]: I1129 08:03:26.543426 4795 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8385f150-5088-461e-b9a7-05eb8990b8ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:26 crc kubenswrapper[4795]: I1129 08:03:26.543469 4795 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8385f150-5088-461e-b9a7-05eb8990b8ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:26 crc kubenswrapper[4795]: I1129 08:03:26.543482 4795 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8385f150-5088-461e-b9a7-05eb8990b8ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:26 crc kubenswrapper[4795]: I1129 08:03:26.543495 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqsq4\" (UniqueName: \"kubernetes.io/projected/8385f150-5088-461e-b9a7-05eb8990b8ca-kube-api-access-mqsq4\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:26 crc kubenswrapper[4795]: I1129 08:03:26.543507 4795 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8385f150-5088-461e-b9a7-05eb8990b8ca-service-ca\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:26 crc kubenswrapper[4795]: I1129 08:03:26.543521 4795 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8385f150-5088-461e-b9a7-05eb8990b8ca-console-config\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:26 crc kubenswrapper[4795]: I1129 08:03:26.543534 4795 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8385f150-5088-461e-b9a7-05eb8990b8ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 
29 08:03:27 crc kubenswrapper[4795]: I1129 08:03:27.359904 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-24d2f" Nov 29 08:03:27 crc kubenswrapper[4795]: I1129 08:03:27.426900 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-w6vqh"] Nov 29 08:03:27 crc kubenswrapper[4795]: I1129 08:03:27.427252 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-w6vqh" podUID="f0f88b89-3636-4ee0-aa86-efde063b436c" containerName="dnsmasq-dns" containerID="cri-o://98b41368382e1e2ce54e940b7643a4327e50b8549c81d039d4ab306d0878c119" gracePeriod=10 Nov 29 08:03:28 crc kubenswrapper[4795]: I1129 08:03:28.145629 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-w6vqh" podUID="f0f88b89-3636-4ee0-aa86-efde063b436c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.144:5353: connect: connection refused" Nov 29 08:03:28 crc kubenswrapper[4795]: I1129 08:03:28.299697 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8385f150-5088-461e-b9a7-05eb8990b8ca" path="/var/lib/kubelet/pods/8385f150-5088-461e-b9a7-05eb8990b8ca/volumes" Nov 29 08:03:28 crc kubenswrapper[4795]: I1129 08:03:28.486914 4795 generic.go:334] "Generic (PLEG): container finished" podID="f0f88b89-3636-4ee0-aa86-efde063b436c" containerID="98b41368382e1e2ce54e940b7643a4327e50b8549c81d039d4ab306d0878c119" exitCode=0 Nov 29 08:03:28 crc kubenswrapper[4795]: I1129 08:03:28.486961 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-w6vqh" event={"ID":"f0f88b89-3636-4ee0-aa86-efde063b436c","Type":"ContainerDied","Data":"98b41368382e1e2ce54e940b7643a4327e50b8549c81d039d4ab306d0878c119"} Nov 29 08:03:28 crc kubenswrapper[4795]: I1129 08:03:28.638440 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-w6vqh" Nov 29 08:03:28 crc kubenswrapper[4795]: I1129 08:03:28.789343 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0f88b89-3636-4ee0-aa86-efde063b436c-config\") pod \"f0f88b89-3636-4ee0-aa86-efde063b436c\" (UID: \"f0f88b89-3636-4ee0-aa86-efde063b436c\") " Nov 29 08:03:28 crc kubenswrapper[4795]: I1129 08:03:28.789652 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0f88b89-3636-4ee0-aa86-efde063b436c-ovsdbserver-sb\") pod \"f0f88b89-3636-4ee0-aa86-efde063b436c\" (UID: \"f0f88b89-3636-4ee0-aa86-efde063b436c\") " Nov 29 08:03:28 crc kubenswrapper[4795]: I1129 08:03:28.789731 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0f88b89-3636-4ee0-aa86-efde063b436c-dns-svc\") pod \"f0f88b89-3636-4ee0-aa86-efde063b436c\" (UID: \"f0f88b89-3636-4ee0-aa86-efde063b436c\") " Nov 29 08:03:28 crc kubenswrapper[4795]: I1129 08:03:28.789780 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft8jj\" (UniqueName: \"kubernetes.io/projected/f0f88b89-3636-4ee0-aa86-efde063b436c-kube-api-access-ft8jj\") pod \"f0f88b89-3636-4ee0-aa86-efde063b436c\" (UID: \"f0f88b89-3636-4ee0-aa86-efde063b436c\") " Nov 29 08:03:28 crc kubenswrapper[4795]: I1129 08:03:28.789840 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0f88b89-3636-4ee0-aa86-efde063b436c-ovsdbserver-nb\") pod \"f0f88b89-3636-4ee0-aa86-efde063b436c\" (UID: \"f0f88b89-3636-4ee0-aa86-efde063b436c\") " Nov 29 08:03:28 crc kubenswrapper[4795]: I1129 08:03:28.801785 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/f0f88b89-3636-4ee0-aa86-efde063b436c-kube-api-access-ft8jj" (OuterVolumeSpecName: "kube-api-access-ft8jj") pod "f0f88b89-3636-4ee0-aa86-efde063b436c" (UID: "f0f88b89-3636-4ee0-aa86-efde063b436c"). InnerVolumeSpecName "kube-api-access-ft8jj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:03:28 crc kubenswrapper[4795]: I1129 08:03:28.841006 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0f88b89-3636-4ee0-aa86-efde063b436c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f0f88b89-3636-4ee0-aa86-efde063b436c" (UID: "f0f88b89-3636-4ee0-aa86-efde063b436c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:03:28 crc kubenswrapper[4795]: I1129 08:03:28.848315 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0f88b89-3636-4ee0-aa86-efde063b436c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f0f88b89-3636-4ee0-aa86-efde063b436c" (UID: "f0f88b89-3636-4ee0-aa86-efde063b436c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:03:28 crc kubenswrapper[4795]: I1129 08:03:28.854714 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0f88b89-3636-4ee0-aa86-efde063b436c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f0f88b89-3636-4ee0-aa86-efde063b436c" (UID: "f0f88b89-3636-4ee0-aa86-efde063b436c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:03:28 crc kubenswrapper[4795]: I1129 08:03:28.869903 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0f88b89-3636-4ee0-aa86-efde063b436c-config" (OuterVolumeSpecName: "config") pod "f0f88b89-3636-4ee0-aa86-efde063b436c" (UID: "f0f88b89-3636-4ee0-aa86-efde063b436c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:03:28 crc kubenswrapper[4795]: I1129 08:03:28.892808 4795 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0f88b89-3636-4ee0-aa86-efde063b436c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:28 crc kubenswrapper[4795]: I1129 08:03:28.892851 4795 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0f88b89-3636-4ee0-aa86-efde063b436c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:28 crc kubenswrapper[4795]: I1129 08:03:28.892863 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ft8jj\" (UniqueName: \"kubernetes.io/projected/f0f88b89-3636-4ee0-aa86-efde063b436c-kube-api-access-ft8jj\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:28 crc kubenswrapper[4795]: I1129 08:03:28.892877 4795 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0f88b89-3636-4ee0-aa86-efde063b436c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:28 crc kubenswrapper[4795]: I1129 08:03:28.892895 4795 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0f88b89-3636-4ee0-aa86-efde063b436c-config\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:29 crc kubenswrapper[4795]: I1129 08:03:29.497117 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-w6vqh" event={"ID":"f0f88b89-3636-4ee0-aa86-efde063b436c","Type":"ContainerDied","Data":"a89bdfb4df9d950a0df410e2bbde2be8a8bfd93242b165ad6151d40e8cb4cf5b"} Nov 29 08:03:29 crc kubenswrapper[4795]: I1129 08:03:29.497466 4795 scope.go:117] "RemoveContainer" containerID="98b41368382e1e2ce54e940b7643a4327e50b8549c81d039d4ab306d0878c119" Nov 29 08:03:29 crc kubenswrapper[4795]: I1129 08:03:29.497646 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-w6vqh" Nov 29 08:03:29 crc kubenswrapper[4795]: I1129 08:03:29.529879 4795 scope.go:117] "RemoveContainer" containerID="b55780aa8cf39df5e025a188977dd1e51851207bd7065b6abcbe109a9c70a483" Nov 29 08:03:29 crc kubenswrapper[4795]: I1129 08:03:29.533907 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-w6vqh"] Nov 29 08:03:29 crc kubenswrapper[4795]: I1129 08:03:29.544101 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-w6vqh"] Nov 29 08:03:30 crc kubenswrapper[4795]: I1129 08:03:30.289821 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0f88b89-3636-4ee0-aa86-efde063b436c" path="/var/lib/kubelet/pods/f0f88b89-3636-4ee0-aa86-efde063b436c/volumes" Nov 29 08:03:30 crc kubenswrapper[4795]: I1129 08:03:30.507229 4795 generic.go:334] "Generic (PLEG): container finished" podID="063373b7-6898-409d-b792-d770a8f6f021" containerID="10ac0e6c9dea581ab810775fab5fa9d47c5e63e062410134dcdb0d1e701ee7d8" exitCode=0 Nov 29 08:03:30 crc kubenswrapper[4795]: I1129 08:03:30.507346 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"063373b7-6898-409d-b792-d770a8f6f021","Type":"ContainerDied","Data":"10ac0e6c9dea581ab810775fab5fa9d47c5e63e062410134dcdb0d1e701ee7d8"} Nov 29 08:03:31 crc kubenswrapper[4795]: I1129 08:03:31.631723 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-r5w67" podUID="77e980be-cb41-448f-96d7-0c99fec4d400" containerName="ovn-controller" probeResult="failure" output=< Nov 29 08:03:31 crc kubenswrapper[4795]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 29 08:03:31 crc kubenswrapper[4795]: > Nov 29 08:03:33 crc kubenswrapper[4795]: I1129 08:03:33.262321 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" 
Nov 29 08:03:33 crc kubenswrapper[4795]: I1129 08:03:33.262874 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 29 08:03:33 crc kubenswrapper[4795]: I1129 08:03:33.350190 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 29 08:03:33 crc kubenswrapper[4795]: I1129 08:03:33.621686 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.013238 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/28437f9f-e92e-46d7-9ffb-fcdda5dea25e-etc-swift\") pod \"swift-storage-0\" (UID: \"28437f9f-e92e-46d7-9ffb-fcdda5dea25e\") " pod="openstack/swift-storage-0" Nov 29 08:03:34 crc kubenswrapper[4795]: E1129 08:03:34.013606 4795 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 29 08:03:34 crc kubenswrapper[4795]: E1129 08:03:34.013631 4795 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 29 08:03:34 crc kubenswrapper[4795]: E1129 08:03:34.013693 4795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/28437f9f-e92e-46d7-9ffb-fcdda5dea25e-etc-swift podName:28437f9f-e92e-46d7-9ffb-fcdda5dea25e nodeName:}" failed. No retries permitted until 2025-11-29 08:03:50.013675251 +0000 UTC m=+1475.989251041 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/28437f9f-e92e-46d7-9ffb-fcdda5dea25e-etc-swift") pod "swift-storage-0" (UID: "28437f9f-e92e-46d7-9ffb-fcdda5dea25e") : configmap "swift-ring-files" not found Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.390295 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-4s8c4"] Nov 29 08:03:34 crc kubenswrapper[4795]: E1129 08:03:34.390761 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8385f150-5088-461e-b9a7-05eb8990b8ca" containerName="console" Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.390779 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="8385f150-5088-461e-b9a7-05eb8990b8ca" containerName="console" Nov 29 08:03:34 crc kubenswrapper[4795]: E1129 08:03:34.390799 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0f88b89-3636-4ee0-aa86-efde063b436c" containerName="init" Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.390805 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0f88b89-3636-4ee0-aa86-efde063b436c" containerName="init" Nov 29 08:03:34 crc kubenswrapper[4795]: E1129 08:03:34.390819 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="335e92c9-b508-4368-9af8-55dc11bad481" containerName="dnsmasq-dns" Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.390826 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="335e92c9-b508-4368-9af8-55dc11bad481" containerName="dnsmasq-dns" Nov 29 08:03:34 crc kubenswrapper[4795]: E1129 08:03:34.390839 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="335e92c9-b508-4368-9af8-55dc11bad481" containerName="init" Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.390844 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="335e92c9-b508-4368-9af8-55dc11bad481" containerName="init" Nov 29 08:03:34 crc kubenswrapper[4795]: E1129 08:03:34.390858 4795 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0f88b89-3636-4ee0-aa86-efde063b436c" containerName="dnsmasq-dns" Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.390863 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0f88b89-3636-4ee0-aa86-efde063b436c" containerName="dnsmasq-dns" Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.391100 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="8385f150-5088-461e-b9a7-05eb8990b8ca" containerName="console" Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.391127 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="335e92c9-b508-4368-9af8-55dc11bad481" containerName="dnsmasq-dns" Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.391139 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0f88b89-3636-4ee0-aa86-efde063b436c" containerName="dnsmasq-dns" Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.391874 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-4s8c4" Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.404505 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-4s8c4"] Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.423426 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6ef0-account-create-update-7swjc"] Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.425468 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6ef0-account-create-update-7swjc" Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.434193 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.463340 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6ef0-account-create-update-7swjc"] Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.526787 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9llht\" (UniqueName: \"kubernetes.io/projected/2cf81321-88f9-43ee-8356-e0355bc3da52-kube-api-access-9llht\") pod \"keystone-db-create-4s8c4\" (UID: \"2cf81321-88f9-43ee-8356-e0355bc3da52\") " pod="openstack/keystone-db-create-4s8c4" Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.526936 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cf81321-88f9-43ee-8356-e0355bc3da52-operator-scripts\") pod \"keystone-db-create-4s8c4\" (UID: \"2cf81321-88f9-43ee-8356-e0355bc3da52\") " pod="openstack/keystone-db-create-4s8c4" Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.526982 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0a8025f-2b8a-4f5d-9462-09e032e5b6c2-operator-scripts\") pod \"keystone-6ef0-account-create-update-7swjc\" (UID: \"f0a8025f-2b8a-4f5d-9462-09e032e5b6c2\") " pod="openstack/keystone-6ef0-account-create-update-7swjc" Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.527067 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prrkq\" (UniqueName: \"kubernetes.io/projected/f0a8025f-2b8a-4f5d-9462-09e032e5b6c2-kube-api-access-prrkq\") pod 
\"keystone-6ef0-account-create-update-7swjc\" (UID: \"f0a8025f-2b8a-4f5d-9462-09e032e5b6c2\") " pod="openstack/keystone-6ef0-account-create-update-7swjc" Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.636706 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-b6t5r"] Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.639867 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0a8025f-2b8a-4f5d-9462-09e032e5b6c2-operator-scripts\") pod \"keystone-6ef0-account-create-update-7swjc\" (UID: \"f0a8025f-2b8a-4f5d-9462-09e032e5b6c2\") " pod="openstack/keystone-6ef0-account-create-update-7swjc" Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.640253 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prrkq\" (UniqueName: \"kubernetes.io/projected/f0a8025f-2b8a-4f5d-9462-09e032e5b6c2-kube-api-access-prrkq\") pod \"keystone-6ef0-account-create-update-7swjc\" (UID: \"f0a8025f-2b8a-4f5d-9462-09e032e5b6c2\") " pod="openstack/keystone-6ef0-account-create-update-7swjc" Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.641420 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-b6t5r" Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.642233 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0a8025f-2b8a-4f5d-9462-09e032e5b6c2-operator-scripts\") pod \"keystone-6ef0-account-create-update-7swjc\" (UID: \"f0a8025f-2b8a-4f5d-9462-09e032e5b6c2\") " pod="openstack/keystone-6ef0-account-create-update-7swjc" Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.642504 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9llht\" (UniqueName: \"kubernetes.io/projected/2cf81321-88f9-43ee-8356-e0355bc3da52-kube-api-access-9llht\") pod \"keystone-db-create-4s8c4\" (UID: \"2cf81321-88f9-43ee-8356-e0355bc3da52\") " pod="openstack/keystone-db-create-4s8c4" Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.642770 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cf81321-88f9-43ee-8356-e0355bc3da52-operator-scripts\") pod \"keystone-db-create-4s8c4\" (UID: \"2cf81321-88f9-43ee-8356-e0355bc3da52\") " pod="openstack/keystone-db-create-4s8c4" Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.643698 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cf81321-88f9-43ee-8356-e0355bc3da52-operator-scripts\") pod \"keystone-db-create-4s8c4\" (UID: \"2cf81321-88f9-43ee-8356-e0355bc3da52\") " pod="openstack/keystone-db-create-4s8c4" Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.648512 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-b6t5r"] Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.669344 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9llht\" (UniqueName: 
\"kubernetes.io/projected/2cf81321-88f9-43ee-8356-e0355bc3da52-kube-api-access-9llht\") pod \"keystone-db-create-4s8c4\" (UID: \"2cf81321-88f9-43ee-8356-e0355bc3da52\") " pod="openstack/keystone-db-create-4s8c4" Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.674855 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prrkq\" (UniqueName: \"kubernetes.io/projected/f0a8025f-2b8a-4f5d-9462-09e032e5b6c2-kube-api-access-prrkq\") pod \"keystone-6ef0-account-create-update-7swjc\" (UID: \"f0a8025f-2b8a-4f5d-9462-09e032e5b6c2\") " pod="openstack/keystone-6ef0-account-create-update-7swjc" Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.715198 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-4s8c4" Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.726635 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-4710-account-create-update-469z8"] Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.728578 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-4710-account-create-update-469z8" Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.736165 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.748764 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9tmf\" (UniqueName: \"kubernetes.io/projected/5e6def42-cee6-477c-940a-1bb6e20df694-kube-api-access-d9tmf\") pod \"placement-db-create-b6t5r\" (UID: \"5e6def42-cee6-477c-940a-1bb6e20df694\") " pod="openstack/placement-db-create-b6t5r" Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.748855 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e6def42-cee6-477c-940a-1bb6e20df694-operator-scripts\") pod \"placement-db-create-b6t5r\" (UID: \"5e6def42-cee6-477c-940a-1bb6e20df694\") " pod="openstack/placement-db-create-b6t5r" Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.750978 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6ef0-account-create-update-7swjc" Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.752698 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4710-account-create-update-469z8"] Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.851118 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4lf7\" (UniqueName: \"kubernetes.io/projected/696f8b00-2104-47ea-bc5c-0e317dd00de1-kube-api-access-w4lf7\") pod \"placement-4710-account-create-update-469z8\" (UID: \"696f8b00-2104-47ea-bc5c-0e317dd00de1\") " pod="openstack/placement-4710-account-create-update-469z8" Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.851184 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9tmf\" (UniqueName: \"kubernetes.io/projected/5e6def42-cee6-477c-940a-1bb6e20df694-kube-api-access-d9tmf\") pod \"placement-db-create-b6t5r\" (UID: \"5e6def42-cee6-477c-940a-1bb6e20df694\") " pod="openstack/placement-db-create-b6t5r" Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.851370 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e6def42-cee6-477c-940a-1bb6e20df694-operator-scripts\") pod \"placement-db-create-b6t5r\" (UID: \"5e6def42-cee6-477c-940a-1bb6e20df694\") " pod="openstack/placement-db-create-b6t5r" Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.851686 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/696f8b00-2104-47ea-bc5c-0e317dd00de1-operator-scripts\") pod \"placement-4710-account-create-update-469z8\" (UID: \"696f8b00-2104-47ea-bc5c-0e317dd00de1\") " pod="openstack/placement-4710-account-create-update-469z8" Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.852750 
4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e6def42-cee6-477c-940a-1bb6e20df694-operator-scripts\") pod \"placement-db-create-b6t5r\" (UID: \"5e6def42-cee6-477c-940a-1bb6e20df694\") " pod="openstack/placement-db-create-b6t5r" Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.876091 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9tmf\" (UniqueName: \"kubernetes.io/projected/5e6def42-cee6-477c-940a-1bb6e20df694-kube-api-access-d9tmf\") pod \"placement-db-create-b6t5r\" (UID: \"5e6def42-cee6-477c-940a-1bb6e20df694\") " pod="openstack/placement-db-create-b6t5r" Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.955494 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4lf7\" (UniqueName: \"kubernetes.io/projected/696f8b00-2104-47ea-bc5c-0e317dd00de1-kube-api-access-w4lf7\") pod \"placement-4710-account-create-update-469z8\" (UID: \"696f8b00-2104-47ea-bc5c-0e317dd00de1\") " pod="openstack/placement-4710-account-create-update-469z8" Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.956192 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/696f8b00-2104-47ea-bc5c-0e317dd00de1-operator-scripts\") pod \"placement-4710-account-create-update-469z8\" (UID: \"696f8b00-2104-47ea-bc5c-0e317dd00de1\") " pod="openstack/placement-4710-account-create-update-469z8" Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.957368 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/696f8b00-2104-47ea-bc5c-0e317dd00de1-operator-scripts\") pod \"placement-4710-account-create-update-469z8\" (UID: \"696f8b00-2104-47ea-bc5c-0e317dd00de1\") " pod="openstack/placement-4710-account-create-update-469z8" Nov 29 08:03:34 crc 
kubenswrapper[4795]: I1129 08:03:34.970129 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-b6t5r" Nov 29 08:03:34 crc kubenswrapper[4795]: I1129 08:03:34.988159 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4lf7\" (UniqueName: \"kubernetes.io/projected/696f8b00-2104-47ea-bc5c-0e317dd00de1-kube-api-access-w4lf7\") pod \"placement-4710-account-create-update-469z8\" (UID: \"696f8b00-2104-47ea-bc5c-0e317dd00de1\") " pod="openstack/placement-4710-account-create-update-469z8" Nov 29 08:03:35 crc kubenswrapper[4795]: I1129 08:03:35.002639 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4710-account-create-update-469z8" Nov 29 08:03:35 crc kubenswrapper[4795]: I1129 08:03:35.357961 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6ef0-account-create-update-7swjc"] Nov 29 08:03:35 crc kubenswrapper[4795]: I1129 08:03:35.482980 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-4s8c4"] Nov 29 08:03:35 crc kubenswrapper[4795]: I1129 08:03:35.564791 4795 generic.go:334] "Generic (PLEG): container finished" podID="74169c45-99e0-4179-a18b-07a1c2cade8b" containerID="bf11edd6d9ce5ac04733b56bab1255c59a0c0bb035e4e46f3a8d314f0c4f8633" exitCode=0 Nov 29 08:03:35 crc kubenswrapper[4795]: I1129 08:03:35.564870 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"74169c45-99e0-4179-a18b-07a1c2cade8b","Type":"ContainerDied","Data":"bf11edd6d9ce5ac04733b56bab1255c59a0c0bb035e4e46f3a8d314f0c4f8633"} Nov 29 08:03:36 crc kubenswrapper[4795]: I1129 08:03:36.582292 4795 generic.go:334] "Generic (PLEG): container finished" podID="60a004a5-f226-49aa-b9e7-12a384ddece6" containerID="6daad0efe68604bf3164e74b8a7b68ea18664e063af9657742a01a3eab4a05b6" exitCode=0 Nov 29 08:03:36 crc kubenswrapper[4795]: I1129 08:03:36.582346 4795 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ww42q" event={"ID":"60a004a5-f226-49aa-b9e7-12a384ddece6","Type":"ContainerDied","Data":"6daad0efe68604bf3164e74b8a7b68ea18664e063af9657742a01a3eab4a05b6"} Nov 29 08:03:36 crc kubenswrapper[4795]: I1129 08:03:36.584526 4795 generic.go:334] "Generic (PLEG): container finished" podID="28c13f65-78c4-4d4a-8960-7ef17a4c93e8" containerID="66c2994154c169b7826b57d071eca33a6effd691ef0e54ad3484e4595c873484" exitCode=0 Nov 29 08:03:36 crc kubenswrapper[4795]: I1129 08:03:36.584570 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"28c13f65-78c4-4d4a-8960-7ef17a4c93e8","Type":"ContainerDied","Data":"66c2994154c169b7826b57d071eca33a6effd691ef0e54ad3484e4595c873484"} Nov 29 08:03:36 crc kubenswrapper[4795]: I1129 08:03:36.642779 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-r5w67" podUID="77e980be-cb41-448f-96d7-0c99fec4d400" containerName="ovn-controller" probeResult="failure" output=< Nov 29 08:03:36 crc kubenswrapper[4795]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 29 08:03:36 crc kubenswrapper[4795]: > Nov 29 08:03:36 crc kubenswrapper[4795]: I1129 08:03:36.669843 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-fjltq" Nov 29 08:03:36 crc kubenswrapper[4795]: I1129 08:03:36.679342 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-fjltq" Nov 29 08:03:36 crc kubenswrapper[4795]: I1129 08:03:36.924737 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-r5w67-config-rzsdn"] Nov 29 08:03:36 crc kubenswrapper[4795]: I1129 08:03:36.926955 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-r5w67-config-rzsdn" Nov 29 08:03:36 crc kubenswrapper[4795]: I1129 08:03:36.930638 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 29 08:03:36 crc kubenswrapper[4795]: I1129 08:03:36.939634 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-r5w67-config-rzsdn"] Nov 29 08:03:36 crc kubenswrapper[4795]: I1129 08:03:36.975647 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-f7j9q"] Nov 29 08:03:36 crc kubenswrapper[4795]: I1129 08:03:36.977201 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-f7j9q" Nov 29 08:03:37 crc kubenswrapper[4795]: I1129 08:03:37.042414 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/48a8478e-b079-46a5-b3f6-c60ac90b7bce-additional-scripts\") pod \"ovn-controller-r5w67-config-rzsdn\" (UID: \"48a8478e-b079-46a5-b3f6-c60ac90b7bce\") " pod="openstack/ovn-controller-r5w67-config-rzsdn" Nov 29 08:03:37 crc kubenswrapper[4795]: I1129 08:03:37.042553 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48a8478e-b079-46a5-b3f6-c60ac90b7bce-scripts\") pod \"ovn-controller-r5w67-config-rzsdn\" (UID: \"48a8478e-b079-46a5-b3f6-c60ac90b7bce\") " pod="openstack/ovn-controller-r5w67-config-rzsdn" Nov 29 08:03:37 crc kubenswrapper[4795]: I1129 08:03:37.042627 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/48a8478e-b079-46a5-b3f6-c60ac90b7bce-var-run\") pod \"ovn-controller-r5w67-config-rzsdn\" (UID: \"48a8478e-b079-46a5-b3f6-c60ac90b7bce\") " 
pod="openstack/ovn-controller-r5w67-config-rzsdn" Nov 29 08:03:37 crc kubenswrapper[4795]: I1129 08:03:37.042819 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9xg8\" (UniqueName: \"kubernetes.io/projected/48a8478e-b079-46a5-b3f6-c60ac90b7bce-kube-api-access-k9xg8\") pod \"ovn-controller-r5w67-config-rzsdn\" (UID: \"48a8478e-b079-46a5-b3f6-c60ac90b7bce\") " pod="openstack/ovn-controller-r5w67-config-rzsdn" Nov 29 08:03:37 crc kubenswrapper[4795]: I1129 08:03:37.043197 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/48a8478e-b079-46a5-b3f6-c60ac90b7bce-var-run-ovn\") pod \"ovn-controller-r5w67-config-rzsdn\" (UID: \"48a8478e-b079-46a5-b3f6-c60ac90b7bce\") " pod="openstack/ovn-controller-r5w67-config-rzsdn" Nov 29 08:03:37 crc kubenswrapper[4795]: I1129 08:03:37.044921 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/48a8478e-b079-46a5-b3f6-c60ac90b7bce-var-log-ovn\") pod \"ovn-controller-r5w67-config-rzsdn\" (UID: \"48a8478e-b079-46a5-b3f6-c60ac90b7bce\") " pod="openstack/ovn-controller-r5w67-config-rzsdn" Nov 29 08:03:37 crc kubenswrapper[4795]: I1129 08:03:37.051487 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-f7j9q"] Nov 29 08:03:37 crc kubenswrapper[4795]: I1129 08:03:37.095507 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-dff1-account-create-update-qpdmv"] Nov 29 08:03:37 crc kubenswrapper[4795]: I1129 08:03:37.096929 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-dff1-account-create-update-qpdmv" Nov 29 08:03:37 crc kubenswrapper[4795]: I1129 08:03:37.099734 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Nov 29 08:03:37 crc kubenswrapper[4795]: I1129 08:03:37.106914 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-dff1-account-create-update-qpdmv"] Nov 29 08:03:37 crc kubenswrapper[4795]: I1129 08:03:37.147071 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/48a8478e-b079-46a5-b3f6-c60ac90b7bce-var-log-ovn\") pod \"ovn-controller-r5w67-config-rzsdn\" (UID: \"48a8478e-b079-46a5-b3f6-c60ac90b7bce\") " pod="openstack/ovn-controller-r5w67-config-rzsdn" Nov 29 08:03:37 crc kubenswrapper[4795]: I1129 08:03:37.147155 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/48a8478e-b079-46a5-b3f6-c60ac90b7bce-additional-scripts\") pod \"ovn-controller-r5w67-config-rzsdn\" (UID: \"48a8478e-b079-46a5-b3f6-c60ac90b7bce\") " pod="openstack/ovn-controller-r5w67-config-rzsdn" Nov 29 08:03:37 crc kubenswrapper[4795]: I1129 08:03:37.147199 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48a8478e-b079-46a5-b3f6-c60ac90b7bce-scripts\") pod \"ovn-controller-r5w67-config-rzsdn\" (UID: \"48a8478e-b079-46a5-b3f6-c60ac90b7bce\") " pod="openstack/ovn-controller-r5w67-config-rzsdn" Nov 29 08:03:37 crc kubenswrapper[4795]: I1129 08:03:37.147231 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/48a8478e-b079-46a5-b3f6-c60ac90b7bce-var-run\") pod \"ovn-controller-r5w67-config-rzsdn\" (UID: \"48a8478e-b079-46a5-b3f6-c60ac90b7bce\") " 
pod="openstack/ovn-controller-r5w67-config-rzsdn" Nov 29 08:03:37 crc kubenswrapper[4795]: I1129 08:03:37.147283 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9xg8\" (UniqueName: \"kubernetes.io/projected/48a8478e-b079-46a5-b3f6-c60ac90b7bce-kube-api-access-k9xg8\") pod \"ovn-controller-r5w67-config-rzsdn\" (UID: \"48a8478e-b079-46a5-b3f6-c60ac90b7bce\") " pod="openstack/ovn-controller-r5w67-config-rzsdn" Nov 29 08:03:37 crc kubenswrapper[4795]: I1129 08:03:37.147336 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/48a8478e-b079-46a5-b3f6-c60ac90b7bce-var-run-ovn\") pod \"ovn-controller-r5w67-config-rzsdn\" (UID: \"48a8478e-b079-46a5-b3f6-c60ac90b7bce\") " pod="openstack/ovn-controller-r5w67-config-rzsdn" Nov 29 08:03:37 crc kubenswrapper[4795]: I1129 08:03:37.147369 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvwh8\" (UniqueName: \"kubernetes.io/projected/02097025-8724-4b6a-b817-d8a5d40f2d24-kube-api-access-hvwh8\") pod \"mysqld-exporter-openstack-db-create-f7j9q\" (UID: \"02097025-8724-4b6a-b817-d8a5d40f2d24\") " pod="openstack/mysqld-exporter-openstack-db-create-f7j9q" Nov 29 08:03:37 crc kubenswrapper[4795]: I1129 08:03:37.147419 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02097025-8724-4b6a-b817-d8a5d40f2d24-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-f7j9q\" (UID: \"02097025-8724-4b6a-b817-d8a5d40f2d24\") " pod="openstack/mysqld-exporter-openstack-db-create-f7j9q" Nov 29 08:03:37 crc kubenswrapper[4795]: I1129 08:03:37.147571 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/48a8478e-b079-46a5-b3f6-c60ac90b7bce-var-log-ovn\") pod 
\"ovn-controller-r5w67-config-rzsdn\" (UID: \"48a8478e-b079-46a5-b3f6-c60ac90b7bce\") " pod="openstack/ovn-controller-r5w67-config-rzsdn" Nov 29 08:03:37 crc kubenswrapper[4795]: I1129 08:03:37.148173 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/48a8478e-b079-46a5-b3f6-c60ac90b7bce-additional-scripts\") pod \"ovn-controller-r5w67-config-rzsdn\" (UID: \"48a8478e-b079-46a5-b3f6-c60ac90b7bce\") " pod="openstack/ovn-controller-r5w67-config-rzsdn" Nov 29 08:03:37 crc kubenswrapper[4795]: I1129 08:03:37.148201 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/48a8478e-b079-46a5-b3f6-c60ac90b7bce-var-run\") pod \"ovn-controller-r5w67-config-rzsdn\" (UID: \"48a8478e-b079-46a5-b3f6-c60ac90b7bce\") " pod="openstack/ovn-controller-r5w67-config-rzsdn" Nov 29 08:03:37 crc kubenswrapper[4795]: I1129 08:03:37.148275 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/48a8478e-b079-46a5-b3f6-c60ac90b7bce-var-run-ovn\") pod \"ovn-controller-r5w67-config-rzsdn\" (UID: \"48a8478e-b079-46a5-b3f6-c60ac90b7bce\") " pod="openstack/ovn-controller-r5w67-config-rzsdn" Nov 29 08:03:37 crc kubenswrapper[4795]: I1129 08:03:37.151324 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48a8478e-b079-46a5-b3f6-c60ac90b7bce-scripts\") pod \"ovn-controller-r5w67-config-rzsdn\" (UID: \"48a8478e-b079-46a5-b3f6-c60ac90b7bce\") " pod="openstack/ovn-controller-r5w67-config-rzsdn" Nov 29 08:03:37 crc kubenswrapper[4795]: I1129 08:03:37.179962 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9xg8\" (UniqueName: \"kubernetes.io/projected/48a8478e-b079-46a5-b3f6-c60ac90b7bce-kube-api-access-k9xg8\") pod \"ovn-controller-r5w67-config-rzsdn\" (UID: 
\"48a8478e-b079-46a5-b3f6-c60ac90b7bce\") " pod="openstack/ovn-controller-r5w67-config-rzsdn" Nov 29 08:03:37 crc kubenswrapper[4795]: I1129 08:03:37.249744 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvwh8\" (UniqueName: \"kubernetes.io/projected/02097025-8724-4b6a-b817-d8a5d40f2d24-kube-api-access-hvwh8\") pod \"mysqld-exporter-openstack-db-create-f7j9q\" (UID: \"02097025-8724-4b6a-b817-d8a5d40f2d24\") " pod="openstack/mysqld-exporter-openstack-db-create-f7j9q" Nov 29 08:03:37 crc kubenswrapper[4795]: I1129 08:03:37.249833 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02097025-8724-4b6a-b817-d8a5d40f2d24-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-f7j9q\" (UID: \"02097025-8724-4b6a-b817-d8a5d40f2d24\") " pod="openstack/mysqld-exporter-openstack-db-create-f7j9q" Nov 29 08:03:37 crc kubenswrapper[4795]: I1129 08:03:37.249888 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqj8s\" (UniqueName: \"kubernetes.io/projected/ff621b92-0616-4d28-abd8-b18c17c6990e-kube-api-access-gqj8s\") pod \"mysqld-exporter-dff1-account-create-update-qpdmv\" (UID: \"ff621b92-0616-4d28-abd8-b18c17c6990e\") " pod="openstack/mysqld-exporter-dff1-account-create-update-qpdmv" Nov 29 08:03:37 crc kubenswrapper[4795]: I1129 08:03:37.249925 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff621b92-0616-4d28-abd8-b18c17c6990e-operator-scripts\") pod \"mysqld-exporter-dff1-account-create-update-qpdmv\" (UID: \"ff621b92-0616-4d28-abd8-b18c17c6990e\") " pod="openstack/mysqld-exporter-dff1-account-create-update-qpdmv" Nov 29 08:03:37 crc kubenswrapper[4795]: I1129 08:03:37.252325 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02097025-8724-4b6a-b817-d8a5d40f2d24-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-f7j9q\" (UID: \"02097025-8724-4b6a-b817-d8a5d40f2d24\") " pod="openstack/mysqld-exporter-openstack-db-create-f7j9q" Nov 29 08:03:37 crc kubenswrapper[4795]: I1129 08:03:37.260411 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-r5w67-config-rzsdn" Nov 29 08:03:37 crc kubenswrapper[4795]: I1129 08:03:37.274286 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvwh8\" (UniqueName: \"kubernetes.io/projected/02097025-8724-4b6a-b817-d8a5d40f2d24-kube-api-access-hvwh8\") pod \"mysqld-exporter-openstack-db-create-f7j9q\" (UID: \"02097025-8724-4b6a-b817-d8a5d40f2d24\") " pod="openstack/mysqld-exporter-openstack-db-create-f7j9q" Nov 29 08:03:37 crc kubenswrapper[4795]: I1129 08:03:37.335962 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-f7j9q" Nov 29 08:03:37 crc kubenswrapper[4795]: I1129 08:03:37.351970 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqj8s\" (UniqueName: \"kubernetes.io/projected/ff621b92-0616-4d28-abd8-b18c17c6990e-kube-api-access-gqj8s\") pod \"mysqld-exporter-dff1-account-create-update-qpdmv\" (UID: \"ff621b92-0616-4d28-abd8-b18c17c6990e\") " pod="openstack/mysqld-exporter-dff1-account-create-update-qpdmv" Nov 29 08:03:37 crc kubenswrapper[4795]: I1129 08:03:37.352076 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff621b92-0616-4d28-abd8-b18c17c6990e-operator-scripts\") pod \"mysqld-exporter-dff1-account-create-update-qpdmv\" (UID: \"ff621b92-0616-4d28-abd8-b18c17c6990e\") " pod="openstack/mysqld-exporter-dff1-account-create-update-qpdmv" Nov 29 08:03:37 crc kubenswrapper[4795]: 
I1129 08:03:37.353188 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff621b92-0616-4d28-abd8-b18c17c6990e-operator-scripts\") pod \"mysqld-exporter-dff1-account-create-update-qpdmv\" (UID: \"ff621b92-0616-4d28-abd8-b18c17c6990e\") " pod="openstack/mysqld-exporter-dff1-account-create-update-qpdmv" Nov 29 08:03:37 crc kubenswrapper[4795]: I1129 08:03:37.373142 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqj8s\" (UniqueName: \"kubernetes.io/projected/ff621b92-0616-4d28-abd8-b18c17c6990e-kube-api-access-gqj8s\") pod \"mysqld-exporter-dff1-account-create-update-qpdmv\" (UID: \"ff621b92-0616-4d28-abd8-b18c17c6990e\") " pod="openstack/mysqld-exporter-dff1-account-create-update-qpdmv" Nov 29 08:03:37 crc kubenswrapper[4795]: I1129 08:03:37.416364 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-dff1-account-create-update-qpdmv" Nov 29 08:03:39 crc kubenswrapper[4795]: I1129 08:03:39.973225 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-hld85"] Nov 29 08:03:39 crc kubenswrapper[4795]: I1129 08:03:39.975450 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-hld85" Nov 29 08:03:39 crc kubenswrapper[4795]: I1129 08:03:39.993648 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-hld85"] Nov 29 08:03:40 crc kubenswrapper[4795]: W1129 08:03:40.011350 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0a8025f_2b8a_4f5d_9462_09e032e5b6c2.slice/crio-f63c64a0afa09e408e0b5788d30a4557f66a0bfcf864ed65262a26e397a57086 WatchSource:0}: Error finding container f63c64a0afa09e408e0b5788d30a4557f66a0bfcf864ed65262a26e397a57086: Status 404 returned error can't find the container with id f63c64a0afa09e408e0b5788d30a4557f66a0bfcf864ed65262a26e397a57086 Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.076460 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-346e-account-create-update-cw8zl"] Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.077968 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-346e-account-create-update-cw8zl" Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.082447 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.109438 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-346e-account-create-update-cw8zl"] Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.120273 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c47add8-93ad-456c-8f90-bb854d981a3e-operator-scripts\") pod \"glance-db-create-hld85\" (UID: \"8c47add8-93ad-456c-8f90-bb854d981a3e\") " pod="openstack/glance-db-create-hld85" Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.120604 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqffh\" (UniqueName: \"kubernetes.io/projected/8c47add8-93ad-456c-8f90-bb854d981a3e-kube-api-access-mqffh\") pod \"glance-db-create-hld85\" (UID: \"8c47add8-93ad-456c-8f90-bb854d981a3e\") " pod="openstack/glance-db-create-hld85" Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.223459 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj8wt\" (UniqueName: \"kubernetes.io/projected/e4891454-7144-4416-b2d3-f16e56001077-kube-api-access-qj8wt\") pod \"glance-346e-account-create-update-cw8zl\" (UID: \"e4891454-7144-4416-b2d3-f16e56001077\") " pod="openstack/glance-346e-account-create-update-cw8zl" Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.224017 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c47add8-93ad-456c-8f90-bb854d981a3e-operator-scripts\") pod \"glance-db-create-hld85\" (UID: 
\"8c47add8-93ad-456c-8f90-bb854d981a3e\") " pod="openstack/glance-db-create-hld85" Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.224215 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqffh\" (UniqueName: \"kubernetes.io/projected/8c47add8-93ad-456c-8f90-bb854d981a3e-kube-api-access-mqffh\") pod \"glance-db-create-hld85\" (UID: \"8c47add8-93ad-456c-8f90-bb854d981a3e\") " pod="openstack/glance-db-create-hld85" Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.224272 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4891454-7144-4416-b2d3-f16e56001077-operator-scripts\") pod \"glance-346e-account-create-update-cw8zl\" (UID: \"e4891454-7144-4416-b2d3-f16e56001077\") " pod="openstack/glance-346e-account-create-update-cw8zl" Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.225505 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c47add8-93ad-456c-8f90-bb854d981a3e-operator-scripts\") pod \"glance-db-create-hld85\" (UID: \"8c47add8-93ad-456c-8f90-bb854d981a3e\") " pod="openstack/glance-db-create-hld85" Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.251701 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqffh\" (UniqueName: \"kubernetes.io/projected/8c47add8-93ad-456c-8f90-bb854d981a3e-kube-api-access-mqffh\") pod \"glance-db-create-hld85\" (UID: \"8c47add8-93ad-456c-8f90-bb854d981a3e\") " pod="openstack/glance-db-create-hld85" Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.329874 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-ww42q" Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.337402 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj8wt\" (UniqueName: \"kubernetes.io/projected/e4891454-7144-4416-b2d3-f16e56001077-kube-api-access-qj8wt\") pod \"glance-346e-account-create-update-cw8zl\" (UID: \"e4891454-7144-4416-b2d3-f16e56001077\") " pod="openstack/glance-346e-account-create-update-cw8zl" Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.337852 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4891454-7144-4416-b2d3-f16e56001077-operator-scripts\") pod \"glance-346e-account-create-update-cw8zl\" (UID: \"e4891454-7144-4416-b2d3-f16e56001077\") " pod="openstack/glance-346e-account-create-update-cw8zl" Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.338169 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-hld85" Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.340061 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4891454-7144-4416-b2d3-f16e56001077-operator-scripts\") pod \"glance-346e-account-create-update-cw8zl\" (UID: \"e4891454-7144-4416-b2d3-f16e56001077\") " pod="openstack/glance-346e-account-create-update-cw8zl" Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.360236 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj8wt\" (UniqueName: \"kubernetes.io/projected/e4891454-7144-4416-b2d3-f16e56001077-kube-api-access-qj8wt\") pod \"glance-346e-account-create-update-cw8zl\" (UID: \"e4891454-7144-4416-b2d3-f16e56001077\") " pod="openstack/glance-346e-account-create-update-cw8zl" Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.362830 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-346e-account-create-update-cw8zl" Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.440038 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/60a004a5-f226-49aa-b9e7-12a384ddece6-ring-data-devices\") pod \"60a004a5-f226-49aa-b9e7-12a384ddece6\" (UID: \"60a004a5-f226-49aa-b9e7-12a384ddece6\") " Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.440204 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/60a004a5-f226-49aa-b9e7-12a384ddece6-etc-swift\") pod \"60a004a5-f226-49aa-b9e7-12a384ddece6\" (UID: \"60a004a5-f226-49aa-b9e7-12a384ddece6\") " Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.440239 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fx942\" (UniqueName: \"kubernetes.io/projected/60a004a5-f226-49aa-b9e7-12a384ddece6-kube-api-access-fx942\") pod \"60a004a5-f226-49aa-b9e7-12a384ddece6\" (UID: \"60a004a5-f226-49aa-b9e7-12a384ddece6\") " Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.440260 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60a004a5-f226-49aa-b9e7-12a384ddece6-combined-ca-bundle\") pod \"60a004a5-f226-49aa-b9e7-12a384ddece6\" (UID: \"60a004a5-f226-49aa-b9e7-12a384ddece6\") " Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.440292 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/60a004a5-f226-49aa-b9e7-12a384ddece6-dispersionconf\") pod \"60a004a5-f226-49aa-b9e7-12a384ddece6\" (UID: \"60a004a5-f226-49aa-b9e7-12a384ddece6\") " Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.440369 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/60a004a5-f226-49aa-b9e7-12a384ddece6-scripts\") pod \"60a004a5-f226-49aa-b9e7-12a384ddece6\" (UID: \"60a004a5-f226-49aa-b9e7-12a384ddece6\") " Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.440440 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/60a004a5-f226-49aa-b9e7-12a384ddece6-swiftconf\") pod \"60a004a5-f226-49aa-b9e7-12a384ddece6\" (UID: \"60a004a5-f226-49aa-b9e7-12a384ddece6\") " Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.442224 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60a004a5-f226-49aa-b9e7-12a384ddece6-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "60a004a5-f226-49aa-b9e7-12a384ddece6" (UID: "60a004a5-f226-49aa-b9e7-12a384ddece6"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.442548 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60a004a5-f226-49aa-b9e7-12a384ddece6-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "60a004a5-f226-49aa-b9e7-12a384ddece6" (UID: "60a004a5-f226-49aa-b9e7-12a384ddece6"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.446237 4795 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/60a004a5-f226-49aa-b9e7-12a384ddece6-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.446267 4795 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/60a004a5-f226-49aa-b9e7-12a384ddece6-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.449902 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60a004a5-f226-49aa-b9e7-12a384ddece6-kube-api-access-fx942" (OuterVolumeSpecName: "kube-api-access-fx942") pod "60a004a5-f226-49aa-b9e7-12a384ddece6" (UID: "60a004a5-f226-49aa-b9e7-12a384ddece6"). InnerVolumeSpecName "kube-api-access-fx942". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.453100 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60a004a5-f226-49aa-b9e7-12a384ddece6-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "60a004a5-f226-49aa-b9e7-12a384ddece6" (UID: "60a004a5-f226-49aa-b9e7-12a384ddece6"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.477553 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60a004a5-f226-49aa-b9e7-12a384ddece6-scripts" (OuterVolumeSpecName: "scripts") pod "60a004a5-f226-49aa-b9e7-12a384ddece6" (UID: "60a004a5-f226-49aa-b9e7-12a384ddece6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.479475 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60a004a5-f226-49aa-b9e7-12a384ddece6-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "60a004a5-f226-49aa-b9e7-12a384ddece6" (UID: "60a004a5-f226-49aa-b9e7-12a384ddece6"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.482845 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60a004a5-f226-49aa-b9e7-12a384ddece6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60a004a5-f226-49aa-b9e7-12a384ddece6" (UID: "60a004a5-f226-49aa-b9e7-12a384ddece6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.548019 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fx942\" (UniqueName: \"kubernetes.io/projected/60a004a5-f226-49aa-b9e7-12a384ddece6-kube-api-access-fx942\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.548079 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60a004a5-f226-49aa-b9e7-12a384ddece6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.548091 4795 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/60a004a5-f226-49aa-b9e7-12a384ddece6-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.548104 4795 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/60a004a5-f226-49aa-b9e7-12a384ddece6-scripts\") on node \"crc\" DevicePath \"\"" 
Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.548115 4795 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/60a004a5-f226-49aa-b9e7-12a384ddece6-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.555690 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-b6t5r"] Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.626142 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ww42q" event={"ID":"60a004a5-f226-49aa-b9e7-12a384ddece6","Type":"ContainerDied","Data":"ed607f7b23c23e897c279fc02739ca1b906fffbad01dd58568d7b970cc531703"} Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.626367 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed607f7b23c23e897c279fc02739ca1b906fffbad01dd58568d7b970cc531703" Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.626198 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-ww42q" Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.627601 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-4s8c4" event={"ID":"2cf81321-88f9-43ee-8356-e0355bc3da52","Type":"ContainerStarted","Data":"b7680e10aa9337b1d54b04197de717b1b2620483fee76200ef2b4deb17a2510c"} Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.628758 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6ef0-account-create-update-7swjc" event={"ID":"f0a8025f-2b8a-4f5d-9462-09e032e5b6c2","Type":"ContainerStarted","Data":"f63c64a0afa09e408e0b5788d30a4557f66a0bfcf864ed65262a26e397a57086"} Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.745535 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4710-account-create-update-469z8"] Nov 29 08:03:40 crc kubenswrapper[4795]: I1129 08:03:40.756440 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-dff1-account-create-update-qpdmv"] Nov 29 08:03:40 crc kubenswrapper[4795]: W1129 08:03:40.969214 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff621b92_0616_4d28_abd8_b18c17c6990e.slice/crio-a109b9cffe6ab75815f0e2aa62ac5c1149c9db966f1dfb4d8c7c51e2404d9ad8 WatchSource:0}: Error finding container a109b9cffe6ab75815f0e2aa62ac5c1149c9db966f1dfb4d8c7c51e2404d9ad8: Status 404 returned error can't find the container with id a109b9cffe6ab75815f0e2aa62ac5c1149c9db966f1dfb4d8c7c51e2404d9ad8 Nov 29 08:03:41 crc kubenswrapper[4795]: I1129 08:03:41.570004 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-f7j9q"] Nov 29 08:03:41 crc kubenswrapper[4795]: I1129 08:03:41.643379 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4710-account-create-update-469z8" 
event={"ID":"696f8b00-2104-47ea-bc5c-0e317dd00de1","Type":"ContainerStarted","Data":"507e198ce0aac6ae2d2bd7529c34af247dbfd5d4eedd2dcc25b68099472209b8"} Nov 29 08:03:41 crc kubenswrapper[4795]: I1129 08:03:41.645364 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-4s8c4" event={"ID":"2cf81321-88f9-43ee-8356-e0355bc3da52","Type":"ContainerStarted","Data":"99528692e0224a85c4c4c7b0fad44e9b31c922a5af2675a2687db64700fbe1be"} Nov 29 08:03:41 crc kubenswrapper[4795]: I1129 08:03:41.653121 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-b6t5r" event={"ID":"5e6def42-cee6-477c-940a-1bb6e20df694","Type":"ContainerStarted","Data":"6392048faa91c2e42a505302668bf2061e22a616662633078b2a57fd05b79198"} Nov 29 08:03:41 crc kubenswrapper[4795]: I1129 08:03:41.656240 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-f7j9q" event={"ID":"02097025-8724-4b6a-b817-d8a5d40f2d24","Type":"ContainerStarted","Data":"e46d244f83e9803a24f98aceeb79eb01a7309945dfd56b133d9bd601ba678b1e"} Nov 29 08:03:41 crc kubenswrapper[4795]: I1129 08:03:41.661922 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-dff1-account-create-update-qpdmv" event={"ID":"ff621b92-0616-4d28-abd8-b18c17c6990e","Type":"ContainerStarted","Data":"a109b9cffe6ab75815f0e2aa62ac5c1149c9db966f1dfb4d8c7c51e2404d9ad8"} Nov 29 08:03:41 crc kubenswrapper[4795]: I1129 08:03:41.684784 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-4s8c4" podStartSLOduration=7.684756307 podStartE2EDuration="7.684756307s" podCreationTimestamp="2025-11-29 08:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:03:41.670660276 +0000 UTC m=+1467.646236066" watchObservedRunningTime="2025-11-29 08:03:41.684756307 +0000 UTC 
m=+1467.660332097" Nov 29 08:03:41 crc kubenswrapper[4795]: I1129 08:03:41.708268 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-hld85"] Nov 29 08:03:41 crc kubenswrapper[4795]: W1129 08:03:41.717910 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c47add8_93ad_456c_8f90_bb854d981a3e.slice/crio-1e4672731470dfa449d682ede1c0433b04cb01b50ca384d96d96b6db4b724e5e WatchSource:0}: Error finding container 1e4672731470dfa449d682ede1c0433b04cb01b50ca384d96d96b6db4b724e5e: Status 404 returned error can't find the container with id 1e4672731470dfa449d682ede1c0433b04cb01b50ca384d96d96b6db4b724e5e Nov 29 08:03:41 crc kubenswrapper[4795]: I1129 08:03:41.921345 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-r5w67-config-rzsdn"] Nov 29 08:03:41 crc kubenswrapper[4795]: I1129 08:03:41.939117 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-346e-account-create-update-cw8zl"] Nov 29 08:03:41 crc kubenswrapper[4795]: I1129 08:03:41.943826 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:03:41 crc kubenswrapper[4795]: I1129 08:03:41.943917 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:03:41 crc kubenswrapper[4795]: I1129 08:03:41.974358 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-r5w67" 
podUID="77e980be-cb41-448f-96d7-0c99fec4d400" containerName="ovn-controller" probeResult="failure" output=< Nov 29 08:03:41 crc kubenswrapper[4795]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 29 08:03:41 crc kubenswrapper[4795]: > Nov 29 08:03:41 crc kubenswrapper[4795]: W1129 08:03:41.980642 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48a8478e_b079_46a5_b3f6_c60ac90b7bce.slice/crio-2a0fe2578783f39ee9cf21eeffe6174ab0c63f9add6389f71ac492c126f7e63a WatchSource:0}: Error finding container 2a0fe2578783f39ee9cf21eeffe6174ab0c63f9add6389f71ac492c126f7e63a: Status 404 returned error can't find the container with id 2a0fe2578783f39ee9cf21eeffe6174ab0c63f9add6389f71ac492c126f7e63a Nov 29 08:03:42 crc kubenswrapper[4795]: E1129 08:03:42.462432 4795 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e6def42_cee6_477c_940a_1bb6e20df694.slice/crio-1622293259e24b7de444593ff2fcb10780ecf4f31c9605db98ea4019f5844129.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod696f8b00_2104_47ea_bc5c_0e317dd00de1.slice/crio-e2bfe98b406cf8db2d40fe791d8b9eee06db2c4a7314293a41f4604dee5c24d0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e6def42_cee6_477c_940a_1bb6e20df694.slice/crio-conmon-1622293259e24b7de444593ff2fcb10780ecf4f31c9605db98ea4019f5844129.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff621b92_0616_4d28_abd8_b18c17c6990e.slice/crio-conmon-4269373f1eb543f8207df3d61d66c9cbd6d73556e63520f1ae2fa6f3bc0a0df6.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff621b92_0616_4d28_abd8_b18c17c6990e.slice/crio-4269373f1eb543f8207df3d61d66c9cbd6d73556e63520f1ae2fa6f3bc0a0df6.scope\": RecentStats: unable to find data in memory cache]" Nov 29 08:03:42 crc kubenswrapper[4795]: I1129 08:03:42.714162 4795 generic.go:334] "Generic (PLEG): container finished" podID="5e6def42-cee6-477c-940a-1bb6e20df694" containerID="1622293259e24b7de444593ff2fcb10780ecf4f31c9605db98ea4019f5844129" exitCode=0 Nov 29 08:03:42 crc kubenswrapper[4795]: I1129 08:03:42.714414 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-b6t5r" event={"ID":"5e6def42-cee6-477c-940a-1bb6e20df694","Type":"ContainerDied","Data":"1622293259e24b7de444593ff2fcb10780ecf4f31c9605db98ea4019f5844129"} Nov 29 08:03:42 crc kubenswrapper[4795]: I1129 08:03:42.723145 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-hld85" event={"ID":"8c47add8-93ad-456c-8f90-bb854d981a3e","Type":"ContainerStarted","Data":"1e4672731470dfa449d682ede1c0433b04cb01b50ca384d96d96b6db4b724e5e"} Nov 29 08:03:42 crc kubenswrapper[4795]: I1129 08:03:42.727178 4795 generic.go:334] "Generic (PLEG): container finished" podID="696f8b00-2104-47ea-bc5c-0e317dd00de1" containerID="e2bfe98b406cf8db2d40fe791d8b9eee06db2c4a7314293a41f4604dee5c24d0" exitCode=0 Nov 29 08:03:42 crc kubenswrapper[4795]: I1129 08:03:42.727274 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4710-account-create-update-469z8" event={"ID":"696f8b00-2104-47ea-bc5c-0e317dd00de1","Type":"ContainerDied","Data":"e2bfe98b406cf8db2d40fe791d8b9eee06db2c4a7314293a41f4604dee5c24d0"} Nov 29 08:03:42 crc kubenswrapper[4795]: I1129 08:03:42.731760 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-346e-account-create-update-cw8zl" 
event={"ID":"e4891454-7144-4416-b2d3-f16e56001077","Type":"ContainerStarted","Data":"b5873862b907d380ae390ee1ac1c3a155137769f623db8595b7a8b70e0b861bc"} Nov 29 08:03:42 crc kubenswrapper[4795]: I1129 08:03:42.738112 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"74169c45-99e0-4179-a18b-07a1c2cade8b","Type":"ContainerStarted","Data":"223146acec54e1f27f623b4d80745a10b0a3e4ff9128f8cc93d3e916a4e18752"} Nov 29 08:03:42 crc kubenswrapper[4795]: I1129 08:03:42.739880 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 29 08:03:42 crc kubenswrapper[4795]: I1129 08:03:42.744165 4795 generic.go:334] "Generic (PLEG): container finished" podID="ff621b92-0616-4d28-abd8-b18c17c6990e" containerID="4269373f1eb543f8207df3d61d66c9cbd6d73556e63520f1ae2fa6f3bc0a0df6" exitCode=0 Nov 29 08:03:42 crc kubenswrapper[4795]: I1129 08:03:42.744232 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-dff1-account-create-update-qpdmv" event={"ID":"ff621b92-0616-4d28-abd8-b18c17c6990e","Type":"ContainerDied","Data":"4269373f1eb543f8207df3d61d66c9cbd6d73556e63520f1ae2fa6f3bc0a0df6"} Nov 29 08:03:42 crc kubenswrapper[4795]: I1129 08:03:42.747965 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"28c13f65-78c4-4d4a-8960-7ef17a4c93e8","Type":"ContainerStarted","Data":"5113ad63b22ce09343c250010d6d6471a16602dc3aa39754aa991fb1ca8b6f22"} Nov 29 08:03:42 crc kubenswrapper[4795]: I1129 08:03:42.751385 4795 generic.go:334] "Generic (PLEG): container finished" podID="2cf81321-88f9-43ee-8356-e0355bc3da52" containerID="99528692e0224a85c4c4c7b0fad44e9b31c922a5af2675a2687db64700fbe1be" exitCode=0 Nov 29 08:03:42 crc kubenswrapper[4795]: I1129 08:03:42.751482 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-4s8c4" 
event={"ID":"2cf81321-88f9-43ee-8356-e0355bc3da52","Type":"ContainerDied","Data":"99528692e0224a85c4c4c7b0fad44e9b31c922a5af2675a2687db64700fbe1be"} Nov 29 08:03:42 crc kubenswrapper[4795]: I1129 08:03:42.757000 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"063373b7-6898-409d-b792-d770a8f6f021","Type":"ContainerStarted","Data":"bcd84a11402b10454dd7031340f3d460680caa2f285ba45c386b62aad92ba57e"} Nov 29 08:03:42 crc kubenswrapper[4795]: I1129 08:03:42.760524 4795 generic.go:334] "Generic (PLEG): container finished" podID="f0a8025f-2b8a-4f5d-9462-09e032e5b6c2" containerID="4ae6b555a501b4ab2dee9dca62b036c42c21222fa517f7b4c7ddb0016932ff38" exitCode=0 Nov 29 08:03:42 crc kubenswrapper[4795]: I1129 08:03:42.760628 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6ef0-account-create-update-7swjc" event={"ID":"f0a8025f-2b8a-4f5d-9462-09e032e5b6c2","Type":"ContainerDied","Data":"4ae6b555a501b4ab2dee9dca62b036c42c21222fa517f7b4c7ddb0016932ff38"} Nov 29 08:03:42 crc kubenswrapper[4795]: I1129 08:03:42.762688 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-r5w67-config-rzsdn" event={"ID":"48a8478e-b079-46a5-b3f6-c60ac90b7bce","Type":"ContainerStarted","Data":"2a0fe2578783f39ee9cf21eeffe6174ab0c63f9add6389f71ac492c126f7e63a"} Nov 29 08:03:42 crc kubenswrapper[4795]: I1129 08:03:42.785246 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-346e-account-create-update-cw8zl" podStartSLOduration=2.785221901 podStartE2EDuration="2.785221901s" podCreationTimestamp="2025-11-29 08:03:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:03:42.781035821 +0000 UTC m=+1468.756611611" watchObservedRunningTime="2025-11-29 08:03:42.785221901 +0000 UTC m=+1468.760797701" Nov 29 08:03:42 crc kubenswrapper[4795]: I1129 
08:03:42.879347 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=45.592347586 podStartE2EDuration="1m23.879330147s" podCreationTimestamp="2025-11-29 08:02:19 +0000 UTC" firstStartedPulling="2025-11-29 08:02:22.519894729 +0000 UTC m=+1388.495470519" lastFinishedPulling="2025-11-29 08:03:00.80687729 +0000 UTC m=+1426.782453080" observedRunningTime="2025-11-29 08:03:42.813226377 +0000 UTC m=+1468.788802167" watchObservedRunningTime="2025-11-29 08:03:42.879330147 +0000 UTC m=+1468.854905937" Nov 29 08:03:42 crc kubenswrapper[4795]: I1129 08:03:42.909667 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371953.945124 podStartE2EDuration="1m22.909652429s" podCreationTimestamp="2025-11-29 08:02:20 +0000 UTC" firstStartedPulling="2025-11-29 08:02:22.756677542 +0000 UTC m=+1388.732253342" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:03:42.908332061 +0000 UTC m=+1468.883907851" watchObservedRunningTime="2025-11-29 08:03:42.909652429 +0000 UTC m=+1468.885228229" Nov 29 08:03:43 crc kubenswrapper[4795]: I1129 08:03:43.776435 4795 generic.go:334] "Generic (PLEG): container finished" podID="48a8478e-b079-46a5-b3f6-c60ac90b7bce" containerID="941249688badc0d381c329a17fc5b82e6614eecd7e35012a87826a56d60c35fa" exitCode=0 Nov 29 08:03:43 crc kubenswrapper[4795]: I1129 08:03:43.776477 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-r5w67-config-rzsdn" event={"ID":"48a8478e-b079-46a5-b3f6-c60ac90b7bce","Type":"ContainerDied","Data":"941249688badc0d381c329a17fc5b82e6614eecd7e35012a87826a56d60c35fa"} Nov 29 08:03:43 crc kubenswrapper[4795]: I1129 08:03:43.779006 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-f7j9q" 
event={"ID":"02097025-8724-4b6a-b817-d8a5d40f2d24","Type":"ContainerStarted","Data":"2f63923573ab34048b911384ddf736793b908e5486529890d64dbd2d8ef51bbe"} Nov 29 08:03:43 crc kubenswrapper[4795]: I1129 08:03:43.782430 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-hld85" event={"ID":"8c47add8-93ad-456c-8f90-bb854d981a3e","Type":"ContainerStarted","Data":"b88fd5677e30ef7e944d5c161813b169ab1cb077b0f034bf277cbbc4f3525b02"} Nov 29 08:03:43 crc kubenswrapper[4795]: I1129 08:03:43.784661 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-346e-account-create-update-cw8zl" event={"ID":"e4891454-7144-4416-b2d3-f16e56001077","Type":"ContainerStarted","Data":"483fdeba8bc0eaec0069cd90617ef89cd7b89f192c0180faab24864d98a4821a"} Nov 29 08:03:43 crc kubenswrapper[4795]: I1129 08:03:43.835011 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-hld85" podStartSLOduration=4.834987142 podStartE2EDuration="4.834987142s" podCreationTimestamp="2025-11-29 08:03:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:03:43.824640238 +0000 UTC m=+1469.800216048" watchObservedRunningTime="2025-11-29 08:03:43.834987142 +0000 UTC m=+1469.810562932" Nov 29 08:03:43 crc kubenswrapper[4795]: I1129 08:03:43.855224 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-db-create-f7j9q" podStartSLOduration=7.855204057 podStartE2EDuration="7.855204057s" podCreationTimestamp="2025-11-29 08:03:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:03:43.851533543 +0000 UTC m=+1469.827109343" watchObservedRunningTime="2025-11-29 08:03:43.855204057 +0000 UTC m=+1469.830779847" Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.312275 
4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4710-account-create-update-469z8" Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.446856 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/696f8b00-2104-47ea-bc5c-0e317dd00de1-operator-scripts\") pod \"696f8b00-2104-47ea-bc5c-0e317dd00de1\" (UID: \"696f8b00-2104-47ea-bc5c-0e317dd00de1\") " Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.446963 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4lf7\" (UniqueName: \"kubernetes.io/projected/696f8b00-2104-47ea-bc5c-0e317dd00de1-kube-api-access-w4lf7\") pod \"696f8b00-2104-47ea-bc5c-0e317dd00de1\" (UID: \"696f8b00-2104-47ea-bc5c-0e317dd00de1\") " Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.448416 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/696f8b00-2104-47ea-bc5c-0e317dd00de1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "696f8b00-2104-47ea-bc5c-0e317dd00de1" (UID: "696f8b00-2104-47ea-bc5c-0e317dd00de1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.469206 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/696f8b00-2104-47ea-bc5c-0e317dd00de1-kube-api-access-w4lf7" (OuterVolumeSpecName: "kube-api-access-w4lf7") pod "696f8b00-2104-47ea-bc5c-0e317dd00de1" (UID: "696f8b00-2104-47ea-bc5c-0e317dd00de1"). InnerVolumeSpecName "kube-api-access-w4lf7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.551222 4795 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/696f8b00-2104-47ea-bc5c-0e317dd00de1-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.551283 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4lf7\" (UniqueName: \"kubernetes.io/projected/696f8b00-2104-47ea-bc5c-0e317dd00de1-kube-api-access-w4lf7\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.736805 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-dff1-account-create-update-qpdmv" Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.742075 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-4s8c4" Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.752420 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6ef0-account-create-update-7swjc" Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.763021 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-b6t5r" Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.803779 4795 generic.go:334] "Generic (PLEG): container finished" podID="e4891454-7144-4416-b2d3-f16e56001077" containerID="483fdeba8bc0eaec0069cd90617ef89cd7b89f192c0180faab24864d98a4821a" exitCode=0 Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.803843 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-346e-account-create-update-cw8zl" event={"ID":"e4891454-7144-4416-b2d3-f16e56001077","Type":"ContainerDied","Data":"483fdeba8bc0eaec0069cd90617ef89cd7b89f192c0180faab24864d98a4821a"} Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.807293 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6ef0-account-create-update-7swjc" Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.807301 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6ef0-account-create-update-7swjc" event={"ID":"f0a8025f-2b8a-4f5d-9462-09e032e5b6c2","Type":"ContainerDied","Data":"f63c64a0afa09e408e0b5788d30a4557f66a0bfcf864ed65262a26e397a57086"} Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.807913 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f63c64a0afa09e408e0b5788d30a4557f66a0bfcf864ed65262a26e397a57086" Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.808741 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-b6t5r" event={"ID":"5e6def42-cee6-477c-940a-1bb6e20df694","Type":"ContainerDied","Data":"6392048faa91c2e42a505302668bf2061e22a616662633078b2a57fd05b79198"} Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.808763 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6392048faa91c2e42a505302668bf2061e22a616662633078b2a57fd05b79198" Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.808813 4795 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-b6t5r" Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.811985 4795 generic.go:334] "Generic (PLEG): container finished" podID="02097025-8724-4b6a-b817-d8a5d40f2d24" containerID="2f63923573ab34048b911384ddf736793b908e5486529890d64dbd2d8ef51bbe" exitCode=0 Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.812066 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-f7j9q" event={"ID":"02097025-8724-4b6a-b817-d8a5d40f2d24","Type":"ContainerDied","Data":"2f63923573ab34048b911384ddf736793b908e5486529890d64dbd2d8ef51bbe"} Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.814240 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-dff1-account-create-update-qpdmv" event={"ID":"ff621b92-0616-4d28-abd8-b18c17c6990e","Type":"ContainerDied","Data":"a109b9cffe6ab75815f0e2aa62ac5c1149c9db966f1dfb4d8c7c51e2404d9ad8"} Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.814272 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a109b9cffe6ab75815f0e2aa62ac5c1149c9db966f1dfb4d8c7c51e2404d9ad8" Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.814334 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-dff1-account-create-update-qpdmv" Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.828867 4795 generic.go:334] "Generic (PLEG): container finished" podID="8c47add8-93ad-456c-8f90-bb854d981a3e" containerID="b88fd5677e30ef7e944d5c161813b169ab1cb077b0f034bf277cbbc4f3525b02" exitCode=0 Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.828995 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-hld85" event={"ID":"8c47add8-93ad-456c-8f90-bb854d981a3e","Type":"ContainerDied","Data":"b88fd5677e30ef7e944d5c161813b169ab1cb077b0f034bf277cbbc4f3525b02"} Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.831270 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4710-account-create-update-469z8" event={"ID":"696f8b00-2104-47ea-bc5c-0e317dd00de1","Type":"ContainerDied","Data":"507e198ce0aac6ae2d2bd7529c34af247dbfd5d4eedd2dcc25b68099472209b8"} Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.831303 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="507e198ce0aac6ae2d2bd7529c34af247dbfd5d4eedd2dcc25b68099472209b8" Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.831355 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4710-account-create-update-469z8" Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.848234 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-4s8c4" Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.848804 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-4s8c4" event={"ID":"2cf81321-88f9-43ee-8356-e0355bc3da52","Type":"ContainerDied","Data":"b7680e10aa9337b1d54b04197de717b1b2620483fee76200ef2b4deb17a2510c"} Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.848894 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7680e10aa9337b1d54b04197de717b1b2620483fee76200ef2b4deb17a2510c" Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.857744 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqj8s\" (UniqueName: \"kubernetes.io/projected/ff621b92-0616-4d28-abd8-b18c17c6990e-kube-api-access-gqj8s\") pod \"ff621b92-0616-4d28-abd8-b18c17c6990e\" (UID: \"ff621b92-0616-4d28-abd8-b18c17c6990e\") " Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.857834 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prrkq\" (UniqueName: \"kubernetes.io/projected/f0a8025f-2b8a-4f5d-9462-09e032e5b6c2-kube-api-access-prrkq\") pod \"f0a8025f-2b8a-4f5d-9462-09e032e5b6c2\" (UID: \"f0a8025f-2b8a-4f5d-9462-09e032e5b6c2\") " Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.857886 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0a8025f-2b8a-4f5d-9462-09e032e5b6c2-operator-scripts\") pod \"f0a8025f-2b8a-4f5d-9462-09e032e5b6c2\" (UID: \"f0a8025f-2b8a-4f5d-9462-09e032e5b6c2\") " Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.857957 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff621b92-0616-4d28-abd8-b18c17c6990e-operator-scripts\") pod \"ff621b92-0616-4d28-abd8-b18c17c6990e\" (UID: 
\"ff621b92-0616-4d28-abd8-b18c17c6990e\") " Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.858016 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e6def42-cee6-477c-940a-1bb6e20df694-operator-scripts\") pod \"5e6def42-cee6-477c-940a-1bb6e20df694\" (UID: \"5e6def42-cee6-477c-940a-1bb6e20df694\") " Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.858072 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9tmf\" (UniqueName: \"kubernetes.io/projected/5e6def42-cee6-477c-940a-1bb6e20df694-kube-api-access-d9tmf\") pod \"5e6def42-cee6-477c-940a-1bb6e20df694\" (UID: \"5e6def42-cee6-477c-940a-1bb6e20df694\") " Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.858371 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cf81321-88f9-43ee-8356-e0355bc3da52-operator-scripts\") pod \"2cf81321-88f9-43ee-8356-e0355bc3da52\" (UID: \"2cf81321-88f9-43ee-8356-e0355bc3da52\") " Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.858408 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9llht\" (UniqueName: \"kubernetes.io/projected/2cf81321-88f9-43ee-8356-e0355bc3da52-kube-api-access-9llht\") pod \"2cf81321-88f9-43ee-8356-e0355bc3da52\" (UID: \"2cf81321-88f9-43ee-8356-e0355bc3da52\") " Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.858468 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff621b92-0616-4d28-abd8-b18c17c6990e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ff621b92-0616-4d28-abd8-b18c17c6990e" (UID: "ff621b92-0616-4d28-abd8-b18c17c6990e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.858674 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0a8025f-2b8a-4f5d-9462-09e032e5b6c2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f0a8025f-2b8a-4f5d-9462-09e032e5b6c2" (UID: "f0a8025f-2b8a-4f5d-9462-09e032e5b6c2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.858947 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e6def42-cee6-477c-940a-1bb6e20df694-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5e6def42-cee6-477c-940a-1bb6e20df694" (UID: "5e6def42-cee6-477c-940a-1bb6e20df694"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.859030 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cf81321-88f9-43ee-8356-e0355bc3da52-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2cf81321-88f9-43ee-8356-e0355bc3da52" (UID: "2cf81321-88f9-43ee-8356-e0355bc3da52"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.859795 4795 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cf81321-88f9-43ee-8356-e0355bc3da52-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.859828 4795 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0a8025f-2b8a-4f5d-9462-09e032e5b6c2-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.859846 4795 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff621b92-0616-4d28-abd8-b18c17c6990e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:44 crc kubenswrapper[4795]: I1129 08:03:44.859864 4795 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e6def42-cee6-477c-940a-1bb6e20df694-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:45 crc kubenswrapper[4795]: I1129 08:03:45.068424 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0a8025f-2b8a-4f5d-9462-09e032e5b6c2-kube-api-access-prrkq" (OuterVolumeSpecName: "kube-api-access-prrkq") pod "f0a8025f-2b8a-4f5d-9462-09e032e5b6c2" (UID: "f0a8025f-2b8a-4f5d-9462-09e032e5b6c2"). InnerVolumeSpecName "kube-api-access-prrkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:03:45 crc kubenswrapper[4795]: I1129 08:03:45.068799 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff621b92-0616-4d28-abd8-b18c17c6990e-kube-api-access-gqj8s" (OuterVolumeSpecName: "kube-api-access-gqj8s") pod "ff621b92-0616-4d28-abd8-b18c17c6990e" (UID: "ff621b92-0616-4d28-abd8-b18c17c6990e"). 
InnerVolumeSpecName "kube-api-access-gqj8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:03:45 crc kubenswrapper[4795]: I1129 08:03:45.069685 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqj8s\" (UniqueName: \"kubernetes.io/projected/ff621b92-0616-4d28-abd8-b18c17c6990e-kube-api-access-gqj8s\") pod \"ff621b92-0616-4d28-abd8-b18c17c6990e\" (UID: \"ff621b92-0616-4d28-abd8-b18c17c6990e\") " Nov 29 08:03:45 crc kubenswrapper[4795]: I1129 08:03:45.069753 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prrkq\" (UniqueName: \"kubernetes.io/projected/f0a8025f-2b8a-4f5d-9462-09e032e5b6c2-kube-api-access-prrkq\") pod \"f0a8025f-2b8a-4f5d-9462-09e032e5b6c2\" (UID: \"f0a8025f-2b8a-4f5d-9462-09e032e5b6c2\") " Nov 29 08:03:45 crc kubenswrapper[4795]: I1129 08:03:45.069843 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cf81321-88f9-43ee-8356-e0355bc3da52-kube-api-access-9llht" (OuterVolumeSpecName: "kube-api-access-9llht") pod "2cf81321-88f9-43ee-8356-e0355bc3da52" (UID: "2cf81321-88f9-43ee-8356-e0355bc3da52"). InnerVolumeSpecName "kube-api-access-9llht". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:03:45 crc kubenswrapper[4795]: W1129 08:03:45.070022 4795 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/f0a8025f-2b8a-4f5d-9462-09e032e5b6c2/volumes/kubernetes.io~projected/kube-api-access-prrkq Nov 29 08:03:45 crc kubenswrapper[4795]: I1129 08:03:45.070046 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0a8025f-2b8a-4f5d-9462-09e032e5b6c2-kube-api-access-prrkq" (OuterVolumeSpecName: "kube-api-access-prrkq") pod "f0a8025f-2b8a-4f5d-9462-09e032e5b6c2" (UID: "f0a8025f-2b8a-4f5d-9462-09e032e5b6c2"). InnerVolumeSpecName "kube-api-access-prrkq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:03:45 crc kubenswrapper[4795]: W1129 08:03:45.070347 4795 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/ff621b92-0616-4d28-abd8-b18c17c6990e/volumes/kubernetes.io~projected/kube-api-access-gqj8s Nov 29 08:03:45 crc kubenswrapper[4795]: I1129 08:03:45.070554 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff621b92-0616-4d28-abd8-b18c17c6990e-kube-api-access-gqj8s" (OuterVolumeSpecName: "kube-api-access-gqj8s") pod "ff621b92-0616-4d28-abd8-b18c17c6990e" (UID: "ff621b92-0616-4d28-abd8-b18c17c6990e"). InnerVolumeSpecName "kube-api-access-gqj8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:03:45 crc kubenswrapper[4795]: I1129 08:03:45.072473 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqj8s\" (UniqueName: \"kubernetes.io/projected/ff621b92-0616-4d28-abd8-b18c17c6990e-kube-api-access-gqj8s\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:45 crc kubenswrapper[4795]: I1129 08:03:45.072942 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prrkq\" (UniqueName: \"kubernetes.io/projected/f0a8025f-2b8a-4f5d-9462-09e032e5b6c2-kube-api-access-prrkq\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:45 crc kubenswrapper[4795]: I1129 08:03:45.073049 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9llht\" (UniqueName: \"kubernetes.io/projected/2cf81321-88f9-43ee-8356-e0355bc3da52-kube-api-access-9llht\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:45 crc kubenswrapper[4795]: I1129 08:03:45.085801 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e6def42-cee6-477c-940a-1bb6e20df694-kube-api-access-d9tmf" (OuterVolumeSpecName: "kube-api-access-d9tmf") pod "5e6def42-cee6-477c-940a-1bb6e20df694" (UID: "5e6def42-cee6-477c-940a-1bb6e20df694"). 
InnerVolumeSpecName "kube-api-access-d9tmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:03:45 crc kubenswrapper[4795]: I1129 08:03:45.174966 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9tmf\" (UniqueName: \"kubernetes.io/projected/5e6def42-cee6-477c-940a-1bb6e20df694-kube-api-access-d9tmf\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:45 crc kubenswrapper[4795]: I1129 08:03:45.179466 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-r5w67-config-rzsdn" Nov 29 08:03:45 crc kubenswrapper[4795]: I1129 08:03:45.288235 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/48a8478e-b079-46a5-b3f6-c60ac90b7bce-var-run-ovn\") pod \"48a8478e-b079-46a5-b3f6-c60ac90b7bce\" (UID: \"48a8478e-b079-46a5-b3f6-c60ac90b7bce\") " Nov 29 08:03:45 crc kubenswrapper[4795]: I1129 08:03:45.288343 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48a8478e-b079-46a5-b3f6-c60ac90b7bce-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "48a8478e-b079-46a5-b3f6-c60ac90b7bce" (UID: "48a8478e-b079-46a5-b3f6-c60ac90b7bce"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 08:03:45 crc kubenswrapper[4795]: I1129 08:03:45.289084 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9xg8\" (UniqueName: \"kubernetes.io/projected/48a8478e-b079-46a5-b3f6-c60ac90b7bce-kube-api-access-k9xg8\") pod \"48a8478e-b079-46a5-b3f6-c60ac90b7bce\" (UID: \"48a8478e-b079-46a5-b3f6-c60ac90b7bce\") " Nov 29 08:03:45 crc kubenswrapper[4795]: I1129 08:03:45.289201 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/48a8478e-b079-46a5-b3f6-c60ac90b7bce-var-run\") pod \"48a8478e-b079-46a5-b3f6-c60ac90b7bce\" (UID: \"48a8478e-b079-46a5-b3f6-c60ac90b7bce\") " Nov 29 08:03:45 crc kubenswrapper[4795]: I1129 08:03:45.289285 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48a8478e-b079-46a5-b3f6-c60ac90b7bce-var-run" (OuterVolumeSpecName: "var-run") pod "48a8478e-b079-46a5-b3f6-c60ac90b7bce" (UID: "48a8478e-b079-46a5-b3f6-c60ac90b7bce"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 08:03:45 crc kubenswrapper[4795]: I1129 08:03:45.289434 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/48a8478e-b079-46a5-b3f6-c60ac90b7bce-var-log-ovn\") pod \"48a8478e-b079-46a5-b3f6-c60ac90b7bce\" (UID: \"48a8478e-b079-46a5-b3f6-c60ac90b7bce\") " Nov 29 08:03:45 crc kubenswrapper[4795]: I1129 08:03:45.289561 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/48a8478e-b079-46a5-b3f6-c60ac90b7bce-additional-scripts\") pod \"48a8478e-b079-46a5-b3f6-c60ac90b7bce\" (UID: \"48a8478e-b079-46a5-b3f6-c60ac90b7bce\") " Nov 29 08:03:45 crc kubenswrapper[4795]: I1129 08:03:45.289718 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48a8478e-b079-46a5-b3f6-c60ac90b7bce-scripts\") pod \"48a8478e-b079-46a5-b3f6-c60ac90b7bce\" (UID: \"48a8478e-b079-46a5-b3f6-c60ac90b7bce\") " Nov 29 08:03:45 crc kubenswrapper[4795]: I1129 08:03:45.289807 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48a8478e-b079-46a5-b3f6-c60ac90b7bce-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "48a8478e-b079-46a5-b3f6-c60ac90b7bce" (UID: "48a8478e-b079-46a5-b3f6-c60ac90b7bce"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 08:03:45 crc kubenswrapper[4795]: I1129 08:03:45.290455 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48a8478e-b079-46a5-b3f6-c60ac90b7bce-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "48a8478e-b079-46a5-b3f6-c60ac90b7bce" (UID: "48a8478e-b079-46a5-b3f6-c60ac90b7bce"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:03:45 crc kubenswrapper[4795]: I1129 08:03:45.290754 4795 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/48a8478e-b079-46a5-b3f6-c60ac90b7bce-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:45 crc kubenswrapper[4795]: I1129 08:03:45.290862 4795 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/48a8478e-b079-46a5-b3f6-c60ac90b7bce-var-run\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:45 crc kubenswrapper[4795]: I1129 08:03:45.290944 4795 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/48a8478e-b079-46a5-b3f6-c60ac90b7bce-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:45 crc kubenswrapper[4795]: I1129 08:03:45.291016 4795 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/48a8478e-b079-46a5-b3f6-c60ac90b7bce-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:45 crc kubenswrapper[4795]: I1129 08:03:45.290883 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48a8478e-b079-46a5-b3f6-c60ac90b7bce-scripts" (OuterVolumeSpecName: "scripts") pod "48a8478e-b079-46a5-b3f6-c60ac90b7bce" (UID: "48a8478e-b079-46a5-b3f6-c60ac90b7bce"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:03:45 crc kubenswrapper[4795]: I1129 08:03:45.292987 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48a8478e-b079-46a5-b3f6-c60ac90b7bce-kube-api-access-k9xg8" (OuterVolumeSpecName: "kube-api-access-k9xg8") pod "48a8478e-b079-46a5-b3f6-c60ac90b7bce" (UID: "48a8478e-b079-46a5-b3f6-c60ac90b7bce"). InnerVolumeSpecName "kube-api-access-k9xg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:03:45 crc kubenswrapper[4795]: I1129 08:03:45.393490 4795 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48a8478e-b079-46a5-b3f6-c60ac90b7bce-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:45 crc kubenswrapper[4795]: I1129 08:03:45.393537 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9xg8\" (UniqueName: \"kubernetes.io/projected/48a8478e-b079-46a5-b3f6-c60ac90b7bce-kube-api-access-k9xg8\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:45 crc kubenswrapper[4795]: I1129 08:03:45.861104 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"063373b7-6898-409d-b792-d770a8f6f021","Type":"ContainerStarted","Data":"7100a344cdd389fb34fb5aee6a3f40ec2312d4b0c7003459cc684cba8403e516"} Nov 29 08:03:45 crc kubenswrapper[4795]: I1129 08:03:45.863031 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-r5w67-config-rzsdn" event={"ID":"48a8478e-b079-46a5-b3f6-c60ac90b7bce","Type":"ContainerDied","Data":"2a0fe2578783f39ee9cf21eeffe6174ab0c63f9add6389f71ac492c126f7e63a"} Nov 29 08:03:45 crc kubenswrapper[4795]: I1129 08:03:45.863067 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a0fe2578783f39ee9cf21eeffe6174ab0c63f9add6389f71ac492c126f7e63a" Nov 29 08:03:45 crc kubenswrapper[4795]: I1129 08:03:45.863250 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-r5w67-config-rzsdn" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.246792 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-hld85" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.368343 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-r5w67-config-rzsdn"] Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.377449 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-r5w67-config-rzsdn"] Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.420341 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqffh\" (UniqueName: \"kubernetes.io/projected/8c47add8-93ad-456c-8f90-bb854d981a3e-kube-api-access-mqffh\") pod \"8c47add8-93ad-456c-8f90-bb854d981a3e\" (UID: \"8c47add8-93ad-456c-8f90-bb854d981a3e\") " Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.420863 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c47add8-93ad-456c-8f90-bb854d981a3e-operator-scripts\") pod \"8c47add8-93ad-456c-8f90-bb854d981a3e\" (UID: \"8c47add8-93ad-456c-8f90-bb854d981a3e\") " Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.424984 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c47add8-93ad-456c-8f90-bb854d981a3e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8c47add8-93ad-456c-8f90-bb854d981a3e" (UID: "8c47add8-93ad-456c-8f90-bb854d981a3e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.428403 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c47add8-93ad-456c-8f90-bb854d981a3e-kube-api-access-mqffh" (OuterVolumeSpecName: "kube-api-access-mqffh") pod "8c47add8-93ad-456c-8f90-bb854d981a3e" (UID: "8c47add8-93ad-456c-8f90-bb854d981a3e"). InnerVolumeSpecName "kube-api-access-mqffh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.526236 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqffh\" (UniqueName: \"kubernetes.io/projected/8c47add8-93ad-456c-8f90-bb854d981a3e-kube-api-access-mqffh\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.526287 4795 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c47add8-93ad-456c-8f90-bb854d981a3e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.574467 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-r5w67-config-x6g5t"] Nov 29 08:03:46 crc kubenswrapper[4795]: E1129 08:03:46.575027 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff621b92-0616-4d28-abd8-b18c17c6990e" containerName="mariadb-account-create-update" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.575053 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff621b92-0616-4d28-abd8-b18c17c6990e" containerName="mariadb-account-create-update" Nov 29 08:03:46 crc kubenswrapper[4795]: E1129 08:03:46.575069 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cf81321-88f9-43ee-8356-e0355bc3da52" containerName="mariadb-database-create" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.575077 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cf81321-88f9-43ee-8356-e0355bc3da52" containerName="mariadb-database-create" Nov 29 08:03:46 crc kubenswrapper[4795]: E1129 08:03:46.575093 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0a8025f-2b8a-4f5d-9462-09e032e5b6c2" containerName="mariadb-account-create-update" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.575101 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0a8025f-2b8a-4f5d-9462-09e032e5b6c2" 
containerName="mariadb-account-create-update" Nov 29 08:03:46 crc kubenswrapper[4795]: E1129 08:03:46.575133 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="696f8b00-2104-47ea-bc5c-0e317dd00de1" containerName="mariadb-account-create-update" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.575142 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="696f8b00-2104-47ea-bc5c-0e317dd00de1" containerName="mariadb-account-create-update" Nov 29 08:03:46 crc kubenswrapper[4795]: E1129 08:03:46.575172 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e6def42-cee6-477c-940a-1bb6e20df694" containerName="mariadb-database-create" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.575181 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e6def42-cee6-477c-940a-1bb6e20df694" containerName="mariadb-database-create" Nov 29 08:03:46 crc kubenswrapper[4795]: E1129 08:03:46.575191 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c47add8-93ad-456c-8f90-bb854d981a3e" containerName="mariadb-database-create" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.575200 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c47add8-93ad-456c-8f90-bb854d981a3e" containerName="mariadb-database-create" Nov 29 08:03:46 crc kubenswrapper[4795]: E1129 08:03:46.575218 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48a8478e-b079-46a5-b3f6-c60ac90b7bce" containerName="ovn-config" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.575225 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="48a8478e-b079-46a5-b3f6-c60ac90b7bce" containerName="ovn-config" Nov 29 08:03:46 crc kubenswrapper[4795]: E1129 08:03:46.575235 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60a004a5-f226-49aa-b9e7-12a384ddece6" containerName="swift-ring-rebalance" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.575254 4795 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="60a004a5-f226-49aa-b9e7-12a384ddece6" containerName="swift-ring-rebalance" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.575500 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0a8025f-2b8a-4f5d-9462-09e032e5b6c2" containerName="mariadb-account-create-update" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.575532 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e6def42-cee6-477c-940a-1bb6e20df694" containerName="mariadb-database-create" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.575546 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="60a004a5-f226-49aa-b9e7-12a384ddece6" containerName="swift-ring-rebalance" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.575579 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="696f8b00-2104-47ea-bc5c-0e317dd00de1" containerName="mariadb-account-create-update" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.575608 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff621b92-0616-4d28-abd8-b18c17c6990e" containerName="mariadb-account-create-update" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.575623 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cf81321-88f9-43ee-8356-e0355bc3da52" containerName="mariadb-database-create" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.575638 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c47add8-93ad-456c-8f90-bb854d981a3e" containerName="mariadb-database-create" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.575652 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="48a8478e-b079-46a5-b3f6-c60ac90b7bce" containerName="ovn-config" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.576629 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-r5w67-config-x6g5t" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.579409 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.584662 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-r5w67-config-x6g5t"] Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.608090 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-f7j9q" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.628413 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02097025-8724-4b6a-b817-d8a5d40f2d24-operator-scripts\") pod \"02097025-8724-4b6a-b817-d8a5d40f2d24\" (UID: \"02097025-8724-4b6a-b817-d8a5d40f2d24\") " Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.628736 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvwh8\" (UniqueName: \"kubernetes.io/projected/02097025-8724-4b6a-b817-d8a5d40f2d24-kube-api-access-hvwh8\") pod \"02097025-8724-4b6a-b817-d8a5d40f2d24\" (UID: \"02097025-8724-4b6a-b817-d8a5d40f2d24\") " Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.629077 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/de022598-d8a1-4331-af2b-895aa6ee2f9b-var-run-ovn\") pod \"ovn-controller-r5w67-config-x6g5t\" (UID: \"de022598-d8a1-4331-af2b-895aa6ee2f9b\") " pod="openstack/ovn-controller-r5w67-config-x6g5t" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.629118 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkhgr\" (UniqueName: 
\"kubernetes.io/projected/de022598-d8a1-4331-af2b-895aa6ee2f9b-kube-api-access-jkhgr\") pod \"ovn-controller-r5w67-config-x6g5t\" (UID: \"de022598-d8a1-4331-af2b-895aa6ee2f9b\") " pod="openstack/ovn-controller-r5w67-config-x6g5t" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.629143 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/de022598-d8a1-4331-af2b-895aa6ee2f9b-var-log-ovn\") pod \"ovn-controller-r5w67-config-x6g5t\" (UID: \"de022598-d8a1-4331-af2b-895aa6ee2f9b\") " pod="openstack/ovn-controller-r5w67-config-x6g5t" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.629195 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/de022598-d8a1-4331-af2b-895aa6ee2f9b-additional-scripts\") pod \"ovn-controller-r5w67-config-x6g5t\" (UID: \"de022598-d8a1-4331-af2b-895aa6ee2f9b\") " pod="openstack/ovn-controller-r5w67-config-x6g5t" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.629197 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02097025-8724-4b6a-b817-d8a5d40f2d24-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "02097025-8724-4b6a-b817-d8a5d40f2d24" (UID: "02097025-8724-4b6a-b817-d8a5d40f2d24"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.629340 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de022598-d8a1-4331-af2b-895aa6ee2f9b-scripts\") pod \"ovn-controller-r5w67-config-x6g5t\" (UID: \"de022598-d8a1-4331-af2b-895aa6ee2f9b\") " pod="openstack/ovn-controller-r5w67-config-x6g5t" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.629452 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/de022598-d8a1-4331-af2b-895aa6ee2f9b-var-run\") pod \"ovn-controller-r5w67-config-x6g5t\" (UID: \"de022598-d8a1-4331-af2b-895aa6ee2f9b\") " pod="openstack/ovn-controller-r5w67-config-x6g5t" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.629519 4795 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02097025-8724-4b6a-b817-d8a5d40f2d24-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.637543 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-346e-account-create-update-cw8zl" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.637706 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02097025-8724-4b6a-b817-d8a5d40f2d24-kube-api-access-hvwh8" (OuterVolumeSpecName: "kube-api-access-hvwh8") pod "02097025-8724-4b6a-b817-d8a5d40f2d24" (UID: "02097025-8724-4b6a-b817-d8a5d40f2d24"). InnerVolumeSpecName "kube-api-access-hvwh8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.670190 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-r5w67" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.731430 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qj8wt\" (UniqueName: \"kubernetes.io/projected/e4891454-7144-4416-b2d3-f16e56001077-kube-api-access-qj8wt\") pod \"e4891454-7144-4416-b2d3-f16e56001077\" (UID: \"e4891454-7144-4416-b2d3-f16e56001077\") " Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.731624 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4891454-7144-4416-b2d3-f16e56001077-operator-scripts\") pod \"e4891454-7144-4416-b2d3-f16e56001077\" (UID: \"e4891454-7144-4416-b2d3-f16e56001077\") " Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.732101 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/de022598-d8a1-4331-af2b-895aa6ee2f9b-var-run\") pod \"ovn-controller-r5w67-config-x6g5t\" (UID: \"de022598-d8a1-4331-af2b-895aa6ee2f9b\") " pod="openstack/ovn-controller-r5w67-config-x6g5t" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.732157 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/de022598-d8a1-4331-af2b-895aa6ee2f9b-var-run-ovn\") pod \"ovn-controller-r5w67-config-x6g5t\" (UID: \"de022598-d8a1-4331-af2b-895aa6ee2f9b\") " pod="openstack/ovn-controller-r5w67-config-x6g5t" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.732184 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkhgr\" (UniqueName: 
\"kubernetes.io/projected/de022598-d8a1-4331-af2b-895aa6ee2f9b-kube-api-access-jkhgr\") pod \"ovn-controller-r5w67-config-x6g5t\" (UID: \"de022598-d8a1-4331-af2b-895aa6ee2f9b\") " pod="openstack/ovn-controller-r5w67-config-x6g5t" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.732206 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/de022598-d8a1-4331-af2b-895aa6ee2f9b-var-log-ovn\") pod \"ovn-controller-r5w67-config-x6g5t\" (UID: \"de022598-d8a1-4331-af2b-895aa6ee2f9b\") " pod="openstack/ovn-controller-r5w67-config-x6g5t" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.732293 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/de022598-d8a1-4331-af2b-895aa6ee2f9b-additional-scripts\") pod \"ovn-controller-r5w67-config-x6g5t\" (UID: \"de022598-d8a1-4331-af2b-895aa6ee2f9b\") " pod="openstack/ovn-controller-r5w67-config-x6g5t" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.732464 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de022598-d8a1-4331-af2b-895aa6ee2f9b-scripts\") pod \"ovn-controller-r5w67-config-x6g5t\" (UID: \"de022598-d8a1-4331-af2b-895aa6ee2f9b\") " pod="openstack/ovn-controller-r5w67-config-x6g5t" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.732584 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvwh8\" (UniqueName: \"kubernetes.io/projected/02097025-8724-4b6a-b817-d8a5d40f2d24-kube-api-access-hvwh8\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.733003 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/de022598-d8a1-4331-af2b-895aa6ee2f9b-var-log-ovn\") pod \"ovn-controller-r5w67-config-x6g5t\" (UID: 
\"de022598-d8a1-4331-af2b-895aa6ee2f9b\") " pod="openstack/ovn-controller-r5w67-config-x6g5t" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.733693 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/de022598-d8a1-4331-af2b-895aa6ee2f9b-var-run-ovn\") pod \"ovn-controller-r5w67-config-x6g5t\" (UID: \"de022598-d8a1-4331-af2b-895aa6ee2f9b\") " pod="openstack/ovn-controller-r5w67-config-x6g5t" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.733779 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/de022598-d8a1-4331-af2b-895aa6ee2f9b-var-run\") pod \"ovn-controller-r5w67-config-x6g5t\" (UID: \"de022598-d8a1-4331-af2b-895aa6ee2f9b\") " pod="openstack/ovn-controller-r5w67-config-x6g5t" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.733871 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4891454-7144-4416-b2d3-f16e56001077-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e4891454-7144-4416-b2d3-f16e56001077" (UID: "e4891454-7144-4416-b2d3-f16e56001077"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.734345 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/de022598-d8a1-4331-af2b-895aa6ee2f9b-additional-scripts\") pod \"ovn-controller-r5w67-config-x6g5t\" (UID: \"de022598-d8a1-4331-af2b-895aa6ee2f9b\") " pod="openstack/ovn-controller-r5w67-config-x6g5t" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.735352 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4891454-7144-4416-b2d3-f16e56001077-kube-api-access-qj8wt" (OuterVolumeSpecName: "kube-api-access-qj8wt") pod "e4891454-7144-4416-b2d3-f16e56001077" (UID: "e4891454-7144-4416-b2d3-f16e56001077"). InnerVolumeSpecName "kube-api-access-qj8wt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.736097 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de022598-d8a1-4331-af2b-895aa6ee2f9b-scripts\") pod \"ovn-controller-r5w67-config-x6g5t\" (UID: \"de022598-d8a1-4331-af2b-895aa6ee2f9b\") " pod="openstack/ovn-controller-r5w67-config-x6g5t" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.756985 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkhgr\" (UniqueName: \"kubernetes.io/projected/de022598-d8a1-4331-af2b-895aa6ee2f9b-kube-api-access-jkhgr\") pod \"ovn-controller-r5w67-config-x6g5t\" (UID: \"de022598-d8a1-4331-af2b-895aa6ee2f9b\") " pod="openstack/ovn-controller-r5w67-config-x6g5t" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.834690 4795 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4891454-7144-4416-b2d3-f16e56001077-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:46 crc 
kubenswrapper[4795]: I1129 08:03:46.834739 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qj8wt\" (UniqueName: \"kubernetes.io/projected/e4891454-7144-4416-b2d3-f16e56001077-kube-api-access-qj8wt\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.877476 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-f7j9q" event={"ID":"02097025-8724-4b6a-b817-d8a5d40f2d24","Type":"ContainerDied","Data":"e46d244f83e9803a24f98aceeb79eb01a7309945dfd56b133d9bd601ba678b1e"} Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.877519 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e46d244f83e9803a24f98aceeb79eb01a7309945dfd56b133d9bd601ba678b1e" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.877501 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-f7j9q" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.880871 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-hld85" event={"ID":"8c47add8-93ad-456c-8f90-bb854d981a3e","Type":"ContainerDied","Data":"1e4672731470dfa449d682ede1c0433b04cb01b50ca384d96d96b6db4b724e5e"} Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.880899 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-hld85" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.880904 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e4672731470dfa449d682ede1c0433b04cb01b50ca384d96d96b6db4b724e5e" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.883583 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-346e-account-create-update-cw8zl" event={"ID":"e4891454-7144-4416-b2d3-f16e56001077","Type":"ContainerDied","Data":"b5873862b907d380ae390ee1ac1c3a155137769f623db8595b7a8b70e0b861bc"} Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.883636 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5873862b907d380ae390ee1ac1c3a155137769f623db8595b7a8b70e0b861bc" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.883673 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-346e-account-create-update-cw8zl" Nov 29 08:03:46 crc kubenswrapper[4795]: I1129 08:03:46.994306 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-r5w67-config-x6g5t" Nov 29 08:03:47 crc kubenswrapper[4795]: I1129 08:03:47.570533 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-r5w67-config-x6g5t"] Nov 29 08:03:47 crc kubenswrapper[4795]: I1129 08:03:47.897289 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-r5w67-config-x6g5t" event={"ID":"de022598-d8a1-4331-af2b-895aa6ee2f9b","Type":"ContainerStarted","Data":"c611f0245825a54290542ddd16efbd0e5bae4c9663958167debf67fad034ded8"} Nov 29 08:03:48 crc kubenswrapper[4795]: E1129 08:03:48.247555 4795 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde022598_d8a1_4331_af2b_895aa6ee2f9b.slice/crio-9e057905edce3e13b8e32626995262dd69799bbb15d8787ce54a86cabae62b35.scope\": RecentStats: unable to find data in memory cache]" Nov 29 08:03:48 crc kubenswrapper[4795]: E1129 08:03:48.247676 4795 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde022598_d8a1_4331_af2b_895aa6ee2f9b.slice/crio-9e057905edce3e13b8e32626995262dd69799bbb15d8787ce54a86cabae62b35.scope\": RecentStats: unable to find data in memory cache]" Nov 29 08:03:48 crc kubenswrapper[4795]: I1129 08:03:48.290473 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48a8478e-b079-46a5-b3f6-c60ac90b7bce" path="/var/lib/kubelet/pods/48a8478e-b079-46a5-b3f6-c60ac90b7bce/volumes" Nov 29 08:03:48 crc kubenswrapper[4795]: I1129 08:03:48.909760 4795 generic.go:334] "Generic (PLEG): container finished" podID="de022598-d8a1-4331-af2b-895aa6ee2f9b" containerID="9e057905edce3e13b8e32626995262dd69799bbb15d8787ce54a86cabae62b35" exitCode=0 Nov 29 08:03:48 crc kubenswrapper[4795]: I1129 08:03:48.909817 4795 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ovn-controller-r5w67-config-x6g5t" event={"ID":"de022598-d8a1-4331-af2b-895aa6ee2f9b","Type":"ContainerDied","Data":"9e057905edce3e13b8e32626995262dd69799bbb15d8787ce54a86cabae62b35"} Nov 29 08:03:49 crc kubenswrapper[4795]: I1129 08:03:49.921249 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"063373b7-6898-409d-b792-d770a8f6f021","Type":"ContainerStarted","Data":"aeeeceff929624dcc163051b37b80444664380c9bf11143d772c203d6bdf1fa3"} Nov 29 08:03:49 crc kubenswrapper[4795]: I1129 08:03:49.956578 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=11.094026665 podStartE2EDuration="1m22.956560715s" podCreationTimestamp="2025-11-29 08:02:27 +0000 UTC" firstStartedPulling="2025-11-29 08:02:37.519241171 +0000 UTC m=+1403.494816961" lastFinishedPulling="2025-11-29 08:03:49.381775221 +0000 UTC m=+1475.357351011" observedRunningTime="2025-11-29 08:03:49.950979247 +0000 UTC m=+1475.926555037" watchObservedRunningTime="2025-11-29 08:03:49.956560715 +0000 UTC m=+1475.932136505" Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.026704 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/28437f9f-e92e-46d7-9ffb-fcdda5dea25e-etc-swift\") pod \"swift-storage-0\" (UID: \"28437f9f-e92e-46d7-9ffb-fcdda5dea25e\") " pod="openstack/swift-storage-0" Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.057369 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/28437f9f-e92e-46d7-9ffb-fcdda5dea25e-etc-swift\") pod \"swift-storage-0\" (UID: \"28437f9f-e92e-46d7-9ffb-fcdda5dea25e\") " pod="openstack/swift-storage-0" Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.195487 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.294979 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-7lg28"] Nov 29 08:03:50 crc kubenswrapper[4795]: E1129 08:03:50.295384 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4891454-7144-4416-b2d3-f16e56001077" containerName="mariadb-account-create-update" Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.295403 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4891454-7144-4416-b2d3-f16e56001077" containerName="mariadb-account-create-update" Nov 29 08:03:50 crc kubenswrapper[4795]: E1129 08:03:50.295438 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02097025-8724-4b6a-b817-d8a5d40f2d24" containerName="mariadb-database-create" Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.295445 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="02097025-8724-4b6a-b817-d8a5d40f2d24" containerName="mariadb-database-create" Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.295663 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="02097025-8724-4b6a-b817-d8a5d40f2d24" containerName="mariadb-database-create" Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.295691 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4891454-7144-4416-b2d3-f16e56001077" containerName="mariadb-account-create-update" Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.296663 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-7lg28" Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.299911 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qkzd6" Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.299988 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.309843 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-7lg28"] Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.326934 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-r5w67-config-x6g5t" Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.439311 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/de022598-d8a1-4331-af2b-895aa6ee2f9b-var-log-ovn\") pod \"de022598-d8a1-4331-af2b-895aa6ee2f9b\" (UID: \"de022598-d8a1-4331-af2b-895aa6ee2f9b\") " Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.439808 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/de022598-d8a1-4331-af2b-895aa6ee2f9b-additional-scripts\") pod \"de022598-d8a1-4331-af2b-895aa6ee2f9b\" (UID: \"de022598-d8a1-4331-af2b-895aa6ee2f9b\") " Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.439887 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/de022598-d8a1-4331-af2b-895aa6ee2f9b-var-run-ovn\") pod \"de022598-d8a1-4331-af2b-895aa6ee2f9b\" (UID: \"de022598-d8a1-4331-af2b-895aa6ee2f9b\") " Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.439957 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkhgr\" (UniqueName: 
\"kubernetes.io/projected/de022598-d8a1-4331-af2b-895aa6ee2f9b-kube-api-access-jkhgr\") pod \"de022598-d8a1-4331-af2b-895aa6ee2f9b\" (UID: \"de022598-d8a1-4331-af2b-895aa6ee2f9b\") " Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.440130 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de022598-d8a1-4331-af2b-895aa6ee2f9b-scripts\") pod \"de022598-d8a1-4331-af2b-895aa6ee2f9b\" (UID: \"de022598-d8a1-4331-af2b-895aa6ee2f9b\") " Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.440221 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/de022598-d8a1-4331-af2b-895aa6ee2f9b-var-run\") pod \"de022598-d8a1-4331-af2b-895aa6ee2f9b\" (UID: \"de022598-d8a1-4331-af2b-895aa6ee2f9b\") " Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.440564 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/082a7f2c-1081-4af8-91c8-60a13d787746-db-sync-config-data\") pod \"glance-db-sync-7lg28\" (UID: \"082a7f2c-1081-4af8-91c8-60a13d787746\") " pod="openstack/glance-db-sync-7lg28" Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.439457 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de022598-d8a1-4331-af2b-895aa6ee2f9b-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "de022598-d8a1-4331-af2b-895aa6ee2f9b" (UID: "de022598-d8a1-4331-af2b-895aa6ee2f9b"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.440650 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de022598-d8a1-4331-af2b-895aa6ee2f9b-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "de022598-d8a1-4331-af2b-895aa6ee2f9b" (UID: "de022598-d8a1-4331-af2b-895aa6ee2f9b"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.440792 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de022598-d8a1-4331-af2b-895aa6ee2f9b-var-run" (OuterVolumeSpecName: "var-run") pod "de022598-d8a1-4331-af2b-895aa6ee2f9b" (UID: "de022598-d8a1-4331-af2b-895aa6ee2f9b"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.441380 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de022598-d8a1-4331-af2b-895aa6ee2f9b-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "de022598-d8a1-4331-af2b-895aa6ee2f9b" (UID: "de022598-d8a1-4331-af2b-895aa6ee2f9b"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.441580 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de022598-d8a1-4331-af2b-895aa6ee2f9b-scripts" (OuterVolumeSpecName: "scripts") pod "de022598-d8a1-4331-af2b-895aa6ee2f9b" (UID: "de022598-d8a1-4331-af2b-895aa6ee2f9b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.445245 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sh84\" (UniqueName: \"kubernetes.io/projected/082a7f2c-1081-4af8-91c8-60a13d787746-kube-api-access-6sh84\") pod \"glance-db-sync-7lg28\" (UID: \"082a7f2c-1081-4af8-91c8-60a13d787746\") " pod="openstack/glance-db-sync-7lg28" Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.445381 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/082a7f2c-1081-4af8-91c8-60a13d787746-combined-ca-bundle\") pod \"glance-db-sync-7lg28\" (UID: \"082a7f2c-1081-4af8-91c8-60a13d787746\") " pod="openstack/glance-db-sync-7lg28" Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.445870 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/082a7f2c-1081-4af8-91c8-60a13d787746-config-data\") pod \"glance-db-sync-7lg28\" (UID: \"082a7f2c-1081-4af8-91c8-60a13d787746\") " pod="openstack/glance-db-sync-7lg28" Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.445959 4795 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de022598-d8a1-4331-af2b-895aa6ee2f9b-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.445976 4795 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/de022598-d8a1-4331-af2b-895aa6ee2f9b-var-run\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.445987 4795 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/de022598-d8a1-4331-af2b-895aa6ee2f9b-var-log-ovn\") on node \"crc\" 
DevicePath \"\"" Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.446000 4795 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/de022598-d8a1-4331-af2b-895aa6ee2f9b-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.446013 4795 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/de022598-d8a1-4331-af2b-895aa6ee2f9b-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.452210 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de022598-d8a1-4331-af2b-895aa6ee2f9b-kube-api-access-jkhgr" (OuterVolumeSpecName: "kube-api-access-jkhgr") pod "de022598-d8a1-4331-af2b-895aa6ee2f9b" (UID: "de022598-d8a1-4331-af2b-895aa6ee2f9b"). InnerVolumeSpecName "kube-api-access-jkhgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.548046 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sh84\" (UniqueName: \"kubernetes.io/projected/082a7f2c-1081-4af8-91c8-60a13d787746-kube-api-access-6sh84\") pod \"glance-db-sync-7lg28\" (UID: \"082a7f2c-1081-4af8-91c8-60a13d787746\") " pod="openstack/glance-db-sync-7lg28" Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.548128 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/082a7f2c-1081-4af8-91c8-60a13d787746-combined-ca-bundle\") pod \"glance-db-sync-7lg28\" (UID: \"082a7f2c-1081-4af8-91c8-60a13d787746\") " pod="openstack/glance-db-sync-7lg28" Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.548269 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/082a7f2c-1081-4af8-91c8-60a13d787746-config-data\") pod \"glance-db-sync-7lg28\" (UID: \"082a7f2c-1081-4af8-91c8-60a13d787746\") " pod="openstack/glance-db-sync-7lg28" Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.548303 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/082a7f2c-1081-4af8-91c8-60a13d787746-db-sync-config-data\") pod \"glance-db-sync-7lg28\" (UID: \"082a7f2c-1081-4af8-91c8-60a13d787746\") " pod="openstack/glance-db-sync-7lg28" Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.548350 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkhgr\" (UniqueName: \"kubernetes.io/projected/de022598-d8a1-4331-af2b-895aa6ee2f9b-kube-api-access-jkhgr\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.552453 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/082a7f2c-1081-4af8-91c8-60a13d787746-db-sync-config-data\") pod \"glance-db-sync-7lg28\" (UID: \"082a7f2c-1081-4af8-91c8-60a13d787746\") " pod="openstack/glance-db-sync-7lg28" Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.552467 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/082a7f2c-1081-4af8-91c8-60a13d787746-config-data\") pod \"glance-db-sync-7lg28\" (UID: \"082a7f2c-1081-4af8-91c8-60a13d787746\") " pod="openstack/glance-db-sync-7lg28" Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.552974 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/082a7f2c-1081-4af8-91c8-60a13d787746-combined-ca-bundle\") pod \"glance-db-sync-7lg28\" (UID: \"082a7f2c-1081-4af8-91c8-60a13d787746\") " pod="openstack/glance-db-sync-7lg28" Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 
08:03:50.569506 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sh84\" (UniqueName: \"kubernetes.io/projected/082a7f2c-1081-4af8-91c8-60a13d787746-kube-api-access-6sh84\") pod \"glance-db-sync-7lg28\" (UID: \"082a7f2c-1081-4af8-91c8-60a13d787746\") " pod="openstack/glance-db-sync-7lg28" Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.639842 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-7lg28" Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.933559 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 29 08:03:50 crc kubenswrapper[4795]: W1129 08:03:50.943823 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28437f9f_e92e_46d7_9ffb_fcdda5dea25e.slice/crio-b44dc776d60baefbc406bbbe41bf7df2131ca317443866e0778f89338188347d WatchSource:0}: Error finding container b44dc776d60baefbc406bbbe41bf7df2131ca317443866e0778f89338188347d: Status 404 returned error can't find the container with id b44dc776d60baefbc406bbbe41bf7df2131ca317443866e0778f89338188347d Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.950256 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-r5w67-config-x6g5t" Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.950253 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-r5w67-config-x6g5t" event={"ID":"de022598-d8a1-4331-af2b-895aa6ee2f9b","Type":"ContainerDied","Data":"c611f0245825a54290542ddd16efbd0e5bae4c9663958167debf67fad034ded8"} Nov 29 08:03:50 crc kubenswrapper[4795]: I1129 08:03:50.950862 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c611f0245825a54290542ddd16efbd0e5bae4c9663958167debf67fad034ded8" Nov 29 08:03:51 crc kubenswrapper[4795]: I1129 08:03:51.276494 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-7lg28"] Nov 29 08:03:51 crc kubenswrapper[4795]: W1129 08:03:51.278461 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod082a7f2c_1081_4af8_91c8_60a13d787746.slice/crio-70eb584be41fd910946c29edd7ab7dfbfb7236424308609d2c1672cf655e3a1e WatchSource:0}: Error finding container 70eb584be41fd910946c29edd7ab7dfbfb7236424308609d2c1672cf655e3a1e: Status 404 returned error can't find the container with id 70eb584be41fd910946c29edd7ab7dfbfb7236424308609d2c1672cf655e3a1e Nov 29 08:03:51 crc kubenswrapper[4795]: I1129 08:03:51.377160 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="74169c45-99e0-4179-a18b-07a1c2cade8b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Nov 29 08:03:51 crc kubenswrapper[4795]: I1129 08:03:51.419909 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-r5w67-config-x6g5t"] Nov 29 08:03:51 crc kubenswrapper[4795]: I1129 08:03:51.427231 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-r5w67-config-x6g5t"] Nov 29 08:03:51 crc kubenswrapper[4795]: 
I1129 08:03:51.831987 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:03:51 crc kubenswrapper[4795]: I1129 08:03:51.833571 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="28c13f65-78c4-4d4a-8960-7ef17a4c93e8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Nov 29 08:03:51 crc kubenswrapper[4795]: I1129 08:03:51.833915 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="28c13f65-78c4-4d4a-8960-7ef17a4c93e8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Nov 29 08:03:51 crc kubenswrapper[4795]: I1129 08:03:51.962946 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7lg28" event={"ID":"082a7f2c-1081-4af8-91c8-60a13d787746","Type":"ContainerStarted","Data":"70eb584be41fd910946c29edd7ab7dfbfb7236424308609d2c1672cf655e3a1e"} Nov 29 08:03:51 crc kubenswrapper[4795]: I1129 08:03:51.965568 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"28437f9f-e92e-46d7-9ffb-fcdda5dea25e","Type":"ContainerStarted","Data":"b44dc776d60baefbc406bbbe41bf7df2131ca317443866e0778f89338188347d"} Nov 29 08:03:52 crc kubenswrapper[4795]: I1129 08:03:52.222729 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-wjgpz"] Nov 29 08:03:52 crc kubenswrapper[4795]: E1129 08:03:52.223512 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de022598-d8a1-4331-af2b-895aa6ee2f9b" containerName="ovn-config" Nov 29 08:03:52 crc kubenswrapper[4795]: I1129 08:03:52.223538 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="de022598-d8a1-4331-af2b-895aa6ee2f9b" containerName="ovn-config" Nov 29 08:03:52 crc kubenswrapper[4795]: I1129 
08:03:52.223976 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="de022598-d8a1-4331-af2b-895aa6ee2f9b" containerName="ovn-config" Nov 29 08:03:52 crc kubenswrapper[4795]: I1129 08:03:52.224979 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-wjgpz" Nov 29 08:03:52 crc kubenswrapper[4795]: I1129 08:03:52.252464 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-wjgpz"] Nov 29 08:03:52 crc kubenswrapper[4795]: I1129 08:03:52.289012 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d62n6\" (UniqueName: \"kubernetes.io/projected/613a66e8-273f-4355-bea7-08909eb514e8-kube-api-access-d62n6\") pod \"mysqld-exporter-openstack-cell1-db-create-wjgpz\" (UID: \"613a66e8-273f-4355-bea7-08909eb514e8\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-wjgpz" Nov 29 08:03:52 crc kubenswrapper[4795]: I1129 08:03:52.289734 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/613a66e8-273f-4355-bea7-08909eb514e8-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-wjgpz\" (UID: \"613a66e8-273f-4355-bea7-08909eb514e8\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-wjgpz" Nov 29 08:03:52 crc kubenswrapper[4795]: I1129 08:03:52.312368 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de022598-d8a1-4331-af2b-895aa6ee2f9b" path="/var/lib/kubelet/pods/de022598-d8a1-4331-af2b-895aa6ee2f9b/volumes" Nov 29 08:03:52 crc kubenswrapper[4795]: I1129 08:03:52.394682 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/613a66e8-273f-4355-bea7-08909eb514e8-operator-scripts\") pod 
\"mysqld-exporter-openstack-cell1-db-create-wjgpz\" (UID: \"613a66e8-273f-4355-bea7-08909eb514e8\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-wjgpz" Nov 29 08:03:52 crc kubenswrapper[4795]: I1129 08:03:52.394840 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d62n6\" (UniqueName: \"kubernetes.io/projected/613a66e8-273f-4355-bea7-08909eb514e8-kube-api-access-d62n6\") pod \"mysqld-exporter-openstack-cell1-db-create-wjgpz\" (UID: \"613a66e8-273f-4355-bea7-08909eb514e8\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-wjgpz" Nov 29 08:03:52 crc kubenswrapper[4795]: I1129 08:03:52.397076 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/613a66e8-273f-4355-bea7-08909eb514e8-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-wjgpz\" (UID: \"613a66e8-273f-4355-bea7-08909eb514e8\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-wjgpz" Nov 29 08:03:52 crc kubenswrapper[4795]: I1129 08:03:52.447281 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d62n6\" (UniqueName: \"kubernetes.io/projected/613a66e8-273f-4355-bea7-08909eb514e8-kube-api-access-d62n6\") pod \"mysqld-exporter-openstack-cell1-db-create-wjgpz\" (UID: \"613a66e8-273f-4355-bea7-08909eb514e8\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-wjgpz" Nov 29 08:03:52 crc kubenswrapper[4795]: I1129 08:03:52.450154 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-47c8-account-create-update-jvpms"] Nov 29 08:03:52 crc kubenswrapper[4795]: I1129 08:03:52.456912 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-47c8-account-create-update-jvpms" Nov 29 08:03:52 crc kubenswrapper[4795]: I1129 08:03:52.459084 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-47c8-account-create-update-jvpms"] Nov 29 08:03:52 crc kubenswrapper[4795]: I1129 08:03:52.462238 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Nov 29 08:03:52 crc kubenswrapper[4795]: I1129 08:03:52.560655 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-wjgpz" Nov 29 08:03:52 crc kubenswrapper[4795]: I1129 08:03:52.598729 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngtch\" (UniqueName: \"kubernetes.io/projected/ff80bff6-f385-4876-9967-5622f2b44e9f-kube-api-access-ngtch\") pod \"mysqld-exporter-47c8-account-create-update-jvpms\" (UID: \"ff80bff6-f385-4876-9967-5622f2b44e9f\") " pod="openstack/mysqld-exporter-47c8-account-create-update-jvpms" Nov 29 08:03:52 crc kubenswrapper[4795]: I1129 08:03:52.598827 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff80bff6-f385-4876-9967-5622f2b44e9f-operator-scripts\") pod \"mysqld-exporter-47c8-account-create-update-jvpms\" (UID: \"ff80bff6-f385-4876-9967-5622f2b44e9f\") " pod="openstack/mysqld-exporter-47c8-account-create-update-jvpms" Nov 29 08:03:52 crc kubenswrapper[4795]: I1129 08:03:52.702369 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngtch\" (UniqueName: \"kubernetes.io/projected/ff80bff6-f385-4876-9967-5622f2b44e9f-kube-api-access-ngtch\") pod \"mysqld-exporter-47c8-account-create-update-jvpms\" (UID: \"ff80bff6-f385-4876-9967-5622f2b44e9f\") " 
pod="openstack/mysqld-exporter-47c8-account-create-update-jvpms" Nov 29 08:03:52 crc kubenswrapper[4795]: I1129 08:03:52.702443 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff80bff6-f385-4876-9967-5622f2b44e9f-operator-scripts\") pod \"mysqld-exporter-47c8-account-create-update-jvpms\" (UID: \"ff80bff6-f385-4876-9967-5622f2b44e9f\") " pod="openstack/mysqld-exporter-47c8-account-create-update-jvpms" Nov 29 08:03:52 crc kubenswrapper[4795]: I1129 08:03:52.704255 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff80bff6-f385-4876-9967-5622f2b44e9f-operator-scripts\") pod \"mysqld-exporter-47c8-account-create-update-jvpms\" (UID: \"ff80bff6-f385-4876-9967-5622f2b44e9f\") " pod="openstack/mysqld-exporter-47c8-account-create-update-jvpms" Nov 29 08:03:52 crc kubenswrapper[4795]: I1129 08:03:52.744624 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngtch\" (UniqueName: \"kubernetes.io/projected/ff80bff6-f385-4876-9967-5622f2b44e9f-kube-api-access-ngtch\") pod \"mysqld-exporter-47c8-account-create-update-jvpms\" (UID: \"ff80bff6-f385-4876-9967-5622f2b44e9f\") " pod="openstack/mysqld-exporter-47c8-account-create-update-jvpms" Nov 29 08:03:52 crc kubenswrapper[4795]: I1129 08:03:52.805104 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-47c8-account-create-update-jvpms" Nov 29 08:03:53 crc kubenswrapper[4795]: I1129 08:03:53.450640 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 29 08:03:53 crc kubenswrapper[4795]: I1129 08:03:53.482704 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-wjgpz"] Nov 29 08:03:53 crc kubenswrapper[4795]: W1129 08:03:53.492532 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod613a66e8_273f_4355_bea7_08909eb514e8.slice/crio-0f736660cf46658e7c2a2a5054631291409fdedc3604539cbb4d6b3348fd1b6f WatchSource:0}: Error finding container 0f736660cf46658e7c2a2a5054631291409fdedc3604539cbb4d6b3348fd1b6f: Status 404 returned error can't find the container with id 0f736660cf46658e7c2a2a5054631291409fdedc3604539cbb4d6b3348fd1b6f Nov 29 08:03:53 crc kubenswrapper[4795]: I1129 08:03:53.502862 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-47c8-account-create-update-jvpms"] Nov 29 08:03:53 crc kubenswrapper[4795]: I1129 08:03:53.993630 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-wjgpz" event={"ID":"613a66e8-273f-4355-bea7-08909eb514e8","Type":"ContainerStarted","Data":"0f736660cf46658e7c2a2a5054631291409fdedc3604539cbb4d6b3348fd1b6f"} Nov 29 08:03:53 crc kubenswrapper[4795]: I1129 08:03:53.996207 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"28437f9f-e92e-46d7-9ffb-fcdda5dea25e","Type":"ContainerStarted","Data":"75c97a7954a61c78cf7551d4538a5107d1e67304ea408fa2c501588b35329371"} Nov 29 08:03:53 crc kubenswrapper[4795]: I1129 08:03:53.996267 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"28437f9f-e92e-46d7-9ffb-fcdda5dea25e","Type":"ContainerStarted","Data":"3378022172f8d2d03ba9ef35442145aa52a453730efec3406e1d0456ddf3a8f4"} Nov 29 08:03:54 crc kubenswrapper[4795]: I1129 08:03:54.000445 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-47c8-account-create-update-jvpms" event={"ID":"ff80bff6-f385-4876-9967-5622f2b44e9f","Type":"ContainerStarted","Data":"451299070b8d383c31d35ff5a5b3d2c72a9dd9c76593cdd1a9e82b3bdd57d9f0"} Nov 29 08:03:55 crc kubenswrapper[4795]: I1129 08:03:55.013814 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"28437f9f-e92e-46d7-9ffb-fcdda5dea25e","Type":"ContainerStarted","Data":"b5819009c5d199d87c98ddcd3fc8f816f861f2f7337bd44ab14f87635f2b1910"} Nov 29 08:03:55 crc kubenswrapper[4795]: I1129 08:03:55.014273 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"28437f9f-e92e-46d7-9ffb-fcdda5dea25e","Type":"ContainerStarted","Data":"d67be0a7012922e58903bea4670082e76168392e19f1d64c06084c1ca043af8f"} Nov 29 08:03:55 crc kubenswrapper[4795]: I1129 08:03:55.015984 4795 generic.go:334] "Generic (PLEG): container finished" podID="ff80bff6-f385-4876-9967-5622f2b44e9f" containerID="5aa1acd3e95e74c117d71df94c0ff912c2790e1dfc1afa7994c3c1fe083e2937" exitCode=0 Nov 29 08:03:55 crc kubenswrapper[4795]: I1129 08:03:55.016027 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-47c8-account-create-update-jvpms" event={"ID":"ff80bff6-f385-4876-9967-5622f2b44e9f","Type":"ContainerDied","Data":"5aa1acd3e95e74c117d71df94c0ff912c2790e1dfc1afa7994c3c1fe083e2937"} Nov 29 08:03:55 crc kubenswrapper[4795]: I1129 08:03:55.019185 4795 generic.go:334] "Generic (PLEG): container finished" podID="613a66e8-273f-4355-bea7-08909eb514e8" containerID="e1c4997f5fbdbf8d018b224a5421807f3748de57a871648f2813f4b14a3ce231" exitCode=0 Nov 29 08:03:55 crc kubenswrapper[4795]: I1129 08:03:55.019220 
4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-wjgpz" event={"ID":"613a66e8-273f-4355-bea7-08909eb514e8","Type":"ContainerDied","Data":"e1c4997f5fbdbf8d018b224a5421807f3748de57a871648f2813f4b14a3ce231"} Nov 29 08:03:56 crc kubenswrapper[4795]: I1129 08:03:56.032700 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"28437f9f-e92e-46d7-9ffb-fcdda5dea25e","Type":"ContainerStarted","Data":"f524d163b13addd744144200fa69d0c96df72cc309060c402fb2736091979244"} Nov 29 08:03:56 crc kubenswrapper[4795]: I1129 08:03:56.033058 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"28437f9f-e92e-46d7-9ffb-fcdda5dea25e","Type":"ContainerStarted","Data":"715b2caecbcdd4165bfe87a0f465deb55ea91bef061df17b1139d703531545a4"} Nov 29 08:03:56 crc kubenswrapper[4795]: I1129 08:03:56.533618 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-wjgpz" Nov 29 08:03:56 crc kubenswrapper[4795]: I1129 08:03:56.539638 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-47c8-account-create-update-jvpms" Nov 29 08:03:56 crc kubenswrapper[4795]: I1129 08:03:56.616073 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngtch\" (UniqueName: \"kubernetes.io/projected/ff80bff6-f385-4876-9967-5622f2b44e9f-kube-api-access-ngtch\") pod \"ff80bff6-f385-4876-9967-5622f2b44e9f\" (UID: \"ff80bff6-f385-4876-9967-5622f2b44e9f\") " Nov 29 08:03:56 crc kubenswrapper[4795]: I1129 08:03:56.616114 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff80bff6-f385-4876-9967-5622f2b44e9f-operator-scripts\") pod \"ff80bff6-f385-4876-9967-5622f2b44e9f\" (UID: \"ff80bff6-f385-4876-9967-5622f2b44e9f\") " Nov 29 08:03:56 crc kubenswrapper[4795]: I1129 08:03:56.616176 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/613a66e8-273f-4355-bea7-08909eb514e8-operator-scripts\") pod \"613a66e8-273f-4355-bea7-08909eb514e8\" (UID: \"613a66e8-273f-4355-bea7-08909eb514e8\") " Nov 29 08:03:56 crc kubenswrapper[4795]: I1129 08:03:56.616332 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d62n6\" (UniqueName: \"kubernetes.io/projected/613a66e8-273f-4355-bea7-08909eb514e8-kube-api-access-d62n6\") pod \"613a66e8-273f-4355-bea7-08909eb514e8\" (UID: \"613a66e8-273f-4355-bea7-08909eb514e8\") " Nov 29 08:03:56 crc kubenswrapper[4795]: I1129 08:03:56.619521 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff80bff6-f385-4876-9967-5622f2b44e9f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ff80bff6-f385-4876-9967-5622f2b44e9f" (UID: "ff80bff6-f385-4876-9967-5622f2b44e9f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:03:56 crc kubenswrapper[4795]: I1129 08:03:56.619888 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/613a66e8-273f-4355-bea7-08909eb514e8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "613a66e8-273f-4355-bea7-08909eb514e8" (UID: "613a66e8-273f-4355-bea7-08909eb514e8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:03:56 crc kubenswrapper[4795]: I1129 08:03:56.624745 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/613a66e8-273f-4355-bea7-08909eb514e8-kube-api-access-d62n6" (OuterVolumeSpecName: "kube-api-access-d62n6") pod "613a66e8-273f-4355-bea7-08909eb514e8" (UID: "613a66e8-273f-4355-bea7-08909eb514e8"). InnerVolumeSpecName "kube-api-access-d62n6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:03:56 crc kubenswrapper[4795]: I1129 08:03:56.625722 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff80bff6-f385-4876-9967-5622f2b44e9f-kube-api-access-ngtch" (OuterVolumeSpecName: "kube-api-access-ngtch") pod "ff80bff6-f385-4876-9967-5622f2b44e9f" (UID: "ff80bff6-f385-4876-9967-5622f2b44e9f"). InnerVolumeSpecName "kube-api-access-ngtch". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:03:56 crc kubenswrapper[4795]: I1129 08:03:56.718491 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d62n6\" (UniqueName: \"kubernetes.io/projected/613a66e8-273f-4355-bea7-08909eb514e8-kube-api-access-d62n6\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:56 crc kubenswrapper[4795]: I1129 08:03:56.718526 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngtch\" (UniqueName: \"kubernetes.io/projected/ff80bff6-f385-4876-9967-5622f2b44e9f-kube-api-access-ngtch\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:56 crc kubenswrapper[4795]: I1129 08:03:56.718535 4795 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff80bff6-f385-4876-9967-5622f2b44e9f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:56 crc kubenswrapper[4795]: I1129 08:03:56.718545 4795 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/613a66e8-273f-4355-bea7-08909eb514e8-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:03:57 crc kubenswrapper[4795]: I1129 08:03:57.045240 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-47c8-account-create-update-jvpms" event={"ID":"ff80bff6-f385-4876-9967-5622f2b44e9f","Type":"ContainerDied","Data":"451299070b8d383c31d35ff5a5b3d2c72a9dd9c76593cdd1a9e82b3bdd57d9f0"} Nov 29 08:03:57 crc kubenswrapper[4795]: I1129 08:03:57.045285 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="451299070b8d383c31d35ff5a5b3d2c72a9dd9c76593cdd1a9e82b3bdd57d9f0" Nov 29 08:03:57 crc kubenswrapper[4795]: I1129 08:03:57.045256 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-47c8-account-create-update-jvpms" Nov 29 08:03:57 crc kubenswrapper[4795]: I1129 08:03:57.047021 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-wjgpz" event={"ID":"613a66e8-273f-4355-bea7-08909eb514e8","Type":"ContainerDied","Data":"0f736660cf46658e7c2a2a5054631291409fdedc3604539cbb4d6b3348fd1b6f"} Nov 29 08:03:57 crc kubenswrapper[4795]: I1129 08:03:57.047074 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f736660cf46658e7c2a2a5054631291409fdedc3604539cbb4d6b3348fd1b6f" Nov 29 08:03:57 crc kubenswrapper[4795]: I1129 08:03:57.047032 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-wjgpz" Nov 29 08:03:57 crc kubenswrapper[4795]: I1129 08:03:57.050847 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"28437f9f-e92e-46d7-9ffb-fcdda5dea25e","Type":"ContainerStarted","Data":"ab2f7efe77f50a833904748029dc8e9f36c3fcb4ff5fc011ed3e50c1378ee590"} Nov 29 08:03:57 crc kubenswrapper[4795]: I1129 08:03:57.050912 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"28437f9f-e92e-46d7-9ffb-fcdda5dea25e","Type":"ContainerStarted","Data":"7060f3ccda7f80f881911f1180ec3257a9349290902ff4b79a8cc85fa1e0bd97"} Nov 29 08:03:58 crc kubenswrapper[4795]: I1129 08:03:58.451024 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 29 08:03:58 crc kubenswrapper[4795]: I1129 08:03:58.454100 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 29 08:03:59 crc kubenswrapper[4795]: I1129 08:03:59.083409 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" 
Nov 29 08:04:01 crc kubenswrapper[4795]: I1129 08:04:01.377004 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 29 08:04:01 crc kubenswrapper[4795]: I1129 08:04:01.812117 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-cf9lf"] Nov 29 08:04:01 crc kubenswrapper[4795]: E1129 08:04:01.812729 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff80bff6-f385-4876-9967-5622f2b44e9f" containerName="mariadb-account-create-update" Nov 29 08:04:01 crc kubenswrapper[4795]: I1129 08:04:01.812753 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff80bff6-f385-4876-9967-5622f2b44e9f" containerName="mariadb-account-create-update" Nov 29 08:04:01 crc kubenswrapper[4795]: E1129 08:04:01.812771 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="613a66e8-273f-4355-bea7-08909eb514e8" containerName="mariadb-database-create" Nov 29 08:04:01 crc kubenswrapper[4795]: I1129 08:04:01.812780 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="613a66e8-273f-4355-bea7-08909eb514e8" containerName="mariadb-database-create" Nov 29 08:04:01 crc kubenswrapper[4795]: I1129 08:04:01.813035 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="613a66e8-273f-4355-bea7-08909eb514e8" containerName="mariadb-database-create" Nov 29 08:04:01 crc kubenswrapper[4795]: I1129 08:04:01.813058 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff80bff6-f385-4876-9967-5622f2b44e9f" containerName="mariadb-account-create-update" Nov 29 08:04:01 crc kubenswrapper[4795]: I1129 08:04:01.814014 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-cf9lf" Nov 29 08:04:01 crc kubenswrapper[4795]: I1129 08:04:01.836835 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:04:01 crc kubenswrapper[4795]: I1129 08:04:01.855046 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-cf9lf"] Nov 29 08:04:01 crc kubenswrapper[4795]: I1129 08:04:01.874711 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 29 08:04:01 crc kubenswrapper[4795]: I1129 08:04:01.875077 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="063373b7-6898-409d-b792-d770a8f6f021" containerName="prometheus" containerID="cri-o://bcd84a11402b10454dd7031340f3d460680caa2f285ba45c386b62aad92ba57e" gracePeriod=600 Nov 29 08:04:01 crc kubenswrapper[4795]: I1129 08:04:01.875267 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="063373b7-6898-409d-b792-d770a8f6f021" containerName="thanos-sidecar" containerID="cri-o://aeeeceff929624dcc163051b37b80444664380c9bf11143d772c203d6bdf1fa3" gracePeriod=600 Nov 29 08:04:01 crc kubenswrapper[4795]: I1129 08:04:01.875328 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="063373b7-6898-409d-b792-d770a8f6f021" containerName="config-reloader" containerID="cri-o://7100a344cdd389fb34fb5aee6a3f40ec2312d4b0c7003459cc684cba8403e516" gracePeriod=600 Nov 29 08:04:01 crc kubenswrapper[4795]: I1129 08:04:01.942324 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-mlxhw"] Nov 29 08:04:01 crc kubenswrapper[4795]: I1129 08:04:01.944213 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-mlxhw" Nov 29 08:04:01 crc kubenswrapper[4795]: I1129 08:04:01.954427 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb3e5f06-dfee-4af3-bbe8-38b5d3272220-operator-scripts\") pod \"barbican-db-create-cf9lf\" (UID: \"fb3e5f06-dfee-4af3-bbe8-38b5d3272220\") " pod="openstack/barbican-db-create-cf9lf" Nov 29 08:04:01 crc kubenswrapper[4795]: I1129 08:04:01.954574 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx47s\" (UniqueName: \"kubernetes.io/projected/fb3e5f06-dfee-4af3-bbe8-38b5d3272220-kube-api-access-lx47s\") pod \"barbican-db-create-cf9lf\" (UID: \"fb3e5f06-dfee-4af3-bbe8-38b5d3272220\") " pod="openstack/barbican-db-create-cf9lf" Nov 29 08:04:01 crc kubenswrapper[4795]: I1129 08:04:01.968123 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-mlxhw"] Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.075573 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx47s\" (UniqueName: \"kubernetes.io/projected/fb3e5f06-dfee-4af3-bbe8-38b5d3272220-kube-api-access-lx47s\") pod \"barbican-db-create-cf9lf\" (UID: \"fb3e5f06-dfee-4af3-bbe8-38b5d3272220\") " pod="openstack/barbican-db-create-cf9lf" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.075696 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f94ad4e7-724c-43f6-bf7d-a50cb51229d3-operator-scripts\") pod \"cinder-db-create-mlxhw\" (UID: \"f94ad4e7-724c-43f6-bf7d-a50cb51229d3\") " pod="openstack/cinder-db-create-mlxhw" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.075763 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-gdcjd\" (UniqueName: \"kubernetes.io/projected/f94ad4e7-724c-43f6-bf7d-a50cb51229d3-kube-api-access-gdcjd\") pod \"cinder-db-create-mlxhw\" (UID: \"f94ad4e7-724c-43f6-bf7d-a50cb51229d3\") " pod="openstack/cinder-db-create-mlxhw" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.075787 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb3e5f06-dfee-4af3-bbe8-38b5d3272220-operator-scripts\") pod \"barbican-db-create-cf9lf\" (UID: \"fb3e5f06-dfee-4af3-bbe8-38b5d3272220\") " pod="openstack/barbican-db-create-cf9lf" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.076622 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb3e5f06-dfee-4af3-bbe8-38b5d3272220-operator-scripts\") pod \"barbican-db-create-cf9lf\" (UID: \"fb3e5f06-dfee-4af3-bbe8-38b5d3272220\") " pod="openstack/barbican-db-create-cf9lf" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.135729 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx47s\" (UniqueName: \"kubernetes.io/projected/fb3e5f06-dfee-4af3-bbe8-38b5d3272220-kube-api-access-lx47s\") pod \"barbican-db-create-cf9lf\" (UID: \"fb3e5f06-dfee-4af3-bbe8-38b5d3272220\") " pod="openstack/barbican-db-create-cf9lf" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.137004 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-cf9lf" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.163103 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-ef15-account-create-update-w8qbc"] Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.177620 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e81170c-9273-44af-9017-18b86d36e4c9-operator-scripts\") pod \"barbican-ef15-account-create-update-w8qbc\" (UID: \"5e81170c-9273-44af-9017-18b86d36e4c9\") " pod="openstack/barbican-ef15-account-create-update-w8qbc" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.177717 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdcjd\" (UniqueName: \"kubernetes.io/projected/f94ad4e7-724c-43f6-bf7d-a50cb51229d3-kube-api-access-gdcjd\") pod \"cinder-db-create-mlxhw\" (UID: \"f94ad4e7-724c-43f6-bf7d-a50cb51229d3\") " pod="openstack/cinder-db-create-mlxhw" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.177824 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v872p\" (UniqueName: \"kubernetes.io/projected/5e81170c-9273-44af-9017-18b86d36e4c9-kube-api-access-v872p\") pod \"barbican-ef15-account-create-update-w8qbc\" (UID: \"5e81170c-9273-44af-9017-18b86d36e4c9\") " pod="openstack/barbican-ef15-account-create-update-w8qbc" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.177868 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f94ad4e7-724c-43f6-bf7d-a50cb51229d3-operator-scripts\") pod \"cinder-db-create-mlxhw\" (UID: \"f94ad4e7-724c-43f6-bf7d-a50cb51229d3\") " pod="openstack/cinder-db-create-mlxhw" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.178558 4795 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f94ad4e7-724c-43f6-bf7d-a50cb51229d3-operator-scripts\") pod \"cinder-db-create-mlxhw\" (UID: \"f94ad4e7-724c-43f6-bf7d-a50cb51229d3\") " pod="openstack/cinder-db-create-mlxhw" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.178835 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-ef15-account-create-update-w8qbc" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.182811 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-ef15-account-create-update-w8qbc"] Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.186294 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.230403 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdcjd\" (UniqueName: \"kubernetes.io/projected/f94ad4e7-724c-43f6-bf7d-a50cb51229d3-kube-api-access-gdcjd\") pod \"cinder-db-create-mlxhw\" (UID: \"f94ad4e7-724c-43f6-bf7d-a50cb51229d3\") " pod="openstack/cinder-db-create-mlxhw" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.271952 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-mlxhw" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.284937 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e81170c-9273-44af-9017-18b86d36e4c9-operator-scripts\") pod \"barbican-ef15-account-create-update-w8qbc\" (UID: \"5e81170c-9273-44af-9017-18b86d36e4c9\") " pod="openstack/barbican-ef15-account-create-update-w8qbc" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.285362 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v872p\" (UniqueName: \"kubernetes.io/projected/5e81170c-9273-44af-9017-18b86d36e4c9-kube-api-access-v872p\") pod \"barbican-ef15-account-create-update-w8qbc\" (UID: \"5e81170c-9273-44af-9017-18b86d36e4c9\") " pod="openstack/barbican-ef15-account-create-update-w8qbc" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.286840 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e81170c-9273-44af-9017-18b86d36e4c9-operator-scripts\") pod \"barbican-ef15-account-create-update-w8qbc\" (UID: \"5e81170c-9273-44af-9017-18b86d36e4c9\") " pod="openstack/barbican-ef15-account-create-update-w8qbc" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.338402 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v872p\" (UniqueName: \"kubernetes.io/projected/5e81170c-9273-44af-9017-18b86d36e4c9-kube-api-access-v872p\") pod \"barbican-ef15-account-create-update-w8qbc\" (UID: \"5e81170c-9273-44af-9017-18b86d36e4c9\") " pod="openstack/barbican-ef15-account-create-update-w8qbc" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.394565 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-96b1-account-create-update-2xx9v"] Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.396677 4795 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-96b1-account-create-update-2xx9v" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.405625 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.419688 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-j9j74"] Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.421922 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-j9j74" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.430422 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.430675 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.430831 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.431098 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-j4r47" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.459696 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-j9j74"] Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.483609 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-96b1-account-create-update-2xx9v"] Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.490880 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqwxh\" (UniqueName: \"kubernetes.io/projected/727203c8-6d15-404c-8744-8308e5c7ced8-kube-api-access-kqwxh\") pod \"cinder-96b1-account-create-update-2xx9v\" (UID: \"727203c8-6d15-404c-8744-8308e5c7ced8\") " 
pod="openstack/cinder-96b1-account-create-update-2xx9v" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.491047 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/727203c8-6d15-404c-8744-8308e5c7ced8-operator-scripts\") pod \"cinder-96b1-account-create-update-2xx9v\" (UID: \"727203c8-6d15-404c-8744-8308e5c7ced8\") " pod="openstack/cinder-96b1-account-create-update-2xx9v" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.512670 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-ef15-account-create-update-w8qbc" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.531742 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-fjkhc"] Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.533387 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-fjkhc" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.562923 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-fjkhc"] Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.599982 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/605a5a60-d8c8-4128-954a-236dd7cbf8c4-config-data\") pod \"keystone-db-sync-j9j74\" (UID: \"605a5a60-d8c8-4128-954a-236dd7cbf8c4\") " pod="openstack/keystone-db-sync-j9j74" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.600119 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfplw\" (UniqueName: \"kubernetes.io/projected/605a5a60-d8c8-4128-954a-236dd7cbf8c4-kube-api-access-dfplw\") pod \"keystone-db-sync-j9j74\" (UID: \"605a5a60-d8c8-4128-954a-236dd7cbf8c4\") " pod="openstack/keystone-db-sync-j9j74" Nov 29 08:04:02 crc 
kubenswrapper[4795]: I1129 08:04:02.602768 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqwxh\" (UniqueName: \"kubernetes.io/projected/727203c8-6d15-404c-8744-8308e5c7ced8-kube-api-access-kqwxh\") pod \"cinder-96b1-account-create-update-2xx9v\" (UID: \"727203c8-6d15-404c-8744-8308e5c7ced8\") " pod="openstack/cinder-96b1-account-create-update-2xx9v" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.603014 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/605a5a60-d8c8-4128-954a-236dd7cbf8c4-combined-ca-bundle\") pod \"keystone-db-sync-j9j74\" (UID: \"605a5a60-d8c8-4128-954a-236dd7cbf8c4\") " pod="openstack/keystone-db-sync-j9j74" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.603106 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/727203c8-6d15-404c-8744-8308e5c7ced8-operator-scripts\") pod \"cinder-96b1-account-create-update-2xx9v\" (UID: \"727203c8-6d15-404c-8744-8308e5c7ced8\") " pod="openstack/cinder-96b1-account-create-update-2xx9v" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.608518 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/727203c8-6d15-404c-8744-8308e5c7ced8-operator-scripts\") pod \"cinder-96b1-account-create-update-2xx9v\" (UID: \"727203c8-6d15-404c-8744-8308e5c7ced8\") " pod="openstack/cinder-96b1-account-create-update-2xx9v" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.660533 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqwxh\" (UniqueName: \"kubernetes.io/projected/727203c8-6d15-404c-8744-8308e5c7ced8-kube-api-access-kqwxh\") pod \"cinder-96b1-account-create-update-2xx9v\" (UID: \"727203c8-6d15-404c-8744-8308e5c7ced8\") " 
pod="openstack/cinder-96b1-account-create-update-2xx9v" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.688914 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-rnjzx"] Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.715004 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-rnjzx" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.734790 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfplw\" (UniqueName: \"kubernetes.io/projected/605a5a60-d8c8-4128-954a-236dd7cbf8c4-kube-api-access-dfplw\") pod \"keystone-db-sync-j9j74\" (UID: \"605a5a60-d8c8-4128-954a-236dd7cbf8c4\") " pod="openstack/keystone-db-sync-j9j74" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.735218 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/605a5a60-d8c8-4128-954a-236dd7cbf8c4-combined-ca-bundle\") pod \"keystone-db-sync-j9j74\" (UID: \"605a5a60-d8c8-4128-954a-236dd7cbf8c4\") " pod="openstack/keystone-db-sync-j9j74" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.735362 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb646\" (UniqueName: \"kubernetes.io/projected/2b92fb37-6b8f-45de-92a6-c488e9c3f40b-kube-api-access-hb646\") pod \"neutron-db-create-fjkhc\" (UID: \"2b92fb37-6b8f-45de-92a6-c488e9c3f40b\") " pod="openstack/neutron-db-create-fjkhc" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.735425 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b92fb37-6b8f-45de-92a6-c488e9c3f40b-operator-scripts\") pod \"neutron-db-create-fjkhc\" (UID: \"2b92fb37-6b8f-45de-92a6-c488e9c3f40b\") " pod="openstack/neutron-db-create-fjkhc" Nov 29 08:04:02 crc 
kubenswrapper[4795]: I1129 08:04:02.735494 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/605a5a60-d8c8-4128-954a-236dd7cbf8c4-config-data\") pod \"keystone-db-sync-j9j74\" (UID: \"605a5a60-d8c8-4128-954a-236dd7cbf8c4\") " pod="openstack/keystone-db-sync-j9j74" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.735494 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-96b1-account-create-update-2xx9v" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.745440 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-rnjzx"] Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.747991 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/605a5a60-d8c8-4128-954a-236dd7cbf8c4-combined-ca-bundle\") pod \"keystone-db-sync-j9j74\" (UID: \"605a5a60-d8c8-4128-954a-236dd7cbf8c4\") " pod="openstack/keystone-db-sync-j9j74" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.757904 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-5025-account-create-update-jzzfl"] Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.761445 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-5025-account-create-update-jzzfl" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.764196 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.774455 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfplw\" (UniqueName: \"kubernetes.io/projected/605a5a60-d8c8-4128-954a-236dd7cbf8c4-kube-api-access-dfplw\") pod \"keystone-db-sync-j9j74\" (UID: \"605a5a60-d8c8-4128-954a-236dd7cbf8c4\") " pod="openstack/keystone-db-sync-j9j74" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.784740 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-5025-account-create-update-jzzfl"] Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.795196 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/605a5a60-d8c8-4128-954a-236dd7cbf8c4-config-data\") pod \"keystone-db-sync-j9j74\" (UID: \"605a5a60-d8c8-4128-954a-236dd7cbf8c4\") " pod="openstack/keystone-db-sync-j9j74" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.839000 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb646\" (UniqueName: \"kubernetes.io/projected/2b92fb37-6b8f-45de-92a6-c488e9c3f40b-kube-api-access-hb646\") pod \"neutron-db-create-fjkhc\" (UID: \"2b92fb37-6b8f-45de-92a6-c488e9c3f40b\") " pod="openstack/neutron-db-create-fjkhc" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.839081 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b92fb37-6b8f-45de-92a6-c488e9c3f40b-operator-scripts\") pod \"neutron-db-create-fjkhc\" (UID: \"2b92fb37-6b8f-45de-92a6-c488e9c3f40b\") " pod="openstack/neutron-db-create-fjkhc" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.840308 4795 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/debbb091-39b4-4d01-a468-7f9c7b65ff7e-operator-scripts\") pod \"heat-db-create-rnjzx\" (UID: \"debbb091-39b4-4d01-a468-7f9c7b65ff7e\") " pod="openstack/heat-db-create-rnjzx" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.840396 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxrlg\" (UniqueName: \"kubernetes.io/projected/debbb091-39b4-4d01-a468-7f9c7b65ff7e-kube-api-access-mxrlg\") pod \"heat-db-create-rnjzx\" (UID: \"debbb091-39b4-4d01-a468-7f9c7b65ff7e\") " pod="openstack/heat-db-create-rnjzx" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.841241 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b92fb37-6b8f-45de-92a6-c488e9c3f40b-operator-scripts\") pod \"neutron-db-create-fjkhc\" (UID: \"2b92fb37-6b8f-45de-92a6-c488e9c3f40b\") " pod="openstack/neutron-db-create-fjkhc" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.858583 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-703d-account-create-update-868mf"] Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.860333 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-703d-account-create-update-868mf" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.864836 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.865211 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb646\" (UniqueName: \"kubernetes.io/projected/2b92fb37-6b8f-45de-92a6-c488e9c3f40b-kube-api-access-hb646\") pod \"neutron-db-create-fjkhc\" (UID: \"2b92fb37-6b8f-45de-92a6-c488e9c3f40b\") " pod="openstack/neutron-db-create-fjkhc" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.881076 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-703d-account-create-update-868mf"] Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.941931 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/debbb091-39b4-4d01-a468-7f9c7b65ff7e-operator-scripts\") pod \"heat-db-create-rnjzx\" (UID: \"debbb091-39b4-4d01-a468-7f9c7b65ff7e\") " pod="openstack/heat-db-create-rnjzx" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.942355 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxrlg\" (UniqueName: \"kubernetes.io/projected/debbb091-39b4-4d01-a468-7f9c7b65ff7e-kube-api-access-mxrlg\") pod \"heat-db-create-rnjzx\" (UID: \"debbb091-39b4-4d01-a468-7f9c7b65ff7e\") " pod="openstack/heat-db-create-rnjzx" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.942423 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ntgq\" (UniqueName: \"kubernetes.io/projected/51454db5-2fe5-46bf-b01c-a302030baa3d-kube-api-access-7ntgq\") pod \"heat-5025-account-create-update-jzzfl\" (UID: \"51454db5-2fe5-46bf-b01c-a302030baa3d\") " 
pod="openstack/heat-5025-account-create-update-jzzfl" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.942623 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51454db5-2fe5-46bf-b01c-a302030baa3d-operator-scripts\") pod \"heat-5025-account-create-update-jzzfl\" (UID: \"51454db5-2fe5-46bf-b01c-a302030baa3d\") " pod="openstack/heat-5025-account-create-update-jzzfl" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.943038 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/debbb091-39b4-4d01-a468-7f9c7b65ff7e-operator-scripts\") pod \"heat-db-create-rnjzx\" (UID: \"debbb091-39b4-4d01-a468-7f9c7b65ff7e\") " pod="openstack/heat-db-create-rnjzx" Nov 29 08:04:02 crc kubenswrapper[4795]: I1129 08:04:02.961298 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxrlg\" (UniqueName: \"kubernetes.io/projected/debbb091-39b4-4d01-a468-7f9c7b65ff7e-kube-api-access-mxrlg\") pod \"heat-db-create-rnjzx\" (UID: \"debbb091-39b4-4d01-a468-7f9c7b65ff7e\") " pod="openstack/heat-db-create-rnjzx" Nov 29 08:04:03 crc kubenswrapper[4795]: I1129 08:04:03.044865 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx9sm\" (UniqueName: \"kubernetes.io/projected/307b1106-2c28-463f-8843-e2d397bf999a-kube-api-access-gx9sm\") pod \"neutron-703d-account-create-update-868mf\" (UID: \"307b1106-2c28-463f-8843-e2d397bf999a\") " pod="openstack/neutron-703d-account-create-update-868mf" Nov 29 08:04:03 crc kubenswrapper[4795]: I1129 08:04:03.045001 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ntgq\" (UniqueName: \"kubernetes.io/projected/51454db5-2fe5-46bf-b01c-a302030baa3d-kube-api-access-7ntgq\") pod \"heat-5025-account-create-update-jzzfl\" 
(UID: \"51454db5-2fe5-46bf-b01c-a302030baa3d\") " pod="openstack/heat-5025-account-create-update-jzzfl" Nov 29 08:04:03 crc kubenswrapper[4795]: I1129 08:04:03.045034 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/307b1106-2c28-463f-8843-e2d397bf999a-operator-scripts\") pod \"neutron-703d-account-create-update-868mf\" (UID: \"307b1106-2c28-463f-8843-e2d397bf999a\") " pod="openstack/neutron-703d-account-create-update-868mf" Nov 29 08:04:03 crc kubenswrapper[4795]: I1129 08:04:03.045116 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51454db5-2fe5-46bf-b01c-a302030baa3d-operator-scripts\") pod \"heat-5025-account-create-update-jzzfl\" (UID: \"51454db5-2fe5-46bf-b01c-a302030baa3d\") " pod="openstack/heat-5025-account-create-update-jzzfl" Nov 29 08:04:03 crc kubenswrapper[4795]: I1129 08:04:03.045836 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51454db5-2fe5-46bf-b01c-a302030baa3d-operator-scripts\") pod \"heat-5025-account-create-update-jzzfl\" (UID: \"51454db5-2fe5-46bf-b01c-a302030baa3d\") " pod="openstack/heat-5025-account-create-update-jzzfl" Nov 29 08:04:03 crc kubenswrapper[4795]: I1129 08:04:03.051044 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-j9j74" Nov 29 08:04:03 crc kubenswrapper[4795]: I1129 08:04:03.051139 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-rnjzx" Nov 29 08:04:03 crc kubenswrapper[4795]: I1129 08:04:03.071680 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Nov 29 08:04:03 crc kubenswrapper[4795]: I1129 08:04:03.073906 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 29 08:04:03 crc kubenswrapper[4795]: I1129 08:04:03.077970 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ntgq\" (UniqueName: \"kubernetes.io/projected/51454db5-2fe5-46bf-b01c-a302030baa3d-kube-api-access-7ntgq\") pod \"heat-5025-account-create-update-jzzfl\" (UID: \"51454db5-2fe5-46bf-b01c-a302030baa3d\") " pod="openstack/heat-5025-account-create-update-jzzfl" Nov 29 08:04:03 crc kubenswrapper[4795]: I1129 08:04:03.078434 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Nov 29 08:04:03 crc kubenswrapper[4795]: I1129 08:04:03.081042 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 29 08:04:03 crc kubenswrapper[4795]: I1129 08:04:03.163807 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/307b1106-2c28-463f-8843-e2d397bf999a-operator-scripts\") pod \"neutron-703d-account-create-update-868mf\" (UID: \"307b1106-2c28-463f-8843-e2d397bf999a\") " pod="openstack/neutron-703d-account-create-update-868mf" Nov 29 08:04:03 crc kubenswrapper[4795]: I1129 08:04:03.164113 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx9sm\" (UniqueName: \"kubernetes.io/projected/307b1106-2c28-463f-8843-e2d397bf999a-kube-api-access-gx9sm\") pod \"neutron-703d-account-create-update-868mf\" (UID: \"307b1106-2c28-463f-8843-e2d397bf999a\") " pod="openstack/neutron-703d-account-create-update-868mf" Nov 29 08:04:03 crc kubenswrapper[4795]: I1129 08:04:03.166540 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/307b1106-2c28-463f-8843-e2d397bf999a-operator-scripts\") pod \"neutron-703d-account-create-update-868mf\" (UID: \"307b1106-2c28-463f-8843-e2d397bf999a\") " 
pod="openstack/neutron-703d-account-create-update-868mf" Nov 29 08:04:03 crc kubenswrapper[4795]: I1129 08:04:03.160868 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-fjkhc" Nov 29 08:04:03 crc kubenswrapper[4795]: I1129 08:04:03.185717 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-5025-account-create-update-jzzfl" Nov 29 08:04:03 crc kubenswrapper[4795]: I1129 08:04:03.205505 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx9sm\" (UniqueName: \"kubernetes.io/projected/307b1106-2c28-463f-8843-e2d397bf999a-kube-api-access-gx9sm\") pod \"neutron-703d-account-create-update-868mf\" (UID: \"307b1106-2c28-463f-8843-e2d397bf999a\") " pod="openstack/neutron-703d-account-create-update-868mf" Nov 29 08:04:03 crc kubenswrapper[4795]: I1129 08:04:03.279039 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1592ce2-fb7d-464f-b13d-09e287a45af6-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"a1592ce2-fb7d-464f-b13d-09e287a45af6\") " pod="openstack/mysqld-exporter-0" Nov 29 08:04:03 crc kubenswrapper[4795]: I1129 08:04:03.279664 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpbpt\" (UniqueName: \"kubernetes.io/projected/a1592ce2-fb7d-464f-b13d-09e287a45af6-kube-api-access-zpbpt\") pod \"mysqld-exporter-0\" (UID: \"a1592ce2-fb7d-464f-b13d-09e287a45af6\") " pod="openstack/mysqld-exporter-0" Nov 29 08:04:03 crc kubenswrapper[4795]: I1129 08:04:03.279793 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1592ce2-fb7d-464f-b13d-09e287a45af6-config-data\") pod \"mysqld-exporter-0\" (UID: \"a1592ce2-fb7d-464f-b13d-09e287a45af6\") " 
pod="openstack/mysqld-exporter-0" Nov 29 08:04:03 crc kubenswrapper[4795]: I1129 08:04:03.387036 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpbpt\" (UniqueName: \"kubernetes.io/projected/a1592ce2-fb7d-464f-b13d-09e287a45af6-kube-api-access-zpbpt\") pod \"mysqld-exporter-0\" (UID: \"a1592ce2-fb7d-464f-b13d-09e287a45af6\") " pod="openstack/mysqld-exporter-0" Nov 29 08:04:03 crc kubenswrapper[4795]: I1129 08:04:03.387358 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1592ce2-fb7d-464f-b13d-09e287a45af6-config-data\") pod \"mysqld-exporter-0\" (UID: \"a1592ce2-fb7d-464f-b13d-09e287a45af6\") " pod="openstack/mysqld-exporter-0" Nov 29 08:04:03 crc kubenswrapper[4795]: I1129 08:04:03.387488 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1592ce2-fb7d-464f-b13d-09e287a45af6-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"a1592ce2-fb7d-464f-b13d-09e287a45af6\") " pod="openstack/mysqld-exporter-0" Nov 29 08:04:03 crc kubenswrapper[4795]: I1129 08:04:03.395664 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1592ce2-fb7d-464f-b13d-09e287a45af6-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"a1592ce2-fb7d-464f-b13d-09e287a45af6\") " pod="openstack/mysqld-exporter-0" Nov 29 08:04:03 crc kubenswrapper[4795]: I1129 08:04:03.406515 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1592ce2-fb7d-464f-b13d-09e287a45af6-config-data\") pod \"mysqld-exporter-0\" (UID: \"a1592ce2-fb7d-464f-b13d-09e287a45af6\") " pod="openstack/mysqld-exporter-0" Nov 29 08:04:03 crc kubenswrapper[4795]: I1129 08:04:03.459785 4795 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/prometheus-metric-storage-0" podUID="063373b7-6898-409d-b792-d770a8f6f021" containerName="prometheus" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 29 08:04:03 crc kubenswrapper[4795]: I1129 08:04:03.465122 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpbpt\" (UniqueName: \"kubernetes.io/projected/a1592ce2-fb7d-464f-b13d-09e287a45af6-kube-api-access-zpbpt\") pod \"mysqld-exporter-0\" (UID: \"a1592ce2-fb7d-464f-b13d-09e287a45af6\") " pod="openstack/mysqld-exporter-0" Nov 29 08:04:03 crc kubenswrapper[4795]: I1129 08:04:03.496762 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-703d-account-create-update-868mf" Nov 29 08:04:03 crc kubenswrapper[4795]: I1129 08:04:03.704937 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 29 08:04:06 crc kubenswrapper[4795]: I1129 08:04:06.276555 4795 generic.go:334] "Generic (PLEG): container finished" podID="063373b7-6898-409d-b792-d770a8f6f021" containerID="aeeeceff929624dcc163051b37b80444664380c9bf11143d772c203d6bdf1fa3" exitCode=0 Nov 29 08:04:06 crc kubenswrapper[4795]: I1129 08:04:06.278104 4795 generic.go:334] "Generic (PLEG): container finished" podID="063373b7-6898-409d-b792-d770a8f6f021" containerID="7100a344cdd389fb34fb5aee6a3f40ec2312d4b0c7003459cc684cba8403e516" exitCode=0 Nov 29 08:04:06 crc kubenswrapper[4795]: I1129 08:04:06.278229 4795 generic.go:334] "Generic (PLEG): container finished" podID="063373b7-6898-409d-b792-d770a8f6f021" containerID="bcd84a11402b10454dd7031340f3d460680caa2f285ba45c386b62aad92ba57e" exitCode=0 Nov 29 08:04:06 crc kubenswrapper[4795]: I1129 08:04:06.302110 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"063373b7-6898-409d-b792-d770a8f6f021","Type":"ContainerDied","Data":"aeeeceff929624dcc163051b37b80444664380c9bf11143d772c203d6bdf1fa3"} Nov 29 08:04:06 crc kubenswrapper[4795]: I1129 08:04:06.302150 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"063373b7-6898-409d-b792-d770a8f6f021","Type":"ContainerDied","Data":"7100a344cdd389fb34fb5aee6a3f40ec2312d4b0c7003459cc684cba8403e516"} Nov 29 08:04:06 crc kubenswrapper[4795]: I1129 08:04:06.302160 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"063373b7-6898-409d-b792-d770a8f6f021","Type":"ContainerDied","Data":"bcd84a11402b10454dd7031340f3d460680caa2f285ba45c386b62aad92ba57e"} Nov 29 08:04:08 crc kubenswrapper[4795]: I1129 08:04:08.453241 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="063373b7-6898-409d-b792-d770a8f6f021" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.137:9090/-/ready\": dial tcp 10.217.0.137:9090: connect: connection refused" Nov 29 08:04:10 crc kubenswrapper[4795]: E1129 08:04:10.578661 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Nov 29 08:04:10 crc kubenswrapper[4795]: E1129 08:04:10.579245 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6sh84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-7lg28_openstack(082a7f2c-1081-4af8-91c8-60a13d787746): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Nov 29 08:04:10 crc kubenswrapper[4795]: E1129 08:04:10.580420 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-7lg28" podUID="082a7f2c-1081-4af8-91c8-60a13d787746" Nov 29 08:04:11 crc kubenswrapper[4795]: E1129 08:04:11.115781 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-swift-object:current-podified" Nov 29 08:04:11 crc kubenswrapper[4795]: E1129 08:04:11.116365 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:object-server,Image:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,Command:[/usr/bin/swift-object-server /etc/swift/object-server.conf.d -v],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:object,HostPort:0,ContainerPort:6200,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5b7h56h9dh94h67bh697h95h55hbh555h556h675h5fdh57dh579h5fbh64fh5c9h687hb6h678h5d4h549h54h98h8ch564h5bh5bch55dhc8hf8q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:swift,ReadOnly:false,MountPath:/srv/node/pv,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-swift,ReadOnly:false,MountPath:/etc/swift,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cache,ReadOnly:false,MountPath:/var/cache/swift,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lock,ReadOnly:false,MountPath:/var/lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ssvh9,ReadOnly:true,MountPat
h:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42445,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-storage-0_openstack(28437f9f-e92e-46d7-9ffb-fcdda5dea25e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.165464 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.322318 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/063373b7-6898-409d-b792-d770a8f6f021-thanos-prometheus-http-client-file\") pod \"063373b7-6898-409d-b792-d770a8f6f021\" (UID: \"063373b7-6898-409d-b792-d770a8f6f021\") " Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.322399 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/063373b7-6898-409d-b792-d770a8f6f021-web-config\") pod \"063373b7-6898-409d-b792-d770a8f6f021\" (UID: \"063373b7-6898-409d-b792-d770a8f6f021\") " Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.322481 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/063373b7-6898-409d-b792-d770a8f6f021-config-out\") pod \"063373b7-6898-409d-b792-d770a8f6f021\" (UID: \"063373b7-6898-409d-b792-d770a8f6f021\") " Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.322658 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89ch9\" (UniqueName: \"kubernetes.io/projected/063373b7-6898-409d-b792-d770a8f6f021-kube-api-access-89ch9\") pod \"063373b7-6898-409d-b792-d770a8f6f021\" (UID: \"063373b7-6898-409d-b792-d770a8f6f021\") " Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.322731 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"063373b7-6898-409d-b792-d770a8f6f021\" (UID: \"063373b7-6898-409d-b792-d770a8f6f021\") " Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.322762 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/063373b7-6898-409d-b792-d770a8f6f021-prometheus-metric-storage-rulefiles-0\") pod \"063373b7-6898-409d-b792-d770a8f6f021\" (UID: \"063373b7-6898-409d-b792-d770a8f6f021\") " Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.322799 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/063373b7-6898-409d-b792-d770a8f6f021-config\") pod \"063373b7-6898-409d-b792-d770a8f6f021\" (UID: \"063373b7-6898-409d-b792-d770a8f6f021\") " Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.322827 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/063373b7-6898-409d-b792-d770a8f6f021-tls-assets\") pod \"063373b7-6898-409d-b792-d770a8f6f021\" (UID: \"063373b7-6898-409d-b792-d770a8f6f021\") " Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.326923 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/063373b7-6898-409d-b792-d770a8f6f021-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "063373b7-6898-409d-b792-d770a8f6f021" (UID: "063373b7-6898-409d-b792-d770a8f6f021"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.353806 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/063373b7-6898-409d-b792-d770a8f6f021-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "063373b7-6898-409d-b792-d770a8f6f021" (UID: "063373b7-6898-409d-b792-d770a8f6f021"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.360700 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/063373b7-6898-409d-b792-d770a8f6f021-config-out" (OuterVolumeSpecName: "config-out") pod "063373b7-6898-409d-b792-d770a8f6f021" (UID: "063373b7-6898-409d-b792-d770a8f6f021"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.369378 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/063373b7-6898-409d-b792-d770a8f6f021-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "063373b7-6898-409d-b792-d770a8f6f021" (UID: "063373b7-6898-409d-b792-d770a8f6f021"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.369499 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/063373b7-6898-409d-b792-d770a8f6f021-config" (OuterVolumeSpecName: "config") pod "063373b7-6898-409d-b792-d770a8f6f021" (UID: "063373b7-6898-409d-b792-d770a8f6f021"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.369744 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/063373b7-6898-409d-b792-d770a8f6f021-kube-api-access-89ch9" (OuterVolumeSpecName: "kube-api-access-89ch9") pod "063373b7-6898-409d-b792-d770a8f6f021" (UID: "063373b7-6898-409d-b792-d770a8f6f021"). InnerVolumeSpecName "kube-api-access-89ch9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.374629 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.377058 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"063373b7-6898-409d-b792-d770a8f6f021","Type":"ContainerDied","Data":"05207f7e389e76235bf61effb8cd38b5460dff57b356f4a7773b9132b187506e"} Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.377147 4795 scope.go:117] "RemoveContainer" containerID="aeeeceff929624dcc163051b37b80444664380c9bf11143d772c203d6bdf1fa3" Nov 29 08:04:11 crc kubenswrapper[4795]: E1129 08:04:11.395739 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-7lg28" podUID="082a7f2c-1081-4af8-91c8-60a13d787746" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.397807 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "063373b7-6898-409d-b792-d770a8f6f021" (UID: "063373b7-6898-409d-b792-d770a8f6f021"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.402824 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/063373b7-6898-409d-b792-d770a8f6f021-web-config" (OuterVolumeSpecName: "web-config") pod "063373b7-6898-409d-b792-d770a8f6f021" (UID: "063373b7-6898-409d-b792-d770a8f6f021"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.429447 4795 scope.go:117] "RemoveContainer" containerID="7100a344cdd389fb34fb5aee6a3f40ec2312d4b0c7003459cc684cba8403e516" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.432300 4795 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/063373b7-6898-409d-b792-d770a8f6f021-config\") on node \"crc\" DevicePath \"\"" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.432346 4795 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/063373b7-6898-409d-b792-d770a8f6f021-tls-assets\") on node \"crc\" DevicePath \"\"" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.432367 4795 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/063373b7-6898-409d-b792-d770a8f6f021-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.432384 4795 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/063373b7-6898-409d-b792-d770a8f6f021-web-config\") on node \"crc\" DevicePath \"\"" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.432396 4795 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/063373b7-6898-409d-b792-d770a8f6f021-config-out\") on node \"crc\" DevicePath \"\"" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.432408 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89ch9\" (UniqueName: \"kubernetes.io/projected/063373b7-6898-409d-b792-d770a8f6f021-kube-api-access-89ch9\") on node \"crc\" DevicePath \"\"" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.432450 4795 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume 
\"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.432464 4795 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/063373b7-6898-409d-b792-d770a8f6f021-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.457362 4795 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.497416 4795 scope.go:117] "RemoveContainer" containerID="bcd84a11402b10454dd7031340f3d460680caa2f285ba45c386b62aad92ba57e" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.524921 4795 scope.go:117] "RemoveContainer" containerID="10ac0e6c9dea581ab810775fab5fa9d47c5e63e062410134dcdb0d1e701ee7d8" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.533838 4795 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Nov 29 08:04:11 crc kubenswrapper[4795]: E1129 08:04:11.569953 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"object-server\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"object-replicator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-object:current-podified\\\"\", failed to \"StartContainer\" for \"object-auditor\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-object:current-podified\\\"\", failed to \"StartContainer\" for \"object-updater\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-swift-object:current-podified\\\"\", failed to \"StartContainer\" for \"rsync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-object:current-podified\\\"\", failed to \"StartContainer\" for \"swift-recon-cron\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-object:current-podified\\\"\"]" pod="openstack/swift-storage-0" podUID="28437f9f-e92e-46d7-9ffb-fcdda5dea25e" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.727731 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-ef15-account-create-update-w8qbc"] Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.742918 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 29 08:04:11 crc kubenswrapper[4795]: W1129 08:04:11.744417 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e81170c_9273_44af_9017_18b86d36e4c9.slice/crio-529f387d3e9e31272b9746e48f16cf74f2316926cbc85ed3c39943e910a9670a WatchSource:0}: Error finding container 529f387d3e9e31272b9746e48f16cf74f2316926cbc85ed3c39943e910a9670a: Status 404 returned error can't find the container with id 529f387d3e9e31272b9746e48f16cf74f2316926cbc85ed3c39943e910a9670a Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.768822 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.820368 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 29 08:04:11 crc kubenswrapper[4795]: E1129 08:04:11.820913 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="063373b7-6898-409d-b792-d770a8f6f021" containerName="prometheus" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.820928 4795 
state_mem.go:107] "Deleted CPUSet assignment" podUID="063373b7-6898-409d-b792-d770a8f6f021" containerName="prometheus" Nov 29 08:04:11 crc kubenswrapper[4795]: E1129 08:04:11.820960 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="063373b7-6898-409d-b792-d770a8f6f021" containerName="config-reloader" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.820968 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="063373b7-6898-409d-b792-d770a8f6f021" containerName="config-reloader" Nov 29 08:04:11 crc kubenswrapper[4795]: E1129 08:04:11.820977 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="063373b7-6898-409d-b792-d770a8f6f021" containerName="init-config-reloader" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.820985 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="063373b7-6898-409d-b792-d770a8f6f021" containerName="init-config-reloader" Nov 29 08:04:11 crc kubenswrapper[4795]: E1129 08:04:11.821001 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="063373b7-6898-409d-b792-d770a8f6f021" containerName="thanos-sidecar" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.821011 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="063373b7-6898-409d-b792-d770a8f6f021" containerName="thanos-sidecar" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.821214 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="063373b7-6898-409d-b792-d770a8f6f021" containerName="thanos-sidecar" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.821229 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="063373b7-6898-409d-b792-d770a8f6f021" containerName="config-reloader" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.821242 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="063373b7-6898-409d-b792-d770a8f6f021" containerName="prometheus" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.823230 4795 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.829634 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-w8tlv" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.829754 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.829946 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.829634 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.830205 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.831144 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.833203 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.838008 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.941335 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.941404 4795 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.942625 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9301745a-4dd1-469a-a37f-465f65a063e4-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9301745a-4dd1-469a-a37f-465f65a063e4\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.942765 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9301745a-4dd1-469a-a37f-465f65a063e4-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9301745a-4dd1-469a-a37f-465f65a063e4\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.942889 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2g4m\" (UniqueName: \"kubernetes.io/projected/9301745a-4dd1-469a-a37f-465f65a063e4-kube-api-access-w2g4m\") pod \"prometheus-metric-storage-0\" (UID: \"9301745a-4dd1-469a-a37f-465f65a063e4\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.942992 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/9301745a-4dd1-469a-a37f-465f65a063e4-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9301745a-4dd1-469a-a37f-465f65a063e4\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.943106 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9301745a-4dd1-469a-a37f-465f65a063e4-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9301745a-4dd1-469a-a37f-465f65a063e4\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.943217 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9301745a-4dd1-469a-a37f-465f65a063e4-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9301745a-4dd1-469a-a37f-465f65a063e4\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.943325 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9301745a-4dd1-469a-a37f-465f65a063e4-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9301745a-4dd1-469a-a37f-465f65a063e4\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.943499 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9301745a-4dd1-469a-a37f-465f65a063e4-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"9301745a-4dd1-469a-a37f-465f65a063e4\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.943710 4795 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9301745a-4dd1-469a-a37f-465f65a063e4-config\") pod \"prometheus-metric-storage-0\" (UID: \"9301745a-4dd1-469a-a37f-465f65a063e4\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.943981 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"prometheus-metric-storage-0\" (UID: \"9301745a-4dd1-469a-a37f-465f65a063e4\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:11 crc kubenswrapper[4795]: I1129 08:04:11.944091 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9301745a-4dd1-469a-a37f-465f65a063e4-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9301745a-4dd1-469a-a37f-465f65a063e4\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:12 crc kubenswrapper[4795]: I1129 08:04:12.049164 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9301745a-4dd1-469a-a37f-465f65a063e4-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9301745a-4dd1-469a-a37f-465f65a063e4\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:12 crc kubenswrapper[4795]: I1129 08:04:12.050181 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9301745a-4dd1-469a-a37f-465f65a063e4-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9301745a-4dd1-469a-a37f-465f65a063e4\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:12 crc kubenswrapper[4795]: 
I1129 08:04:12.050308 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9301745a-4dd1-469a-a37f-465f65a063e4-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9301745a-4dd1-469a-a37f-465f65a063e4\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:12 crc kubenswrapper[4795]: I1129 08:04:12.050405 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9301745a-4dd1-469a-a37f-465f65a063e4-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"9301745a-4dd1-469a-a37f-465f65a063e4\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:12 crc kubenswrapper[4795]: I1129 08:04:12.050552 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9301745a-4dd1-469a-a37f-465f65a063e4-config\") pod \"prometheus-metric-storage-0\" (UID: \"9301745a-4dd1-469a-a37f-465f65a063e4\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:12 crc kubenswrapper[4795]: I1129 08:04:12.050716 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"prometheus-metric-storage-0\" (UID: \"9301745a-4dd1-469a-a37f-465f65a063e4\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:12 crc kubenswrapper[4795]: I1129 08:04:12.050840 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9301745a-4dd1-469a-a37f-465f65a063e4-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9301745a-4dd1-469a-a37f-465f65a063e4\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:12 crc kubenswrapper[4795]: I1129 08:04:12.051083 4795 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9301745a-4dd1-469a-a37f-465f65a063e4-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9301745a-4dd1-469a-a37f-465f65a063e4\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:12 crc kubenswrapper[4795]: I1129 08:04:12.051163 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9301745a-4dd1-469a-a37f-465f65a063e4-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9301745a-4dd1-469a-a37f-465f65a063e4\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:12 crc kubenswrapper[4795]: I1129 08:04:12.051292 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2g4m\" (UniqueName: \"kubernetes.io/projected/9301745a-4dd1-469a-a37f-465f65a063e4-kube-api-access-w2g4m\") pod \"prometheus-metric-storage-0\" (UID: \"9301745a-4dd1-469a-a37f-465f65a063e4\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:12 crc kubenswrapper[4795]: I1129 08:04:12.051491 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9301745a-4dd1-469a-a37f-465f65a063e4-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9301745a-4dd1-469a-a37f-465f65a063e4\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:12 crc kubenswrapper[4795]: I1129 08:04:12.052624 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9301745a-4dd1-469a-a37f-465f65a063e4-prometheus-metric-storage-rulefiles-0\") pod 
\"prometheus-metric-storage-0\" (UID: \"9301745a-4dd1-469a-a37f-465f65a063e4\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:12 crc kubenswrapper[4795]: I1129 08:04:12.053024 4795 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"prometheus-metric-storage-0\" (UID: \"9301745a-4dd1-469a-a37f-465f65a063e4\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:12 crc kubenswrapper[4795]: I1129 08:04:12.058537 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9301745a-4dd1-469a-a37f-465f65a063e4-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"9301745a-4dd1-469a-a37f-465f65a063e4\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:12 crc kubenswrapper[4795]: I1129 08:04:12.062963 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9301745a-4dd1-469a-a37f-465f65a063e4-config\") pod \"prometheus-metric-storage-0\" (UID: \"9301745a-4dd1-469a-a37f-465f65a063e4\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:12 crc kubenswrapper[4795]: I1129 08:04:12.063388 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9301745a-4dd1-469a-a37f-465f65a063e4-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9301745a-4dd1-469a-a37f-465f65a063e4\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:12 crc kubenswrapper[4795]: I1129 08:04:12.064832 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/9301745a-4dd1-469a-a37f-465f65a063e4-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9301745a-4dd1-469a-a37f-465f65a063e4\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:12 crc kubenswrapper[4795]: I1129 08:04:12.067714 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9301745a-4dd1-469a-a37f-465f65a063e4-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9301745a-4dd1-469a-a37f-465f65a063e4\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:12 crc kubenswrapper[4795]: I1129 08:04:12.076978 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9301745a-4dd1-469a-a37f-465f65a063e4-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9301745a-4dd1-469a-a37f-465f65a063e4\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:12 crc kubenswrapper[4795]: I1129 08:04:12.080516 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2g4m\" (UniqueName: \"kubernetes.io/projected/9301745a-4dd1-469a-a37f-465f65a063e4-kube-api-access-w2g4m\") pod \"prometheus-metric-storage-0\" (UID: \"9301745a-4dd1-469a-a37f-465f65a063e4\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:12 crc kubenswrapper[4795]: I1129 08:04:12.083711 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9301745a-4dd1-469a-a37f-465f65a063e4-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9301745a-4dd1-469a-a37f-465f65a063e4\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:12 crc kubenswrapper[4795]: I1129 08:04:12.085255 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9301745a-4dd1-469a-a37f-465f65a063e4-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9301745a-4dd1-469a-a37f-465f65a063e4\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:12 crc kubenswrapper[4795]: I1129 08:04:12.168430 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"prometheus-metric-storage-0\" (UID: \"9301745a-4dd1-469a-a37f-465f65a063e4\") " pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:12 crc kubenswrapper[4795]: W1129 08:04:12.185627 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1592ce2_fb7d_464f_b13d_09e287a45af6.slice/crio-b44c7cfc3b66e80f3820c397d817f0be2d6f243cb498317bafc51411c59480cd WatchSource:0}: Error finding container b44c7cfc3b66e80f3820c397d817f0be2d6f243cb498317bafc51411c59480cd: Status 404 returned error can't find the container with id b44c7cfc3b66e80f3820c397d817f0be2d6f243cb498317bafc51411c59480cd Nov 29 08:04:12 crc kubenswrapper[4795]: I1129 08:04:12.194028 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-cf9lf"] Nov 29 08:04:12 crc kubenswrapper[4795]: I1129 08:04:12.207508 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 29 08:04:12 crc kubenswrapper[4795]: I1129 08:04:12.301470 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="063373b7-6898-409d-b792-d770a8f6f021" path="/var/lib/kubelet/pods/063373b7-6898-409d-b792-d770a8f6f021/volumes" Nov 29 08:04:12 crc kubenswrapper[4795]: I1129 08:04:12.378742 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-mlxhw"] Nov 29 08:04:12 crc kubenswrapper[4795]: W1129 08:04:12.384363 4795 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf94ad4e7_724c_43f6_bf7d_a50cb51229d3.slice/crio-570c4cffd61cbbf175b55c498dc98f136a89e89e7c5bcef9a562b46be6a3546b WatchSource:0}: Error finding container 570c4cffd61cbbf175b55c498dc98f136a89e89e7c5bcef9a562b46be6a3546b: Status 404 returned error can't find the container with id 570c4cffd61cbbf175b55c498dc98f136a89e89e7c5bcef9a562b46be6a3546b Nov 29 08:04:12 crc kubenswrapper[4795]: W1129 08:04:12.388466 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod605a5a60_d8c8_4128_954a_236dd7cbf8c4.slice/crio-43d53d256505feacba324dc325539dabac6ee1e5318b6dd401d4a73cef93050c WatchSource:0}: Error finding container 43d53d256505feacba324dc325539dabac6ee1e5318b6dd401d4a73cef93050c: Status 404 returned error can't find the container with id 43d53d256505feacba324dc325539dabac6ee1e5318b6dd401d4a73cef93050c Nov 29 08:04:12 crc kubenswrapper[4795]: I1129 08:04:12.393409 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-703d-account-create-update-868mf"] Nov 29 08:04:12 crc kubenswrapper[4795]: I1129 08:04:12.405056 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-5025-account-create-update-jzzfl"] Nov 29 08:04:12 crc kubenswrapper[4795]: I1129 08:04:12.411898 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ef15-account-create-update-w8qbc" event={"ID":"5e81170c-9273-44af-9017-18b86d36e4c9","Type":"ContainerStarted","Data":"e57f367be37ab0f422bcbee87b20ed2949e33d3c7f2045b418f6cbc6111149e6"} Nov 29 08:04:12 crc kubenswrapper[4795]: I1129 08:04:12.412098 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ef15-account-create-update-w8qbc" event={"ID":"5e81170c-9273-44af-9017-18b86d36e4c9","Type":"ContainerStarted","Data":"529f387d3e9e31272b9746e48f16cf74f2316926cbc85ed3c39943e910a9670a"} Nov 29 08:04:12 
crc kubenswrapper[4795]: W1129 08:04:12.413189 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod727203c8_6d15_404c_8744_8308e5c7ced8.slice/crio-002f4a74d18d1a50570f3b922b9b63bd75699b9c7c156ccb76cba00e3b74dbc3 WatchSource:0}: Error finding container 002f4a74d18d1a50570f3b922b9b63bd75699b9c7c156ccb76cba00e3b74dbc3: Status 404 returned error can't find the container with id 002f4a74d18d1a50570f3b922b9b63bd75699b9c7c156ccb76cba00e3b74dbc3 Nov 29 08:04:12 crc kubenswrapper[4795]: I1129 08:04:12.413798 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-703d-account-create-update-868mf" event={"ID":"307b1106-2c28-463f-8843-e2d397bf999a","Type":"ContainerStarted","Data":"d5f7654fc826e92aad0c09a84e5e1c277da3c9e3c5b5113127735fea35e720ba"} Nov 29 08:04:12 crc kubenswrapper[4795]: I1129 08:04:12.416572 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"a1592ce2-fb7d-464f-b13d-09e287a45af6","Type":"ContainerStarted","Data":"b44c7cfc3b66e80f3820c397d817f0be2d6f243cb498317bafc51411c59480cd"} Nov 29 08:04:12 crc kubenswrapper[4795]: I1129 08:04:12.416762 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-j9j74"] Nov 29 08:04:12 crc kubenswrapper[4795]: I1129 08:04:12.421331 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"28437f9f-e92e-46d7-9ffb-fcdda5dea25e","Type":"ContainerStarted","Data":"9ecd3b9d5974240350ac232b6a872a379a24c3fd8afe6e3b73644702446ff138"} Nov 29 08:04:12 crc kubenswrapper[4795]: I1129 08:04:12.423661 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-cf9lf" event={"ID":"fb3e5f06-dfee-4af3-bbe8-38b5d3272220","Type":"ContainerStarted","Data":"e75fdab97bcad49a5c884806c4fa85378b166eb5008483dfeb933508a87dbfe3"} Nov 29 08:04:12 crc kubenswrapper[4795]: I1129 08:04:12.430065 4795 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-rnjzx"] Nov 29 08:04:12 crc kubenswrapper[4795]: I1129 08:04:12.445406 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-96b1-account-create-update-2xx9v"] Nov 29 08:04:12 crc kubenswrapper[4795]: I1129 08:04:12.447656 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-ef15-account-create-update-w8qbc" podStartSLOduration=10.447632133 podStartE2EDuration="10.447632133s" podCreationTimestamp="2025-11-29 08:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:04:12.428026426 +0000 UTC m=+1498.403602216" watchObservedRunningTime="2025-11-29 08:04:12.447632133 +0000 UTC m=+1498.423207923" Nov 29 08:04:12 crc kubenswrapper[4795]: E1129 08:04:12.454522 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"object-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-object:current-podified\\\"\", failed to \"StartContainer\" for \"object-replicator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-object:current-podified\\\"\", failed to \"StartContainer\" for \"object-auditor\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-object:current-podified\\\"\", failed to \"StartContainer\" for \"object-updater\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-object:current-podified\\\"\", failed to \"StartContainer\" for \"rsync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-object:current-podified\\\"\", failed to \"StartContainer\" for \"swift-recon-cron\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-swift-object:current-podified\\\"\"]" pod="openstack/swift-storage-0" podUID="28437f9f-e92e-46d7-9ffb-fcdda5dea25e" Nov 29 08:04:12 crc kubenswrapper[4795]: I1129 08:04:12.461925 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:12 crc kubenswrapper[4795]: I1129 08:04:12.537798 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-fjkhc"] Nov 29 08:04:12 crc kubenswrapper[4795]: W1129 08:04:12.553241 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b92fb37_6b8f_45de_92a6_c488e9c3f40b.slice/crio-9dc2a90a640e419e5dd126b3903055202f7e9f5428e03127fc78dd93320de1a6 WatchSource:0}: Error finding container 9dc2a90a640e419e5dd126b3903055202f7e9f5428e03127fc78dd93320de1a6: Status 404 returned error can't find the container with id 9dc2a90a640e419e5dd126b3903055202f7e9f5428e03127fc78dd93320de1a6 Nov 29 08:04:13 crc kubenswrapper[4795]: I1129 08:04:13.006095 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 29 08:04:13 crc kubenswrapper[4795]: W1129 08:04:13.011201 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9301745a_4dd1_469a_a37f_465f65a063e4.slice/crio-a272be23725baeba3b9ed6de835074475af981b75b152c713181ae94b700de14 WatchSource:0}: Error finding container a272be23725baeba3b9ed6de835074475af981b75b152c713181ae94b700de14: Status 404 returned error can't find the container with id a272be23725baeba3b9ed6de835074475af981b75b152c713181ae94b700de14 Nov 29 08:04:13 crc kubenswrapper[4795]: I1129 08:04:13.435316 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-5025-account-create-update-jzzfl" 
event={"ID":"51454db5-2fe5-46bf-b01c-a302030baa3d","Type":"ContainerStarted","Data":"e090cc896be124296704fa28498b8d0b5efd50e07a59f23625a526c639a0eaba"} Nov 29 08:04:13 crc kubenswrapper[4795]: I1129 08:04:13.437375 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fjkhc" event={"ID":"2b92fb37-6b8f-45de-92a6-c488e9c3f40b","Type":"ContainerStarted","Data":"9dc2a90a640e419e5dd126b3903055202f7e9f5428e03127fc78dd93320de1a6"} Nov 29 08:04:13 crc kubenswrapper[4795]: I1129 08:04:13.438551 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-j9j74" event={"ID":"605a5a60-d8c8-4128-954a-236dd7cbf8c4","Type":"ContainerStarted","Data":"43d53d256505feacba324dc325539dabac6ee1e5318b6dd401d4a73cef93050c"} Nov 29 08:04:13 crc kubenswrapper[4795]: I1129 08:04:13.439547 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-mlxhw" event={"ID":"f94ad4e7-724c-43f6-bf7d-a50cb51229d3","Type":"ContainerStarted","Data":"570c4cffd61cbbf175b55c498dc98f136a89e89e7c5bcef9a562b46be6a3546b"} Nov 29 08:04:13 crc kubenswrapper[4795]: I1129 08:04:13.441164 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-cf9lf" event={"ID":"fb3e5f06-dfee-4af3-bbe8-38b5d3272220","Type":"ContainerStarted","Data":"9940228c7b583513058642c80d5f172a1a560532de3c2d4e1fcdc65230125de5"} Nov 29 08:04:13 crc kubenswrapper[4795]: I1129 08:04:13.442422 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9301745a-4dd1-469a-a37f-465f65a063e4","Type":"ContainerStarted","Data":"a272be23725baeba3b9ed6de835074475af981b75b152c713181ae94b700de14"} Nov 29 08:04:13 crc kubenswrapper[4795]: I1129 08:04:13.443685 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-rnjzx" 
event={"ID":"debbb091-39b4-4d01-a468-7f9c7b65ff7e","Type":"ContainerStarted","Data":"093033fae8536edf553e1f0a0148b599a532f674b3dc458a0d74b37fe5621a83"} Nov 29 08:04:13 crc kubenswrapper[4795]: I1129 08:04:13.444854 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-96b1-account-create-update-2xx9v" event={"ID":"727203c8-6d15-404c-8744-8308e5c7ced8","Type":"ContainerStarted","Data":"002f4a74d18d1a50570f3b922b9b63bd75699b9c7c156ccb76cba00e3b74dbc3"} Nov 29 08:04:13 crc kubenswrapper[4795]: E1129 08:04:13.451645 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"object-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-object:current-podified\\\"\", failed to \"StartContainer\" for \"object-replicator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-object:current-podified\\\"\", failed to \"StartContainer\" for \"object-auditor\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-object:current-podified\\\"\", failed to \"StartContainer\" for \"object-updater\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-object:current-podified\\\"\", failed to \"StartContainer\" for \"rsync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-object:current-podified\\\"\", failed to \"StartContainer\" for \"swift-recon-cron\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-object:current-podified\\\"\"]" pod="openstack/swift-storage-0" podUID="28437f9f-e92e-46d7-9ffb-fcdda5dea25e" Nov 29 08:04:14 crc kubenswrapper[4795]: I1129 08:04:14.464480 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-5025-account-create-update-jzzfl" 
event={"ID":"51454db5-2fe5-46bf-b01c-a302030baa3d","Type":"ContainerStarted","Data":"9fe0bb7cbb4235ad426038d0c24f8f524f0d79c2d4ad74d050f8a2838af82749"} Nov 29 08:04:14 crc kubenswrapper[4795]: I1129 08:04:14.468121 4795 generic.go:334] "Generic (PLEG): container finished" podID="2b92fb37-6b8f-45de-92a6-c488e9c3f40b" containerID="7db55f65bc85a7cb487176c0a91de0a6ae7f421913f6c6e20ab01f4a0051fbcf" exitCode=0 Nov 29 08:04:14 crc kubenswrapper[4795]: I1129 08:04:14.468188 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fjkhc" event={"ID":"2b92fb37-6b8f-45de-92a6-c488e9c3f40b","Type":"ContainerDied","Data":"7db55f65bc85a7cb487176c0a91de0a6ae7f421913f6c6e20ab01f4a0051fbcf"} Nov 29 08:04:14 crc kubenswrapper[4795]: I1129 08:04:14.474011 4795 generic.go:334] "Generic (PLEG): container finished" podID="f94ad4e7-724c-43f6-bf7d-a50cb51229d3" containerID="d65f024389ef1d5aa608e4a8fb1087518135f5cd8f20e05e297a5bfff0cf1121" exitCode=0 Nov 29 08:04:14 crc kubenswrapper[4795]: I1129 08:04:14.474120 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-mlxhw" event={"ID":"f94ad4e7-724c-43f6-bf7d-a50cb51229d3","Type":"ContainerDied","Data":"d65f024389ef1d5aa608e4a8fb1087518135f5cd8f20e05e297a5bfff0cf1121"} Nov 29 08:04:14 crc kubenswrapper[4795]: I1129 08:04:14.477165 4795 generic.go:334] "Generic (PLEG): container finished" podID="fb3e5f06-dfee-4af3-bbe8-38b5d3272220" containerID="9940228c7b583513058642c80d5f172a1a560532de3c2d4e1fcdc65230125de5" exitCode=0 Nov 29 08:04:14 crc kubenswrapper[4795]: I1129 08:04:14.477272 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-cf9lf" event={"ID":"fb3e5f06-dfee-4af3-bbe8-38b5d3272220","Type":"ContainerDied","Data":"9940228c7b583513058642c80d5f172a1a560532de3c2d4e1fcdc65230125de5"} Nov 29 08:04:14 crc kubenswrapper[4795]: I1129 08:04:14.484450 4795 generic.go:334] "Generic (PLEG): container finished" 
podID="5e81170c-9273-44af-9017-18b86d36e4c9" containerID="e57f367be37ab0f422bcbee87b20ed2949e33d3c7f2045b418f6cbc6111149e6" exitCode=0 Nov 29 08:04:14 crc kubenswrapper[4795]: I1129 08:04:14.484539 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ef15-account-create-update-w8qbc" event={"ID":"5e81170c-9273-44af-9017-18b86d36e4c9","Type":"ContainerDied","Data":"e57f367be37ab0f422bcbee87b20ed2949e33d3c7f2045b418f6cbc6111149e6"} Nov 29 08:04:14 crc kubenswrapper[4795]: I1129 08:04:14.487293 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-703d-account-create-update-868mf" event={"ID":"307b1106-2c28-463f-8843-e2d397bf999a","Type":"ContainerStarted","Data":"f698ae92f9fee52d2ebd1c980cb6346e95539bc1c5d41c27cd57a201cf1950e5"} Nov 29 08:04:14 crc kubenswrapper[4795]: I1129 08:04:14.494382 4795 generic.go:334] "Generic (PLEG): container finished" podID="debbb091-39b4-4d01-a468-7f9c7b65ff7e" containerID="02d3bccc3eb069db5cdb13679eddeeaf3fcc70c21b41a2a903130dac092b2cff" exitCode=0 Nov 29 08:04:14 crc kubenswrapper[4795]: I1129 08:04:14.494494 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-rnjzx" event={"ID":"debbb091-39b4-4d01-a468-7f9c7b65ff7e","Type":"ContainerDied","Data":"02d3bccc3eb069db5cdb13679eddeeaf3fcc70c21b41a2a903130dac092b2cff"} Nov 29 08:04:14 crc kubenswrapper[4795]: I1129 08:04:14.501053 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-96b1-account-create-update-2xx9v" event={"ID":"727203c8-6d15-404c-8744-8308e5c7ced8","Type":"ContainerStarted","Data":"b9f07bb09ffcd836b0c31bbb418f75982da975a7bd60a41fabb99fd46a767f33"} Nov 29 08:04:14 crc kubenswrapper[4795]: I1129 08:04:14.580991 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-5025-account-create-update-jzzfl" podStartSLOduration=12.580965016 podStartE2EDuration="12.580965016s" podCreationTimestamp="2025-11-29 08:04:02 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:04:14.5325692 +0000 UTC m=+1500.508144990" watchObservedRunningTime="2025-11-29 08:04:14.580965016 +0000 UTC m=+1500.556540806" Nov 29 08:04:15 crc kubenswrapper[4795]: I1129 08:04:15.513300 4795 generic.go:334] "Generic (PLEG): container finished" podID="727203c8-6d15-404c-8744-8308e5c7ced8" containerID="b9f07bb09ffcd836b0c31bbb418f75982da975a7bd60a41fabb99fd46a767f33" exitCode=0 Nov 29 08:04:15 crc kubenswrapper[4795]: I1129 08:04:15.513398 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-96b1-account-create-update-2xx9v" event={"ID":"727203c8-6d15-404c-8744-8308e5c7ced8","Type":"ContainerDied","Data":"b9f07bb09ffcd836b0c31bbb418f75982da975a7bd60a41fabb99fd46a767f33"} Nov 29 08:04:15 crc kubenswrapper[4795]: I1129 08:04:15.516988 4795 generic.go:334] "Generic (PLEG): container finished" podID="51454db5-2fe5-46bf-b01c-a302030baa3d" containerID="9fe0bb7cbb4235ad426038d0c24f8f524f0d79c2d4ad74d050f8a2838af82749" exitCode=0 Nov 29 08:04:15 crc kubenswrapper[4795]: I1129 08:04:15.517066 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-5025-account-create-update-jzzfl" event={"ID":"51454db5-2fe5-46bf-b01c-a302030baa3d","Type":"ContainerDied","Data":"9fe0bb7cbb4235ad426038d0c24f8f524f0d79c2d4ad74d050f8a2838af82749"} Nov 29 08:04:15 crc kubenswrapper[4795]: I1129 08:04:15.519287 4795 generic.go:334] "Generic (PLEG): container finished" podID="307b1106-2c28-463f-8843-e2d397bf999a" containerID="f698ae92f9fee52d2ebd1c980cb6346e95539bc1c5d41c27cd57a201cf1950e5" exitCode=0 Nov 29 08:04:15 crc kubenswrapper[4795]: I1129 08:04:15.519548 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-703d-account-create-update-868mf" 
event={"ID":"307b1106-2c28-463f-8843-e2d397bf999a","Type":"ContainerDied","Data":"f698ae92f9fee52d2ebd1c980cb6346e95539bc1c5d41c27cd57a201cf1950e5"} Nov 29 08:04:16 crc kubenswrapper[4795]: E1129 08:04:16.079849 4795 kubelet_node_status.go:756] "Failed to set some node status fields" err="failed to validate nodeIP: route ip+net: no such network interface" node="crc" Nov 29 08:04:16 crc kubenswrapper[4795]: I1129 08:04:16.535465 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"a1592ce2-fb7d-464f-b13d-09e287a45af6","Type":"ContainerStarted","Data":"3569a57a1c5e12b2ec026d71f27e5f63f1ca059c182a1ad8555946b9b4b42dd9"} Nov 29 08:04:16 crc kubenswrapper[4795]: I1129 08:04:16.573323 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=10.550550605 podStartE2EDuration="13.57330385s" podCreationTimestamp="2025-11-29 08:04:03 +0000 UTC" firstStartedPulling="2025-11-29 08:04:12.216089369 +0000 UTC m=+1498.191665149" lastFinishedPulling="2025-11-29 08:04:15.238842604 +0000 UTC m=+1501.214418394" observedRunningTime="2025-11-29 08:04:16.563422129 +0000 UTC m=+1502.538997919" watchObservedRunningTime="2025-11-29 08:04:16.57330385 +0000 UTC m=+1502.548879640" Nov 29 08:04:17 crc kubenswrapper[4795]: I1129 08:04:17.548456 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9301745a-4dd1-469a-a37f-465f65a063e4","Type":"ContainerStarted","Data":"84a5c9a750b9ba725505a91a200f75536758295e7d4f0e70761bd3378dbe1398"} Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.000548 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-mlxhw" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.011197 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-fjkhc" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.025468 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-cf9lf" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.037264 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-rnjzx" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.046849 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-5025-account-create-update-jzzfl" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.047563 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f94ad4e7-724c-43f6-bf7d-a50cb51229d3-operator-scripts\") pod \"f94ad4e7-724c-43f6-bf7d-a50cb51229d3\" (UID: \"f94ad4e7-724c-43f6-bf7d-a50cb51229d3\") " Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.047648 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdcjd\" (UniqueName: \"kubernetes.io/projected/f94ad4e7-724c-43f6-bf7d-a50cb51229d3-kube-api-access-gdcjd\") pod \"f94ad4e7-724c-43f6-bf7d-a50cb51229d3\" (UID: \"f94ad4e7-724c-43f6-bf7d-a50cb51229d3\") " Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.047694 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b92fb37-6b8f-45de-92a6-c488e9c3f40b-operator-scripts\") pod \"2b92fb37-6b8f-45de-92a6-c488e9c3f40b\" (UID: \"2b92fb37-6b8f-45de-92a6-c488e9c3f40b\") " Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.047801 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb646\" (UniqueName: \"kubernetes.io/projected/2b92fb37-6b8f-45de-92a6-c488e9c3f40b-kube-api-access-hb646\") pod 
\"2b92fb37-6b8f-45de-92a6-c488e9c3f40b\" (UID: \"2b92fb37-6b8f-45de-92a6-c488e9c3f40b\") " Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.053079 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b92fb37-6b8f-45de-92a6-c488e9c3f40b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2b92fb37-6b8f-45de-92a6-c488e9c3f40b" (UID: "2b92fb37-6b8f-45de-92a6-c488e9c3f40b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.053999 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b92fb37-6b8f-45de-92a6-c488e9c3f40b-kube-api-access-hb646" (OuterVolumeSpecName: "kube-api-access-hb646") pod "2b92fb37-6b8f-45de-92a6-c488e9c3f40b" (UID: "2b92fb37-6b8f-45de-92a6-c488e9c3f40b"). InnerVolumeSpecName "kube-api-access-hb646". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.056958 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f94ad4e7-724c-43f6-bf7d-a50cb51229d3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f94ad4e7-724c-43f6-bf7d-a50cb51229d3" (UID: "f94ad4e7-724c-43f6-bf7d-a50cb51229d3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.057231 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f94ad4e7-724c-43f6-bf7d-a50cb51229d3-kube-api-access-gdcjd" (OuterVolumeSpecName: "kube-api-access-gdcjd") pod "f94ad4e7-724c-43f6-bf7d-a50cb51229d3" (UID: "f94ad4e7-724c-43f6-bf7d-a50cb51229d3"). InnerVolumeSpecName "kube-api-access-gdcjd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.071092 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-703d-account-create-update-868mf" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.123670 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-96b1-account-create-update-2xx9v" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.132250 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-ef15-account-create-update-w8qbc" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.149421 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51454db5-2fe5-46bf-b01c-a302030baa3d-operator-scripts\") pod \"51454db5-2fe5-46bf-b01c-a302030baa3d\" (UID: \"51454db5-2fe5-46bf-b01c-a302030baa3d\") " Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.149481 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/debbb091-39b4-4d01-a468-7f9c7b65ff7e-operator-scripts\") pod \"debbb091-39b4-4d01-a468-7f9c7b65ff7e\" (UID: \"debbb091-39b4-4d01-a468-7f9c7b65ff7e\") " Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.149549 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ntgq\" (UniqueName: \"kubernetes.io/projected/51454db5-2fe5-46bf-b01c-a302030baa3d-kube-api-access-7ntgq\") pod \"51454db5-2fe5-46bf-b01c-a302030baa3d\" (UID: \"51454db5-2fe5-46bf-b01c-a302030baa3d\") " Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.149609 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/727203c8-6d15-404c-8744-8308e5c7ced8-operator-scripts\") pod \"727203c8-6d15-404c-8744-8308e5c7ced8\" (UID: \"727203c8-6d15-404c-8744-8308e5c7ced8\") " Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.149730 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx9sm\" (UniqueName: \"kubernetes.io/projected/307b1106-2c28-463f-8843-e2d397bf999a-kube-api-access-gx9sm\") pod \"307b1106-2c28-463f-8843-e2d397bf999a\" (UID: \"307b1106-2c28-463f-8843-e2d397bf999a\") " Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.149787 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb3e5f06-dfee-4af3-bbe8-38b5d3272220-operator-scripts\") pod \"fb3e5f06-dfee-4af3-bbe8-38b5d3272220\" (UID: \"fb3e5f06-dfee-4af3-bbe8-38b5d3272220\") " Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.149847 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxrlg\" (UniqueName: \"kubernetes.io/projected/debbb091-39b4-4d01-a468-7f9c7b65ff7e-kube-api-access-mxrlg\") pod \"debbb091-39b4-4d01-a468-7f9c7b65ff7e\" (UID: \"debbb091-39b4-4d01-a468-7f9c7b65ff7e\") " Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.149877 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqwxh\" (UniqueName: \"kubernetes.io/projected/727203c8-6d15-404c-8744-8308e5c7ced8-kube-api-access-kqwxh\") pod \"727203c8-6d15-404c-8744-8308e5c7ced8\" (UID: \"727203c8-6d15-404c-8744-8308e5c7ced8\") " Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.149908 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/307b1106-2c28-463f-8843-e2d397bf999a-operator-scripts\") pod \"307b1106-2c28-463f-8843-e2d397bf999a\" (UID: \"307b1106-2c28-463f-8843-e2d397bf999a\") " 
Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.149955 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lx47s\" (UniqueName: \"kubernetes.io/projected/fb3e5f06-dfee-4af3-bbe8-38b5d3272220-kube-api-access-lx47s\") pod \"fb3e5f06-dfee-4af3-bbe8-38b5d3272220\" (UID: \"fb3e5f06-dfee-4af3-bbe8-38b5d3272220\") " Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.150071 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/debbb091-39b4-4d01-a468-7f9c7b65ff7e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "debbb091-39b4-4d01-a468-7f9c7b65ff7e" (UID: "debbb091-39b4-4d01-a468-7f9c7b65ff7e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.150449 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51454db5-2fe5-46bf-b01c-a302030baa3d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "51454db5-2fe5-46bf-b01c-a302030baa3d" (UID: "51454db5-2fe5-46bf-b01c-a302030baa3d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.150759 4795 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f94ad4e7-724c-43f6-bf7d-a50cb51229d3-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.150898 4795 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51454db5-2fe5-46bf-b01c-a302030baa3d-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.150914 4795 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/debbb091-39b4-4d01-a468-7f9c7b65ff7e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.150927 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdcjd\" (UniqueName: \"kubernetes.io/projected/f94ad4e7-724c-43f6-bf7d-a50cb51229d3-kube-api-access-gdcjd\") on node \"crc\" DevicePath \"\"" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.150939 4795 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b92fb37-6b8f-45de-92a6-c488e9c3f40b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.150960 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb646\" (UniqueName: \"kubernetes.io/projected/2b92fb37-6b8f-45de-92a6-c488e9c3f40b-kube-api-access-hb646\") on node \"crc\" DevicePath \"\"" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.150908 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb3e5f06-dfee-4af3-bbe8-38b5d3272220-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"fb3e5f06-dfee-4af3-bbe8-38b5d3272220" (UID: "fb3e5f06-dfee-4af3-bbe8-38b5d3272220"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.151252 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/727203c8-6d15-404c-8744-8308e5c7ced8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "727203c8-6d15-404c-8744-8308e5c7ced8" (UID: "727203c8-6d15-404c-8744-8308e5c7ced8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.157456 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/727203c8-6d15-404c-8744-8308e5c7ced8-kube-api-access-kqwxh" (OuterVolumeSpecName: "kube-api-access-kqwxh") pod "727203c8-6d15-404c-8744-8308e5c7ced8" (UID: "727203c8-6d15-404c-8744-8308e5c7ced8"). InnerVolumeSpecName "kube-api-access-kqwxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.160662 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/307b1106-2c28-463f-8843-e2d397bf999a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "307b1106-2c28-463f-8843-e2d397bf999a" (UID: "307b1106-2c28-463f-8843-e2d397bf999a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.160830 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/307b1106-2c28-463f-8843-e2d397bf999a-kube-api-access-gx9sm" (OuterVolumeSpecName: "kube-api-access-gx9sm") pod "307b1106-2c28-463f-8843-e2d397bf999a" (UID: "307b1106-2c28-463f-8843-e2d397bf999a"). InnerVolumeSpecName "kube-api-access-gx9sm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.161777 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb3e5f06-dfee-4af3-bbe8-38b5d3272220-kube-api-access-lx47s" (OuterVolumeSpecName: "kube-api-access-lx47s") pod "fb3e5f06-dfee-4af3-bbe8-38b5d3272220" (UID: "fb3e5f06-dfee-4af3-bbe8-38b5d3272220"). InnerVolumeSpecName "kube-api-access-lx47s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.162840 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/debbb091-39b4-4d01-a468-7f9c7b65ff7e-kube-api-access-mxrlg" (OuterVolumeSpecName: "kube-api-access-mxrlg") pod "debbb091-39b4-4d01-a468-7f9c7b65ff7e" (UID: "debbb091-39b4-4d01-a468-7f9c7b65ff7e"). InnerVolumeSpecName "kube-api-access-mxrlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.195493 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51454db5-2fe5-46bf-b01c-a302030baa3d-kube-api-access-7ntgq" (OuterVolumeSpecName: "kube-api-access-7ntgq") pod "51454db5-2fe5-46bf-b01c-a302030baa3d" (UID: "51454db5-2fe5-46bf-b01c-a302030baa3d"). InnerVolumeSpecName "kube-api-access-7ntgq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.252434 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e81170c-9273-44af-9017-18b86d36e4c9-operator-scripts\") pod \"5e81170c-9273-44af-9017-18b86d36e4c9\" (UID: \"5e81170c-9273-44af-9017-18b86d36e4c9\") " Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.252622 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e81170c-9273-44af-9017-18b86d36e4c9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5e81170c-9273-44af-9017-18b86d36e4c9" (UID: "5e81170c-9273-44af-9017-18b86d36e4c9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.253932 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v872p\" (UniqueName: \"kubernetes.io/projected/5e81170c-9273-44af-9017-18b86d36e4c9-kube-api-access-v872p\") pod \"5e81170c-9273-44af-9017-18b86d36e4c9\" (UID: \"5e81170c-9273-44af-9017-18b86d36e4c9\") " Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.254604 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxrlg\" (UniqueName: \"kubernetes.io/projected/debbb091-39b4-4d01-a468-7f9c7b65ff7e-kube-api-access-mxrlg\") on node \"crc\" DevicePath \"\"" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.254630 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqwxh\" (UniqueName: \"kubernetes.io/projected/727203c8-6d15-404c-8744-8308e5c7ced8-kube-api-access-kqwxh\") on node \"crc\" DevicePath \"\"" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.254645 4795 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/307b1106-2c28-463f-8843-e2d397bf999a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.254658 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lx47s\" (UniqueName: \"kubernetes.io/projected/fb3e5f06-dfee-4af3-bbe8-38b5d3272220-kube-api-access-lx47s\") on node \"crc\" DevicePath \"\"" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.254672 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ntgq\" (UniqueName: \"kubernetes.io/projected/51454db5-2fe5-46bf-b01c-a302030baa3d-kube-api-access-7ntgq\") on node \"crc\" DevicePath \"\"" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.254683 4795 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/727203c8-6d15-404c-8744-8308e5c7ced8-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.254695 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gx9sm\" (UniqueName: \"kubernetes.io/projected/307b1106-2c28-463f-8843-e2d397bf999a-kube-api-access-gx9sm\") on node \"crc\" DevicePath \"\"" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.254705 4795 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e81170c-9273-44af-9017-18b86d36e4c9-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.254718 4795 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb3e5f06-dfee-4af3-bbe8-38b5d3272220-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.255670 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/5e81170c-9273-44af-9017-18b86d36e4c9-kube-api-access-v872p" (OuterVolumeSpecName: "kube-api-access-v872p") pod "5e81170c-9273-44af-9017-18b86d36e4c9" (UID: "5e81170c-9273-44af-9017-18b86d36e4c9"). InnerVolumeSpecName "kube-api-access-v872p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.356508 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v872p\" (UniqueName: \"kubernetes.io/projected/5e81170c-9273-44af-9017-18b86d36e4c9-kube-api-access-v872p\") on node \"crc\" DevicePath \"\"" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.567535 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ef15-account-create-update-w8qbc" event={"ID":"5e81170c-9273-44af-9017-18b86d36e4c9","Type":"ContainerDied","Data":"529f387d3e9e31272b9746e48f16cf74f2316926cbc85ed3c39943e910a9670a"} Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.567566 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-ef15-account-create-update-w8qbc" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.567574 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="529f387d3e9e31272b9746e48f16cf74f2316926cbc85ed3c39943e910a9670a" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.569927 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-703d-account-create-update-868mf" event={"ID":"307b1106-2c28-463f-8843-e2d397bf999a","Type":"ContainerDied","Data":"d5f7654fc826e92aad0c09a84e5e1c277da3c9e3c5b5113127735fea35e720ba"} Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.569996 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5f7654fc826e92aad0c09a84e5e1c277da3c9e3c5b5113127735fea35e720ba" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.570079 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-703d-account-create-update-868mf" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.572514 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-rnjzx" event={"ID":"debbb091-39b4-4d01-a468-7f9c7b65ff7e","Type":"ContainerDied","Data":"093033fae8536edf553e1f0a0148b599a532f674b3dc458a0d74b37fe5621a83"} Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.572539 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="093033fae8536edf553e1f0a0148b599a532f674b3dc458a0d74b37fe5621a83" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.572606 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-rnjzx" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.576810 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-96b1-account-create-update-2xx9v" event={"ID":"727203c8-6d15-404c-8744-8308e5c7ced8","Type":"ContainerDied","Data":"002f4a74d18d1a50570f3b922b9b63bd75699b9c7c156ccb76cba00e3b74dbc3"} Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.576839 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-96b1-account-create-update-2xx9v" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.576853 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="002f4a74d18d1a50570f3b922b9b63bd75699b9c7c156ccb76cba00e3b74dbc3" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.579057 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-5025-account-create-update-jzzfl" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.579109 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-5025-account-create-update-jzzfl" event={"ID":"51454db5-2fe5-46bf-b01c-a302030baa3d","Type":"ContainerDied","Data":"e090cc896be124296704fa28498b8d0b5efd50e07a59f23625a526c639a0eaba"} Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.579144 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e090cc896be124296704fa28498b8d0b5efd50e07a59f23625a526c639a0eaba" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.580515 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-fjkhc" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.580526 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fjkhc" event={"ID":"2b92fb37-6b8f-45de-92a6-c488e9c3f40b","Type":"ContainerDied","Data":"9dc2a90a640e419e5dd126b3903055202f7e9f5428e03127fc78dd93320de1a6"} Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.580576 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9dc2a90a640e419e5dd126b3903055202f7e9f5428e03127fc78dd93320de1a6" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.582076 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-mlxhw" event={"ID":"f94ad4e7-724c-43f6-bf7d-a50cb51229d3","Type":"ContainerDied","Data":"570c4cffd61cbbf175b55c498dc98f136a89e89e7c5bcef9a562b46be6a3546b"} Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.582104 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="570c4cffd61cbbf175b55c498dc98f136a89e89e7c5bcef9a562b46be6a3546b" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.582157 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-mlxhw" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.585514 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-cf9lf" event={"ID":"fb3e5f06-dfee-4af3-bbe8-38b5d3272220","Type":"ContainerDied","Data":"e75fdab97bcad49a5c884806c4fa85378b166eb5008483dfeb933508a87dbfe3"} Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.585539 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e75fdab97bcad49a5c884806c4fa85378b166eb5008483dfeb933508a87dbfe3" Nov 29 08:04:19 crc kubenswrapper[4795]: I1129 08:04:19.585572 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-cf9lf" Nov 29 08:04:20 crc kubenswrapper[4795]: E1129 08:04:20.457260 4795 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddebbb091_39b4_4d01_a468_7f9c7b65ff7e.slice/crio-02d3bccc3eb069db5cdb13679eddeeaf3fcc70c21b41a2a903130dac092b2cff.scope\": RecentStats: unable to find data in memory cache]" Nov 29 08:04:20 crc kubenswrapper[4795]: I1129 08:04:20.595301 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-j9j74" event={"ID":"605a5a60-d8c8-4128-954a-236dd7cbf8c4","Type":"ContainerStarted","Data":"9c07469073a6d62e6f3d1ce4dc2c82fba1c46f96c4c46c031740f2b80b8681b9"} Nov 29 08:04:20 crc kubenswrapper[4795]: I1129 08:04:20.616402 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-j9j74" podStartSLOduration=11.304310104 podStartE2EDuration="18.61638551s" podCreationTimestamp="2025-11-29 08:04:02 +0000 UTC" firstStartedPulling="2025-11-29 08:04:12.398066604 +0000 UTC m=+1498.373642404" lastFinishedPulling="2025-11-29 08:04:19.71014202 +0000 UTC m=+1505.685717810" observedRunningTime="2025-11-29 08:04:20.61110557 +0000 UTC m=+1506.586681370" watchObservedRunningTime="2025-11-29 08:04:20.61638551 +0000 UTC m=+1506.591961300" Nov 29 08:04:23 crc kubenswrapper[4795]: E1129 08:04:23.234484 4795 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddebbb091_39b4_4d01_a468_7f9c7b65ff7e.slice/crio-02d3bccc3eb069db5cdb13679eddeeaf3fcc70c21b41a2a903130dac092b2cff.scope\": RecentStats: unable to find data in memory cache]" Nov 29 08:04:24 crc kubenswrapper[4795]: I1129 08:04:24.667423 4795 generic.go:334] "Generic (PLEG): container finished" podID="9301745a-4dd1-469a-a37f-465f65a063e4" 
containerID="84a5c9a750b9ba725505a91a200f75536758295e7d4f0e70761bd3378dbe1398" exitCode=0 Nov 29 08:04:24 crc kubenswrapper[4795]: I1129 08:04:24.667499 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9301745a-4dd1-469a-a37f-465f65a063e4","Type":"ContainerDied","Data":"84a5c9a750b9ba725505a91a200f75536758295e7d4f0e70761bd3378dbe1398"} Nov 29 08:04:24 crc kubenswrapper[4795]: I1129 08:04:24.670709 4795 generic.go:334] "Generic (PLEG): container finished" podID="605a5a60-d8c8-4128-954a-236dd7cbf8c4" containerID="9c07469073a6d62e6f3d1ce4dc2c82fba1c46f96c4c46c031740f2b80b8681b9" exitCode=0 Nov 29 08:04:24 crc kubenswrapper[4795]: I1129 08:04:24.670742 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-j9j74" event={"ID":"605a5a60-d8c8-4128-954a-236dd7cbf8c4","Type":"ContainerDied","Data":"9c07469073a6d62e6f3d1ce4dc2c82fba1c46f96c4c46c031740f2b80b8681b9"} Nov 29 08:04:25 crc kubenswrapper[4795]: I1129 08:04:25.688497 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9301745a-4dd1-469a-a37f-465f65a063e4","Type":"ContainerStarted","Data":"09fe2477163c05361e0e5b23eaafe3345ea439836de15ef675cc3c482cf059d6"} Nov 29 08:04:25 crc kubenswrapper[4795]: I1129 08:04:25.690893 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7lg28" event={"ID":"082a7f2c-1081-4af8-91c8-60a13d787746","Type":"ContainerStarted","Data":"245d605dd8455419c5b160025a93f2e21c118a545505ae76282d573abadf30be"} Nov 29 08:04:25 crc kubenswrapper[4795]: I1129 08:04:25.709398 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-7lg28" podStartSLOduration=2.172038616 podStartE2EDuration="35.709378785s" podCreationTimestamp="2025-11-29 08:03:50 +0000 UTC" firstStartedPulling="2025-11-29 08:03:51.28143572 +0000 UTC m=+1477.257011510" lastFinishedPulling="2025-11-29 
08:04:24.818775889 +0000 UTC m=+1510.794351679" observedRunningTime="2025-11-29 08:04:25.70814643 +0000 UTC m=+1511.683722220" watchObservedRunningTime="2025-11-29 08:04:25.709378785 +0000 UTC m=+1511.684954575" Nov 29 08:04:26 crc kubenswrapper[4795]: I1129 08:04:26.114038 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-j9j74" Nov 29 08:04:26 crc kubenswrapper[4795]: I1129 08:04:26.233175 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/605a5a60-d8c8-4128-954a-236dd7cbf8c4-config-data\") pod \"605a5a60-d8c8-4128-954a-236dd7cbf8c4\" (UID: \"605a5a60-d8c8-4128-954a-236dd7cbf8c4\") " Nov 29 08:04:26 crc kubenswrapper[4795]: I1129 08:04:26.233427 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/605a5a60-d8c8-4128-954a-236dd7cbf8c4-combined-ca-bundle\") pod \"605a5a60-d8c8-4128-954a-236dd7cbf8c4\" (UID: \"605a5a60-d8c8-4128-954a-236dd7cbf8c4\") " Nov 29 08:04:26 crc kubenswrapper[4795]: I1129 08:04:26.233623 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfplw\" (UniqueName: \"kubernetes.io/projected/605a5a60-d8c8-4128-954a-236dd7cbf8c4-kube-api-access-dfplw\") pod \"605a5a60-d8c8-4128-954a-236dd7cbf8c4\" (UID: \"605a5a60-d8c8-4128-954a-236dd7cbf8c4\") " Nov 29 08:04:26 crc kubenswrapper[4795]: I1129 08:04:26.266972 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/605a5a60-d8c8-4128-954a-236dd7cbf8c4-kube-api-access-dfplw" (OuterVolumeSpecName: "kube-api-access-dfplw") pod "605a5a60-d8c8-4128-954a-236dd7cbf8c4" (UID: "605a5a60-d8c8-4128-954a-236dd7cbf8c4"). InnerVolumeSpecName "kube-api-access-dfplw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:04:26 crc kubenswrapper[4795]: I1129 08:04:26.273922 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/605a5a60-d8c8-4128-954a-236dd7cbf8c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "605a5a60-d8c8-4128-954a-236dd7cbf8c4" (UID: "605a5a60-d8c8-4128-954a-236dd7cbf8c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:04:26 crc kubenswrapper[4795]: I1129 08:04:26.335759 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/605a5a60-d8c8-4128-954a-236dd7cbf8c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:04:26 crc kubenswrapper[4795]: I1129 08:04:26.335790 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfplw\" (UniqueName: \"kubernetes.io/projected/605a5a60-d8c8-4128-954a-236dd7cbf8c4-kube-api-access-dfplw\") on node \"crc\" DevicePath \"\"" Nov 29 08:04:26 crc kubenswrapper[4795]: I1129 08:04:26.349574 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/605a5a60-d8c8-4128-954a-236dd7cbf8c4-config-data" (OuterVolumeSpecName: "config-data") pod "605a5a60-d8c8-4128-954a-236dd7cbf8c4" (UID: "605a5a60-d8c8-4128-954a-236dd7cbf8c4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:04:26 crc kubenswrapper[4795]: I1129 08:04:26.438309 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/605a5a60-d8c8-4128-954a-236dd7cbf8c4-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:04:26 crc kubenswrapper[4795]: I1129 08:04:26.707822 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-j9j74" event={"ID":"605a5a60-d8c8-4128-954a-236dd7cbf8c4","Type":"ContainerDied","Data":"43d53d256505feacba324dc325539dabac6ee1e5318b6dd401d4a73cef93050c"} Nov 29 08:04:26 crc kubenswrapper[4795]: I1129 08:04:26.707875 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43d53d256505feacba324dc325539dabac6ee1e5318b6dd401d4a73cef93050c" Nov 29 08:04:26 crc kubenswrapper[4795]: I1129 08:04:26.707945 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-j9j74" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.357536 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-ts7lc"] Nov 29 08:04:27 crc kubenswrapper[4795]: E1129 08:04:27.358786 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb3e5f06-dfee-4af3-bbe8-38b5d3272220" containerName="mariadb-database-create" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.358812 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb3e5f06-dfee-4af3-bbe8-38b5d3272220" containerName="mariadb-database-create" Nov 29 08:04:27 crc kubenswrapper[4795]: E1129 08:04:27.358827 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51454db5-2fe5-46bf-b01c-a302030baa3d" containerName="mariadb-account-create-update" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.358835 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="51454db5-2fe5-46bf-b01c-a302030baa3d" 
containerName="mariadb-account-create-update" Nov 29 08:04:27 crc kubenswrapper[4795]: E1129 08:04:27.358852 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="307b1106-2c28-463f-8843-e2d397bf999a" containerName="mariadb-account-create-update" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.358859 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="307b1106-2c28-463f-8843-e2d397bf999a" containerName="mariadb-account-create-update" Nov 29 08:04:27 crc kubenswrapper[4795]: E1129 08:04:27.358875 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="605a5a60-d8c8-4128-954a-236dd7cbf8c4" containerName="keystone-db-sync" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.358883 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="605a5a60-d8c8-4128-954a-236dd7cbf8c4" containerName="keystone-db-sync" Nov 29 08:04:27 crc kubenswrapper[4795]: E1129 08:04:27.358899 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="727203c8-6d15-404c-8744-8308e5c7ced8" containerName="mariadb-account-create-update" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.358906 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="727203c8-6d15-404c-8744-8308e5c7ced8" containerName="mariadb-account-create-update" Nov 29 08:04:27 crc kubenswrapper[4795]: E1129 08:04:27.358933 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e81170c-9273-44af-9017-18b86d36e4c9" containerName="mariadb-account-create-update" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.358940 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e81170c-9273-44af-9017-18b86d36e4c9" containerName="mariadb-account-create-update" Nov 29 08:04:27 crc kubenswrapper[4795]: E1129 08:04:27.358950 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f94ad4e7-724c-43f6-bf7d-a50cb51229d3" containerName="mariadb-database-create" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.358957 
4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="f94ad4e7-724c-43f6-bf7d-a50cb51229d3" containerName="mariadb-database-create" Nov 29 08:04:27 crc kubenswrapper[4795]: E1129 08:04:27.358978 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="debbb091-39b4-4d01-a468-7f9c7b65ff7e" containerName="mariadb-database-create" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.358987 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="debbb091-39b4-4d01-a468-7f9c7b65ff7e" containerName="mariadb-database-create" Nov 29 08:04:27 crc kubenswrapper[4795]: E1129 08:04:27.359001 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b92fb37-6b8f-45de-92a6-c488e9c3f40b" containerName="mariadb-database-create" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.359008 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b92fb37-6b8f-45de-92a6-c488e9c3f40b" containerName="mariadb-database-create" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.359285 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e81170c-9273-44af-9017-18b86d36e4c9" containerName="mariadb-account-create-update" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.359304 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b92fb37-6b8f-45de-92a6-c488e9c3f40b" containerName="mariadb-database-create" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.359316 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="307b1106-2c28-463f-8843-e2d397bf999a" containerName="mariadb-account-create-update" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.359331 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="51454db5-2fe5-46bf-b01c-a302030baa3d" containerName="mariadb-account-create-update" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.359349 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb3e5f06-dfee-4af3-bbe8-38b5d3272220" 
containerName="mariadb-database-create" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.359358 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="debbb091-39b4-4d01-a468-7f9c7b65ff7e" containerName="mariadb-database-create" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.359371 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="605a5a60-d8c8-4128-954a-236dd7cbf8c4" containerName="keystone-db-sync" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.359379 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="f94ad4e7-724c-43f6-bf7d-a50cb51229d3" containerName="mariadb-database-create" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.359400 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="727203c8-6d15-404c-8744-8308e5c7ced8" containerName="mariadb-account-create-update" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.361235 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-ts7lc" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.403681 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-ts7lc"] Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.430697 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-w9kg5"] Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.432406 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-w9kg5" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.438308 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.438996 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.457941 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-j4r47" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.458238 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.458361 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.467660 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-w9kg5"] Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.535205 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48fbb037-6558-4de9-ad75-5e1e26ed6b45-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9d85d47c-ts7lc\" (UID: \"48fbb037-6558-4de9-ad75-5e1e26ed6b45\") " pod="openstack/dnsmasq-dns-5c9d85d47c-ts7lc" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.535745 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48fbb037-6558-4de9-ad75-5e1e26ed6b45-dns-svc\") pod \"dnsmasq-dns-5c9d85d47c-ts7lc\" (UID: \"48fbb037-6558-4de9-ad75-5e1e26ed6b45\") " pod="openstack/dnsmasq-dns-5c9d85d47c-ts7lc" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.535791 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7bb6a7d2-f5f5-4684-b085-66101954d8e9-fernet-keys\") pod \"keystone-bootstrap-w9kg5\" (UID: \"7bb6a7d2-f5f5-4684-b085-66101954d8e9\") " pod="openstack/keystone-bootstrap-w9kg5" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.535821 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48fbb037-6558-4de9-ad75-5e1e26ed6b45-config\") pod \"dnsmasq-dns-5c9d85d47c-ts7lc\" (UID: \"48fbb037-6558-4de9-ad75-5e1e26ed6b45\") " pod="openstack/dnsmasq-dns-5c9d85d47c-ts7lc" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.535896 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bb6a7d2-f5f5-4684-b085-66101954d8e9-config-data\") pod \"keystone-bootstrap-w9kg5\" (UID: \"7bb6a7d2-f5f5-4684-b085-66101954d8e9\") " pod="openstack/keystone-bootstrap-w9kg5" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.535922 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bb6a7d2-f5f5-4684-b085-66101954d8e9-scripts\") pod \"keystone-bootstrap-w9kg5\" (UID: \"7bb6a7d2-f5f5-4684-b085-66101954d8e9\") " pod="openstack/keystone-bootstrap-w9kg5" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.535949 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf6mw\" (UniqueName: \"kubernetes.io/projected/48fbb037-6558-4de9-ad75-5e1e26ed6b45-kube-api-access-mf6mw\") pod \"dnsmasq-dns-5c9d85d47c-ts7lc\" (UID: \"48fbb037-6558-4de9-ad75-5e1e26ed6b45\") " pod="openstack/dnsmasq-dns-5c9d85d47c-ts7lc" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.535977 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48fbb037-6558-4de9-ad75-5e1e26ed6b45-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9d85d47c-ts7lc\" (UID: \"48fbb037-6558-4de9-ad75-5e1e26ed6b45\") " pod="openstack/dnsmasq-dns-5c9d85d47c-ts7lc" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.536052 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bb6a7d2-f5f5-4684-b085-66101954d8e9-combined-ca-bundle\") pod \"keystone-bootstrap-w9kg5\" (UID: \"7bb6a7d2-f5f5-4684-b085-66101954d8e9\") " pod="openstack/keystone-bootstrap-w9kg5" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.536087 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22lg5\" (UniqueName: \"kubernetes.io/projected/7bb6a7d2-f5f5-4684-b085-66101954d8e9-kube-api-access-22lg5\") pod \"keystone-bootstrap-w9kg5\" (UID: \"7bb6a7d2-f5f5-4684-b085-66101954d8e9\") " pod="openstack/keystone-bootstrap-w9kg5" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.536141 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7bb6a7d2-f5f5-4684-b085-66101954d8e9-credential-keys\") pod \"keystone-bootstrap-w9kg5\" (UID: \"7bb6a7d2-f5f5-4684-b085-66101954d8e9\") " pod="openstack/keystone-bootstrap-w9kg5" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.545356 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-85xp6"] Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.546919 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-85xp6" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.551095 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-5vxbl" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.551277 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.569543 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-85xp6"] Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.640391 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bb6a7d2-f5f5-4684-b085-66101954d8e9-combined-ca-bundle\") pod \"keystone-bootstrap-w9kg5\" (UID: \"7bb6a7d2-f5f5-4684-b085-66101954d8e9\") " pod="openstack/keystone-bootstrap-w9kg5" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.640472 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22lg5\" (UniqueName: \"kubernetes.io/projected/7bb6a7d2-f5f5-4684-b085-66101954d8e9-kube-api-access-22lg5\") pod \"keystone-bootstrap-w9kg5\" (UID: \"7bb6a7d2-f5f5-4684-b085-66101954d8e9\") " pod="openstack/keystone-bootstrap-w9kg5" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.640566 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7bb6a7d2-f5f5-4684-b085-66101954d8e9-credential-keys\") pod \"keystone-bootstrap-w9kg5\" (UID: \"7bb6a7d2-f5f5-4684-b085-66101954d8e9\") " pod="openstack/keystone-bootstrap-w9kg5" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.640644 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48fbb037-6558-4de9-ad75-5e1e26ed6b45-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9d85d47c-ts7lc\" 
(UID: \"48fbb037-6558-4de9-ad75-5e1e26ed6b45\") " pod="openstack/dnsmasq-dns-5c9d85d47c-ts7lc" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.640675 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48fbb037-6558-4de9-ad75-5e1e26ed6b45-dns-svc\") pod \"dnsmasq-dns-5c9d85d47c-ts7lc\" (UID: \"48fbb037-6558-4de9-ad75-5e1e26ed6b45\") " pod="openstack/dnsmasq-dns-5c9d85d47c-ts7lc" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.640714 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7bb6a7d2-f5f5-4684-b085-66101954d8e9-fernet-keys\") pod \"keystone-bootstrap-w9kg5\" (UID: \"7bb6a7d2-f5f5-4684-b085-66101954d8e9\") " pod="openstack/keystone-bootstrap-w9kg5" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.640753 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48fbb037-6558-4de9-ad75-5e1e26ed6b45-config\") pod \"dnsmasq-dns-5c9d85d47c-ts7lc\" (UID: \"48fbb037-6558-4de9-ad75-5e1e26ed6b45\") " pod="openstack/dnsmasq-dns-5c9d85d47c-ts7lc" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.640818 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bb6a7d2-f5f5-4684-b085-66101954d8e9-config-data\") pod \"keystone-bootstrap-w9kg5\" (UID: \"7bb6a7d2-f5f5-4684-b085-66101954d8e9\") " pod="openstack/keystone-bootstrap-w9kg5" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.640856 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blpm6\" (UniqueName: \"kubernetes.io/projected/59666d8f-35e8-4c8a-887f-0c23881547ec-kube-api-access-blpm6\") pod \"heat-db-sync-85xp6\" (UID: \"59666d8f-35e8-4c8a-887f-0c23881547ec\") " pod="openstack/heat-db-sync-85xp6" Nov 29 
08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.640895 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bb6a7d2-f5f5-4684-b085-66101954d8e9-scripts\") pod \"keystone-bootstrap-w9kg5\" (UID: \"7bb6a7d2-f5f5-4684-b085-66101954d8e9\") " pod="openstack/keystone-bootstrap-w9kg5" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.640935 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59666d8f-35e8-4c8a-887f-0c23881547ec-combined-ca-bundle\") pod \"heat-db-sync-85xp6\" (UID: \"59666d8f-35e8-4c8a-887f-0c23881547ec\") " pod="openstack/heat-db-sync-85xp6" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.640959 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mf6mw\" (UniqueName: \"kubernetes.io/projected/48fbb037-6558-4de9-ad75-5e1e26ed6b45-kube-api-access-mf6mw\") pod \"dnsmasq-dns-5c9d85d47c-ts7lc\" (UID: \"48fbb037-6558-4de9-ad75-5e1e26ed6b45\") " pod="openstack/dnsmasq-dns-5c9d85d47c-ts7lc" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.641002 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48fbb037-6558-4de9-ad75-5e1e26ed6b45-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9d85d47c-ts7lc\" (UID: \"48fbb037-6558-4de9-ad75-5e1e26ed6b45\") " pod="openstack/dnsmasq-dns-5c9d85d47c-ts7lc" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.641027 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59666d8f-35e8-4c8a-887f-0c23881547ec-config-data\") pod \"heat-db-sync-85xp6\" (UID: \"59666d8f-35e8-4c8a-887f-0c23881547ec\") " pod="openstack/heat-db-sync-85xp6" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.642775 4795 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48fbb037-6558-4de9-ad75-5e1e26ed6b45-config\") pod \"dnsmasq-dns-5c9d85d47c-ts7lc\" (UID: \"48fbb037-6558-4de9-ad75-5e1e26ed6b45\") " pod="openstack/dnsmasq-dns-5c9d85d47c-ts7lc" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.644351 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48fbb037-6558-4de9-ad75-5e1e26ed6b45-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9d85d47c-ts7lc\" (UID: \"48fbb037-6558-4de9-ad75-5e1e26ed6b45\") " pod="openstack/dnsmasq-dns-5c9d85d47c-ts7lc" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.647514 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48fbb037-6558-4de9-ad75-5e1e26ed6b45-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9d85d47c-ts7lc\" (UID: \"48fbb037-6558-4de9-ad75-5e1e26ed6b45\") " pod="openstack/dnsmasq-dns-5c9d85d47c-ts7lc" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.648923 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48fbb037-6558-4de9-ad75-5e1e26ed6b45-dns-svc\") pod \"dnsmasq-dns-5c9d85d47c-ts7lc\" (UID: \"48fbb037-6558-4de9-ad75-5e1e26ed6b45\") " pod="openstack/dnsmasq-dns-5c9d85d47c-ts7lc" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.662847 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bb6a7d2-f5f5-4684-b085-66101954d8e9-config-data\") pod \"keystone-bootstrap-w9kg5\" (UID: \"7bb6a7d2-f5f5-4684-b085-66101954d8e9\") " pod="openstack/keystone-bootstrap-w9kg5" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.663419 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/7bb6a7d2-f5f5-4684-b085-66101954d8e9-credential-keys\") pod \"keystone-bootstrap-w9kg5\" (UID: \"7bb6a7d2-f5f5-4684-b085-66101954d8e9\") " pod="openstack/keystone-bootstrap-w9kg5" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.666492 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bb6a7d2-f5f5-4684-b085-66101954d8e9-scripts\") pod \"keystone-bootstrap-w9kg5\" (UID: \"7bb6a7d2-f5f5-4684-b085-66101954d8e9\") " pod="openstack/keystone-bootstrap-w9kg5" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.669354 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bb6a7d2-f5f5-4684-b085-66101954d8e9-combined-ca-bundle\") pod \"keystone-bootstrap-w9kg5\" (UID: \"7bb6a7d2-f5f5-4684-b085-66101954d8e9\") " pod="openstack/keystone-bootstrap-w9kg5" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.678316 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7bb6a7d2-f5f5-4684-b085-66101954d8e9-fernet-keys\") pod \"keystone-bootstrap-w9kg5\" (UID: \"7bb6a7d2-f5f5-4684-b085-66101954d8e9\") " pod="openstack/keystone-bootstrap-w9kg5" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.748569 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blpm6\" (UniqueName: \"kubernetes.io/projected/59666d8f-35e8-4c8a-887f-0c23881547ec-kube-api-access-blpm6\") pod \"heat-db-sync-85xp6\" (UID: \"59666d8f-35e8-4c8a-887f-0c23881547ec\") " pod="openstack/heat-db-sync-85xp6" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.749078 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59666d8f-35e8-4c8a-887f-0c23881547ec-combined-ca-bundle\") pod \"heat-db-sync-85xp6\" (UID: 
\"59666d8f-35e8-4c8a-887f-0c23881547ec\") " pod="openstack/heat-db-sync-85xp6" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.749114 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59666d8f-35e8-4c8a-887f-0c23881547ec-config-data\") pod \"heat-db-sync-85xp6\" (UID: \"59666d8f-35e8-4c8a-887f-0c23881547ec\") " pod="openstack/heat-db-sync-85xp6" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.754040 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59666d8f-35e8-4c8a-887f-0c23881547ec-combined-ca-bundle\") pod \"heat-db-sync-85xp6\" (UID: \"59666d8f-35e8-4c8a-887f-0c23881547ec\") " pod="openstack/heat-db-sync-85xp6" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.757267 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59666d8f-35e8-4c8a-887f-0c23881547ec-config-data\") pod \"heat-db-sync-85xp6\" (UID: \"59666d8f-35e8-4c8a-887f-0c23881547ec\") " pod="openstack/heat-db-sync-85xp6" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.776713 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf6mw\" (UniqueName: \"kubernetes.io/projected/48fbb037-6558-4de9-ad75-5e1e26ed6b45-kube-api-access-mf6mw\") pod \"dnsmasq-dns-5c9d85d47c-ts7lc\" (UID: \"48fbb037-6558-4de9-ad75-5e1e26ed6b45\") " pod="openstack/dnsmasq-dns-5c9d85d47c-ts7lc" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.811473 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blpm6\" (UniqueName: \"kubernetes.io/projected/59666d8f-35e8-4c8a-887f-0c23881547ec-kube-api-access-blpm6\") pod \"heat-db-sync-85xp6\" (UID: \"59666d8f-35e8-4c8a-887f-0c23881547ec\") " pod="openstack/heat-db-sync-85xp6" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.812211 4795 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22lg5\" (UniqueName: \"kubernetes.io/projected/7bb6a7d2-f5f5-4684-b085-66101954d8e9-kube-api-access-22lg5\") pod \"keystone-bootstrap-w9kg5\" (UID: \"7bb6a7d2-f5f5-4684-b085-66101954d8e9\") " pod="openstack/keystone-bootstrap-w9kg5" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.856760 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-z6klw"] Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.873644 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-85xp6" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.876560 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-z6klw" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.888360 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.889264 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-ktbwf" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.889389 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.903240 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-z6klw"] Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.964732 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnrzd\" (UniqueName: \"kubernetes.io/projected/c0c3141e-5bf7-49c4-80fd-2f0d0b361491-kube-api-access-pnrzd\") pod \"neutron-db-sync-z6klw\" (UID: \"c0c3141e-5bf7-49c4-80fd-2f0d0b361491\") " pod="openstack/neutron-db-sync-z6klw" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.964872 4795 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c0c3141e-5bf7-49c4-80fd-2f0d0b361491-config\") pod \"neutron-db-sync-z6klw\" (UID: \"c0c3141e-5bf7-49c4-80fd-2f0d0b361491\") " pod="openstack/neutron-db-sync-z6klw" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.964913 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0c3141e-5bf7-49c4-80fd-2f0d0b361491-combined-ca-bundle\") pod \"neutron-db-sync-z6klw\" (UID: \"c0c3141e-5bf7-49c4-80fd-2f0d0b361491\") " pod="openstack/neutron-db-sync-z6klw" Nov 29 08:04:27 crc kubenswrapper[4795]: I1129 08:04:27.989919 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-ts7lc" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.071387 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c0c3141e-5bf7-49c4-80fd-2f0d0b361491-config\") pod \"neutron-db-sync-z6klw\" (UID: \"c0c3141e-5bf7-49c4-80fd-2f0d0b361491\") " pod="openstack/neutron-db-sync-z6klw" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.071905 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0c3141e-5bf7-49c4-80fd-2f0d0b361491-combined-ca-bundle\") pod \"neutron-db-sync-z6klw\" (UID: \"c0c3141e-5bf7-49c4-80fd-2f0d0b361491\") " pod="openstack/neutron-db-sync-z6klw" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.072055 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnrzd\" (UniqueName: \"kubernetes.io/projected/c0c3141e-5bf7-49c4-80fd-2f0d0b361491-kube-api-access-pnrzd\") pod \"neutron-db-sync-z6klw\" (UID: \"c0c3141e-5bf7-49c4-80fd-2f0d0b361491\") " pod="openstack/neutron-db-sync-z6klw" Nov 29 
08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.094124 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-w9kg5" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.096548 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c0c3141e-5bf7-49c4-80fd-2f0d0b361491-config\") pod \"neutron-db-sync-z6klw\" (UID: \"c0c3141e-5bf7-49c4-80fd-2f0d0b361491\") " pod="openstack/neutron-db-sync-z6klw" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.097682 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0c3141e-5bf7-49c4-80fd-2f0d0b361491-combined-ca-bundle\") pod \"neutron-db-sync-z6klw\" (UID: \"c0c3141e-5bf7-49c4-80fd-2f0d0b361491\") " pod="openstack/neutron-db-sync-z6klw" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.157415 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-tvhw5"] Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.168772 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-tvhw5" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.169013 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnrzd\" (UniqueName: \"kubernetes.io/projected/c0c3141e-5bf7-49c4-80fd-2f0d0b361491-kube-api-access-pnrzd\") pod \"neutron-db-sync-z6klw\" (UID: \"c0c3141e-5bf7-49c4-80fd-2f0d0b361491\") " pod="openstack/neutron-db-sync-z6klw" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.177458 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-6ql88" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.177743 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.178018 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.231449 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-tvhw5"] Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.282790 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1082de8f-47bf-41ac-875f-8d7db0baab7b-etc-machine-id\") pod \"cinder-db-sync-tvhw5\" (UID: \"1082de8f-47bf-41ac-875f-8d7db0baab7b\") " pod="openstack/cinder-db-sync-tvhw5" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.283266 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1082de8f-47bf-41ac-875f-8d7db0baab7b-config-data\") pod \"cinder-db-sync-tvhw5\" (UID: \"1082de8f-47bf-41ac-875f-8d7db0baab7b\") " pod="openstack/cinder-db-sync-tvhw5" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.283291 4795 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9k97\" (UniqueName: \"kubernetes.io/projected/1082de8f-47bf-41ac-875f-8d7db0baab7b-kube-api-access-c9k97\") pod \"cinder-db-sync-tvhw5\" (UID: \"1082de8f-47bf-41ac-875f-8d7db0baab7b\") " pod="openstack/cinder-db-sync-tvhw5" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.283336 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1082de8f-47bf-41ac-875f-8d7db0baab7b-scripts\") pod \"cinder-db-sync-tvhw5\" (UID: \"1082de8f-47bf-41ac-875f-8d7db0baab7b\") " pod="openstack/cinder-db-sync-tvhw5" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.283362 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1082de8f-47bf-41ac-875f-8d7db0baab7b-combined-ca-bundle\") pod \"cinder-db-sync-tvhw5\" (UID: \"1082de8f-47bf-41ac-875f-8d7db0baab7b\") " pod="openstack/cinder-db-sync-tvhw5" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.283454 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1082de8f-47bf-41ac-875f-8d7db0baab7b-db-sync-config-data\") pod \"cinder-db-sync-tvhw5\" (UID: \"1082de8f-47bf-41ac-875f-8d7db0baab7b\") " pod="openstack/cinder-db-sync-tvhw5" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.346870 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-fdl5k"] Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.348376 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-fdl5k" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.358972 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-zw9w5" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.373479 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.387874 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1082de8f-47bf-41ac-875f-8d7db0baab7b-db-sync-config-data\") pod \"cinder-db-sync-tvhw5\" (UID: \"1082de8f-47bf-41ac-875f-8d7db0baab7b\") " pod="openstack/cinder-db-sync-tvhw5" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.388053 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1082de8f-47bf-41ac-875f-8d7db0baab7b-etc-machine-id\") pod \"cinder-db-sync-tvhw5\" (UID: \"1082de8f-47bf-41ac-875f-8d7db0baab7b\") " pod="openstack/cinder-db-sync-tvhw5" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.388162 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1082de8f-47bf-41ac-875f-8d7db0baab7b-config-data\") pod \"cinder-db-sync-tvhw5\" (UID: \"1082de8f-47bf-41ac-875f-8d7db0baab7b\") " pod="openstack/cinder-db-sync-tvhw5" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.388190 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9k97\" (UniqueName: \"kubernetes.io/projected/1082de8f-47bf-41ac-875f-8d7db0baab7b-kube-api-access-c9k97\") pod \"cinder-db-sync-tvhw5\" (UID: \"1082de8f-47bf-41ac-875f-8d7db0baab7b\") " pod="openstack/cinder-db-sync-tvhw5" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.388234 4795 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1082de8f-47bf-41ac-875f-8d7db0baab7b-scripts\") pod \"cinder-db-sync-tvhw5\" (UID: \"1082de8f-47bf-41ac-875f-8d7db0baab7b\") " pod="openstack/cinder-db-sync-tvhw5" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.388271 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1082de8f-47bf-41ac-875f-8d7db0baab7b-combined-ca-bundle\") pod \"cinder-db-sync-tvhw5\" (UID: \"1082de8f-47bf-41ac-875f-8d7db0baab7b\") " pod="openstack/cinder-db-sync-tvhw5" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.392121 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1082de8f-47bf-41ac-875f-8d7db0baab7b-etc-machine-id\") pod \"cinder-db-sync-tvhw5\" (UID: \"1082de8f-47bf-41ac-875f-8d7db0baab7b\") " pod="openstack/cinder-db-sync-tvhw5" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.406168 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1082de8f-47bf-41ac-875f-8d7db0baab7b-scripts\") pod \"cinder-db-sync-tvhw5\" (UID: \"1082de8f-47bf-41ac-875f-8d7db0baab7b\") " pod="openstack/cinder-db-sync-tvhw5" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.407694 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1082de8f-47bf-41ac-875f-8d7db0baab7b-db-sync-config-data\") pod \"cinder-db-sync-tvhw5\" (UID: \"1082de8f-47bf-41ac-875f-8d7db0baab7b\") " pod="openstack/cinder-db-sync-tvhw5" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.410792 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1082de8f-47bf-41ac-875f-8d7db0baab7b-config-data\") pod 
\"cinder-db-sync-tvhw5\" (UID: \"1082de8f-47bf-41ac-875f-8d7db0baab7b\") " pod="openstack/cinder-db-sync-tvhw5" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.426498 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1082de8f-47bf-41ac-875f-8d7db0baab7b-combined-ca-bundle\") pod \"cinder-db-sync-tvhw5\" (UID: \"1082de8f-47bf-41ac-875f-8d7db0baab7b\") " pod="openstack/cinder-db-sync-tvhw5" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.429743 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-fdl5k"] Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.433755 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9k97\" (UniqueName: \"kubernetes.io/projected/1082de8f-47bf-41ac-875f-8d7db0baab7b-kube-api-access-c9k97\") pod \"cinder-db-sync-tvhw5\" (UID: \"1082de8f-47bf-41ac-875f-8d7db0baab7b\") " pod="openstack/cinder-db-sync-tvhw5" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.440857 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-z6klw" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.490758 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6dff0cb-a174-4227-ad82-21a12aee68f5-combined-ca-bundle\") pod \"barbican-db-sync-fdl5k\" (UID: \"f6dff0cb-a174-4227-ad82-21a12aee68f5\") " pod="openstack/barbican-db-sync-fdl5k" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.490851 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpfp8\" (UniqueName: \"kubernetes.io/projected/f6dff0cb-a174-4227-ad82-21a12aee68f5-kube-api-access-xpfp8\") pod \"barbican-db-sync-fdl5k\" (UID: \"f6dff0cb-a174-4227-ad82-21a12aee68f5\") " pod="openstack/barbican-db-sync-fdl5k" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.490954 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f6dff0cb-a174-4227-ad82-21a12aee68f5-db-sync-config-data\") pod \"barbican-db-sync-fdl5k\" (UID: \"f6dff0cb-a174-4227-ad82-21a12aee68f5\") " pod="openstack/barbican-db-sync-fdl5k" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.545994 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-ts7lc"] Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.557793 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-pwdw9"] Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.559521 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-pwdw9" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.570715 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-tvhw5" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.587732 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-pwdw9"] Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.599845 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6xh8\" (UniqueName: \"kubernetes.io/projected/81ba1e26-17fa-4357-86d9-0e51b2aa3814-kube-api-access-p6xh8\") pod \"dnsmasq-dns-6ffb94d8ff-pwdw9\" (UID: \"81ba1e26-17fa-4357-86d9-0e51b2aa3814\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-pwdw9" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.599902 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81ba1e26-17fa-4357-86d9-0e51b2aa3814-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffb94d8ff-pwdw9\" (UID: \"81ba1e26-17fa-4357-86d9-0e51b2aa3814\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-pwdw9" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.599950 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81ba1e26-17fa-4357-86d9-0e51b2aa3814-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffb94d8ff-pwdw9\" (UID: \"81ba1e26-17fa-4357-86d9-0e51b2aa3814\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-pwdw9" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.599997 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f6dff0cb-a174-4227-ad82-21a12aee68f5-db-sync-config-data\") pod \"barbican-db-sync-fdl5k\" (UID: \"f6dff0cb-a174-4227-ad82-21a12aee68f5\") " pod="openstack/barbican-db-sync-fdl5k" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.600018 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81ba1e26-17fa-4357-86d9-0e51b2aa3814-dns-svc\") pod \"dnsmasq-dns-6ffb94d8ff-pwdw9\" (UID: \"81ba1e26-17fa-4357-86d9-0e51b2aa3814\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-pwdw9" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.600042 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81ba1e26-17fa-4357-86d9-0e51b2aa3814-config\") pod \"dnsmasq-dns-6ffb94d8ff-pwdw9\" (UID: \"81ba1e26-17fa-4357-86d9-0e51b2aa3814\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-pwdw9" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.600138 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6dff0cb-a174-4227-ad82-21a12aee68f5-combined-ca-bundle\") pod \"barbican-db-sync-fdl5k\" (UID: \"f6dff0cb-a174-4227-ad82-21a12aee68f5\") " pod="openstack/barbican-db-sync-fdl5k" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.600189 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpfp8\" (UniqueName: \"kubernetes.io/projected/f6dff0cb-a174-4227-ad82-21a12aee68f5-kube-api-access-xpfp8\") pod \"barbican-db-sync-fdl5k\" (UID: \"f6dff0cb-a174-4227-ad82-21a12aee68f5\") " pod="openstack/barbican-db-sync-fdl5k" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.626493 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6dff0cb-a174-4227-ad82-21a12aee68f5-combined-ca-bundle\") pod \"barbican-db-sync-fdl5k\" (UID: \"f6dff0cb-a174-4227-ad82-21a12aee68f5\") " pod="openstack/barbican-db-sync-fdl5k" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.633769 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/f6dff0cb-a174-4227-ad82-21a12aee68f5-db-sync-config-data\") pod \"barbican-db-sync-fdl5k\" (UID: \"f6dff0cb-a174-4227-ad82-21a12aee68f5\") " pod="openstack/barbican-db-sync-fdl5k" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.650538 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpfp8\" (UniqueName: \"kubernetes.io/projected/f6dff0cb-a174-4227-ad82-21a12aee68f5-kube-api-access-xpfp8\") pod \"barbican-db-sync-fdl5k\" (UID: \"f6dff0cb-a174-4227-ad82-21a12aee68f5\") " pod="openstack/barbican-db-sync-fdl5k" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.754103 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81ba1e26-17fa-4357-86d9-0e51b2aa3814-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffb94d8ff-pwdw9\" (UID: \"81ba1e26-17fa-4357-86d9-0e51b2aa3814\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-pwdw9" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.754249 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81ba1e26-17fa-4357-86d9-0e51b2aa3814-dns-svc\") pod \"dnsmasq-dns-6ffb94d8ff-pwdw9\" (UID: \"81ba1e26-17fa-4357-86d9-0e51b2aa3814\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-pwdw9" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.754310 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81ba1e26-17fa-4357-86d9-0e51b2aa3814-config\") pod \"dnsmasq-dns-6ffb94d8ff-pwdw9\" (UID: \"81ba1e26-17fa-4357-86d9-0e51b2aa3814\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-pwdw9" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.754562 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6xh8\" (UniqueName: \"kubernetes.io/projected/81ba1e26-17fa-4357-86d9-0e51b2aa3814-kube-api-access-p6xh8\") pod 
\"dnsmasq-dns-6ffb94d8ff-pwdw9\" (UID: \"81ba1e26-17fa-4357-86d9-0e51b2aa3814\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-pwdw9" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.754636 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81ba1e26-17fa-4357-86d9-0e51b2aa3814-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffb94d8ff-pwdw9\" (UID: \"81ba1e26-17fa-4357-86d9-0e51b2aa3814\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-pwdw9" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.755460 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-fdl5k" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.755984 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81ba1e26-17fa-4357-86d9-0e51b2aa3814-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffb94d8ff-pwdw9\" (UID: \"81ba1e26-17fa-4357-86d9-0e51b2aa3814\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-pwdw9" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.756629 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81ba1e26-17fa-4357-86d9-0e51b2aa3814-dns-svc\") pod \"dnsmasq-dns-6ffb94d8ff-pwdw9\" (UID: \"81ba1e26-17fa-4357-86d9-0e51b2aa3814\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-pwdw9" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.786829 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81ba1e26-17fa-4357-86d9-0e51b2aa3814-config\") pod \"dnsmasq-dns-6ffb94d8ff-pwdw9\" (UID: \"81ba1e26-17fa-4357-86d9-0e51b2aa3814\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-pwdw9" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.787850 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/81ba1e26-17fa-4357-86d9-0e51b2aa3814-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffb94d8ff-pwdw9\" (UID: \"81ba1e26-17fa-4357-86d9-0e51b2aa3814\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-pwdw9" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.812289 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6xh8\" (UniqueName: \"kubernetes.io/projected/81ba1e26-17fa-4357-86d9-0e51b2aa3814-kube-api-access-p6xh8\") pod \"dnsmasq-dns-6ffb94d8ff-pwdw9\" (UID: \"81ba1e26-17fa-4357-86d9-0e51b2aa3814\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-pwdw9" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.838671 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-jf2ds"] Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.850089 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-jf2ds" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.863502 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.863906 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-wxscv" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.875191 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.909921 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-jf2ds"] Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.939083 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-pwdw9" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.964284 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20682aa9-7d99-41f5-9214-2c08cb1533ec-combined-ca-bundle\") pod \"placement-db-sync-jf2ds\" (UID: \"20682aa9-7d99-41f5-9214-2c08cb1533ec\") " pod="openstack/placement-db-sync-jf2ds" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.964418 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20682aa9-7d99-41f5-9214-2c08cb1533ec-scripts\") pod \"placement-db-sync-jf2ds\" (UID: \"20682aa9-7d99-41f5-9214-2c08cb1533ec\") " pod="openstack/placement-db-sync-jf2ds" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.964481 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20682aa9-7d99-41f5-9214-2c08cb1533ec-config-data\") pod \"placement-db-sync-jf2ds\" (UID: \"20682aa9-7d99-41f5-9214-2c08cb1533ec\") " pod="openstack/placement-db-sync-jf2ds" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.964510 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20682aa9-7d99-41f5-9214-2c08cb1533ec-logs\") pod \"placement-db-sync-jf2ds\" (UID: \"20682aa9-7d99-41f5-9214-2c08cb1533ec\") " pod="openstack/placement-db-sync-jf2ds" Nov 29 08:04:28 crc kubenswrapper[4795]: I1129 08:04:28.964544 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbx69\" (UniqueName: \"kubernetes.io/projected/20682aa9-7d99-41f5-9214-2c08cb1533ec-kube-api-access-xbx69\") pod \"placement-db-sync-jf2ds\" (UID: \"20682aa9-7d99-41f5-9214-2c08cb1533ec\") " 
pod="openstack/placement-db-sync-jf2ds" Nov 29 08:04:29 crc kubenswrapper[4795]: I1129 08:04:29.066887 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20682aa9-7d99-41f5-9214-2c08cb1533ec-logs\") pod \"placement-db-sync-jf2ds\" (UID: \"20682aa9-7d99-41f5-9214-2c08cb1533ec\") " pod="openstack/placement-db-sync-jf2ds" Nov 29 08:04:29 crc kubenswrapper[4795]: I1129 08:04:29.066941 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbx69\" (UniqueName: \"kubernetes.io/projected/20682aa9-7d99-41f5-9214-2c08cb1533ec-kube-api-access-xbx69\") pod \"placement-db-sync-jf2ds\" (UID: \"20682aa9-7d99-41f5-9214-2c08cb1533ec\") " pod="openstack/placement-db-sync-jf2ds" Nov 29 08:04:29 crc kubenswrapper[4795]: I1129 08:04:29.067076 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20682aa9-7d99-41f5-9214-2c08cb1533ec-combined-ca-bundle\") pod \"placement-db-sync-jf2ds\" (UID: \"20682aa9-7d99-41f5-9214-2c08cb1533ec\") " pod="openstack/placement-db-sync-jf2ds" Nov 29 08:04:29 crc kubenswrapper[4795]: I1129 08:04:29.067504 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20682aa9-7d99-41f5-9214-2c08cb1533ec-logs\") pod \"placement-db-sync-jf2ds\" (UID: \"20682aa9-7d99-41f5-9214-2c08cb1533ec\") " pod="openstack/placement-db-sync-jf2ds" Nov 29 08:04:29 crc kubenswrapper[4795]: I1129 08:04:29.068515 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20682aa9-7d99-41f5-9214-2c08cb1533ec-scripts\") pod \"placement-db-sync-jf2ds\" (UID: \"20682aa9-7d99-41f5-9214-2c08cb1533ec\") " pod="openstack/placement-db-sync-jf2ds" Nov 29 08:04:29 crc kubenswrapper[4795]: I1129 08:04:29.068921 4795 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20682aa9-7d99-41f5-9214-2c08cb1533ec-config-data\") pod \"placement-db-sync-jf2ds\" (UID: \"20682aa9-7d99-41f5-9214-2c08cb1533ec\") " pod="openstack/placement-db-sync-jf2ds" Nov 29 08:04:29 crc kubenswrapper[4795]: I1129 08:04:29.075097 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20682aa9-7d99-41f5-9214-2c08cb1533ec-combined-ca-bundle\") pod \"placement-db-sync-jf2ds\" (UID: \"20682aa9-7d99-41f5-9214-2c08cb1533ec\") " pod="openstack/placement-db-sync-jf2ds" Nov 29 08:04:29 crc kubenswrapper[4795]: I1129 08:04:29.075456 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20682aa9-7d99-41f5-9214-2c08cb1533ec-scripts\") pod \"placement-db-sync-jf2ds\" (UID: \"20682aa9-7d99-41f5-9214-2c08cb1533ec\") " pod="openstack/placement-db-sync-jf2ds" Nov 29 08:04:29 crc kubenswrapper[4795]: I1129 08:04:29.113323 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20682aa9-7d99-41f5-9214-2c08cb1533ec-config-data\") pod \"placement-db-sync-jf2ds\" (UID: \"20682aa9-7d99-41f5-9214-2c08cb1533ec\") " pod="openstack/placement-db-sync-jf2ds" Nov 29 08:04:29 crc kubenswrapper[4795]: I1129 08:04:29.180539 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbx69\" (UniqueName: \"kubernetes.io/projected/20682aa9-7d99-41f5-9214-2c08cb1533ec-kube-api-access-xbx69\") pod \"placement-db-sync-jf2ds\" (UID: \"20682aa9-7d99-41f5-9214-2c08cb1533ec\") " pod="openstack/placement-db-sync-jf2ds" Nov 29 08:04:29 crc kubenswrapper[4795]: I1129 08:04:29.286911 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-85xp6"] Nov 29 08:04:29 crc kubenswrapper[4795]: I1129 08:04:29.392063 4795 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-jf2ds" Nov 29 08:04:29 crc kubenswrapper[4795]: I1129 08:04:29.834540 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-z6klw"] Nov 29 08:04:29 crc kubenswrapper[4795]: I1129 08:04:29.872262 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-85xp6" event={"ID":"59666d8f-35e8-4c8a-887f-0c23881547ec","Type":"ContainerStarted","Data":"16d0fc1fac5e9bd5157fecbc0601a219ff223bfbffb746da0a07b3b3c42a54a7"} Nov 29 08:04:29 crc kubenswrapper[4795]: I1129 08:04:29.894180 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-ts7lc"] Nov 29 08:04:29 crc kubenswrapper[4795]: W1129 08:04:29.904901 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48fbb037_6558_4de9_ad75_5e1e26ed6b45.slice/crio-769c631f2fe6145b335634f7c855ac217d1741ca089f7ebe1c8ba77e63c44a64 WatchSource:0}: Error finding container 769c631f2fe6145b335634f7c855ac217d1741ca089f7ebe1c8ba77e63c44a64: Status 404 returned error can't find the container with id 769c631f2fe6145b335634f7c855ac217d1741ca089f7ebe1c8ba77e63c44a64 Nov 29 08:04:29 crc kubenswrapper[4795]: I1129 08:04:29.912908 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-w9kg5"] Nov 29 08:04:30 crc kubenswrapper[4795]: I1129 08:04:29.999809 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-tvhw5"] Nov 29 08:04:30 crc kubenswrapper[4795]: I1129 08:04:30.013082 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-pwdw9"] Nov 29 08:04:30 crc kubenswrapper[4795]: I1129 08:04:30.037540 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-fdl5k"] Nov 29 08:04:30 crc kubenswrapper[4795]: I1129 08:04:30.191933 4795 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/placement-db-sync-jf2ds"] Nov 29 08:04:30 crc kubenswrapper[4795]: W1129 08:04:30.241747 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20682aa9_7d99_41f5_9214_2c08cb1533ec.slice/crio-39b710d96a413cc0f88b94f614e724fb655de78f329b1b0b9cb1b51efc6acca4 WatchSource:0}: Error finding container 39b710d96a413cc0f88b94f614e724fb655de78f329b1b0b9cb1b51efc6acca4: Status 404 returned error can't find the container with id 39b710d96a413cc0f88b94f614e724fb655de78f329b1b0b9cb1b51efc6acca4 Nov 29 08:04:30 crc kubenswrapper[4795]: I1129 08:04:30.379667 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:04:30 crc kubenswrapper[4795]: I1129 08:04:30.382797 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:04:30 crc kubenswrapper[4795]: I1129 08:04:30.386724 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 29 08:04:30 crc kubenswrapper[4795]: I1129 08:04:30.386942 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 29 08:04:30 crc kubenswrapper[4795]: I1129 08:04:30.396554 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:04:30 crc kubenswrapper[4795]: I1129 08:04:30.501343 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6dbced2-2ca2-4189-aad3-7a872ab6209c-run-httpd\") pod \"ceilometer-0\" (UID: \"c6dbced2-2ca2-4189-aad3-7a872ab6209c\") " pod="openstack/ceilometer-0" Nov 29 08:04:30 crc kubenswrapper[4795]: I1129 08:04:30.501887 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/c6dbced2-2ca2-4189-aad3-7a872ab6209c-log-httpd\") pod \"ceilometer-0\" (UID: \"c6dbced2-2ca2-4189-aad3-7a872ab6209c\") " pod="openstack/ceilometer-0" Nov 29 08:04:30 crc kubenswrapper[4795]: I1129 08:04:30.502055 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmxvx\" (UniqueName: \"kubernetes.io/projected/c6dbced2-2ca2-4189-aad3-7a872ab6209c-kube-api-access-hmxvx\") pod \"ceilometer-0\" (UID: \"c6dbced2-2ca2-4189-aad3-7a872ab6209c\") " pod="openstack/ceilometer-0" Nov 29 08:04:30 crc kubenswrapper[4795]: I1129 08:04:30.502101 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6dbced2-2ca2-4189-aad3-7a872ab6209c-scripts\") pod \"ceilometer-0\" (UID: \"c6dbced2-2ca2-4189-aad3-7a872ab6209c\") " pod="openstack/ceilometer-0" Nov 29 08:04:30 crc kubenswrapper[4795]: I1129 08:04:30.502172 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6dbced2-2ca2-4189-aad3-7a872ab6209c-config-data\") pod \"ceilometer-0\" (UID: \"c6dbced2-2ca2-4189-aad3-7a872ab6209c\") " pod="openstack/ceilometer-0" Nov 29 08:04:30 crc kubenswrapper[4795]: I1129 08:04:30.502213 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6dbced2-2ca2-4189-aad3-7a872ab6209c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c6dbced2-2ca2-4189-aad3-7a872ab6209c\") " pod="openstack/ceilometer-0" Nov 29 08:04:30 crc kubenswrapper[4795]: I1129 08:04:30.502250 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6dbced2-2ca2-4189-aad3-7a872ab6209c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"c6dbced2-2ca2-4189-aad3-7a872ab6209c\") " pod="openstack/ceilometer-0" Nov 29 08:04:30 crc kubenswrapper[4795]: I1129 08:04:30.604142 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6dbced2-2ca2-4189-aad3-7a872ab6209c-scripts\") pod \"ceilometer-0\" (UID: \"c6dbced2-2ca2-4189-aad3-7a872ab6209c\") " pod="openstack/ceilometer-0" Nov 29 08:04:30 crc kubenswrapper[4795]: I1129 08:04:30.604248 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6dbced2-2ca2-4189-aad3-7a872ab6209c-config-data\") pod \"ceilometer-0\" (UID: \"c6dbced2-2ca2-4189-aad3-7a872ab6209c\") " pod="openstack/ceilometer-0" Nov 29 08:04:30 crc kubenswrapper[4795]: I1129 08:04:30.604282 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6dbced2-2ca2-4189-aad3-7a872ab6209c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c6dbced2-2ca2-4189-aad3-7a872ab6209c\") " pod="openstack/ceilometer-0" Nov 29 08:04:30 crc kubenswrapper[4795]: I1129 08:04:30.604321 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6dbced2-2ca2-4189-aad3-7a872ab6209c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c6dbced2-2ca2-4189-aad3-7a872ab6209c\") " pod="openstack/ceilometer-0" Nov 29 08:04:30 crc kubenswrapper[4795]: I1129 08:04:30.604366 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6dbced2-2ca2-4189-aad3-7a872ab6209c-run-httpd\") pod \"ceilometer-0\" (UID: \"c6dbced2-2ca2-4189-aad3-7a872ab6209c\") " pod="openstack/ceilometer-0" Nov 29 08:04:30 crc kubenswrapper[4795]: I1129 08:04:30.604387 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/c6dbced2-2ca2-4189-aad3-7a872ab6209c-log-httpd\") pod \"ceilometer-0\" (UID: \"c6dbced2-2ca2-4189-aad3-7a872ab6209c\") " pod="openstack/ceilometer-0" Nov 29 08:04:30 crc kubenswrapper[4795]: I1129 08:04:30.604467 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmxvx\" (UniqueName: \"kubernetes.io/projected/c6dbced2-2ca2-4189-aad3-7a872ab6209c-kube-api-access-hmxvx\") pod \"ceilometer-0\" (UID: \"c6dbced2-2ca2-4189-aad3-7a872ab6209c\") " pod="openstack/ceilometer-0" Nov 29 08:04:30 crc kubenswrapper[4795]: I1129 08:04:30.605207 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6dbced2-2ca2-4189-aad3-7a872ab6209c-log-httpd\") pod \"ceilometer-0\" (UID: \"c6dbced2-2ca2-4189-aad3-7a872ab6209c\") " pod="openstack/ceilometer-0" Nov 29 08:04:30 crc kubenswrapper[4795]: I1129 08:04:30.605354 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6dbced2-2ca2-4189-aad3-7a872ab6209c-run-httpd\") pod \"ceilometer-0\" (UID: \"c6dbced2-2ca2-4189-aad3-7a872ab6209c\") " pod="openstack/ceilometer-0" Nov 29 08:04:30 crc kubenswrapper[4795]: I1129 08:04:30.618113 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6dbced2-2ca2-4189-aad3-7a872ab6209c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c6dbced2-2ca2-4189-aad3-7a872ab6209c\") " pod="openstack/ceilometer-0" Nov 29 08:04:30 crc kubenswrapper[4795]: I1129 08:04:30.618573 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6dbced2-2ca2-4189-aad3-7a872ab6209c-scripts\") pod \"ceilometer-0\" (UID: \"c6dbced2-2ca2-4189-aad3-7a872ab6209c\") " pod="openstack/ceilometer-0" Nov 29 08:04:30 crc kubenswrapper[4795]: I1129 08:04:30.622213 4795 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6dbced2-2ca2-4189-aad3-7a872ab6209c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c6dbced2-2ca2-4189-aad3-7a872ab6209c\") " pod="openstack/ceilometer-0" Nov 29 08:04:30 crc kubenswrapper[4795]: I1129 08:04:30.625551 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6dbced2-2ca2-4189-aad3-7a872ab6209c-config-data\") pod \"ceilometer-0\" (UID: \"c6dbced2-2ca2-4189-aad3-7a872ab6209c\") " pod="openstack/ceilometer-0" Nov 29 08:04:30 crc kubenswrapper[4795]: I1129 08:04:30.633443 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmxvx\" (UniqueName: \"kubernetes.io/projected/c6dbced2-2ca2-4189-aad3-7a872ab6209c-kube-api-access-hmxvx\") pod \"ceilometer-0\" (UID: \"c6dbced2-2ca2-4189-aad3-7a872ab6209c\") " pod="openstack/ceilometer-0" Nov 29 08:04:30 crc kubenswrapper[4795]: I1129 08:04:30.729415 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:04:30 crc kubenswrapper[4795]: I1129 08:04:30.966839 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fdl5k" event={"ID":"f6dff0cb-a174-4227-ad82-21a12aee68f5","Type":"ContainerStarted","Data":"de90d2d3e8ffb89d827ea31e045c69e829a1d31b2de20afc7ea74625c7525347"} Nov 29 08:04:31 crc kubenswrapper[4795]: I1129 08:04:31.039994 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-z6klw" event={"ID":"c0c3141e-5bf7-49c4-80fd-2f0d0b361491","Type":"ContainerStarted","Data":"ab09232fa054ef8344cace40f56de85ab852491522d1f5e86f1c56d74df6ad31"} Nov 29 08:04:31 crc kubenswrapper[4795]: I1129 08:04:31.062810 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jf2ds" event={"ID":"20682aa9-7d99-41f5-9214-2c08cb1533ec","Type":"ContainerStarted","Data":"39b710d96a413cc0f88b94f614e724fb655de78f329b1b0b9cb1b51efc6acca4"} Nov 29 08:04:31 crc kubenswrapper[4795]: I1129 08:04:31.081952 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-pwdw9" event={"ID":"81ba1e26-17fa-4357-86d9-0e51b2aa3814","Type":"ContainerStarted","Data":"59611a1b99832e9d43b386c424c6bf8f80835da462d97aa2f658090ba0d6b07b"} Nov 29 08:04:31 crc kubenswrapper[4795]: I1129 08:04:31.093247 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tvhw5" event={"ID":"1082de8f-47bf-41ac-875f-8d7db0baab7b","Type":"ContainerStarted","Data":"f34dcfd68119562074854949abe1e1270f406458f3c63340e8cc226888b94fe6"} Nov 29 08:04:31 crc kubenswrapper[4795]: I1129 08:04:31.104086 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-w9kg5" event={"ID":"7bb6a7d2-f5f5-4684-b085-66101954d8e9","Type":"ContainerStarted","Data":"244c4eec23b056bb081bd6cac149215b9c5aca9c2e2cf23a812d19509aa875c3"} Nov 29 08:04:31 crc kubenswrapper[4795]: I1129 08:04:31.116818 4795 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9d85d47c-ts7lc" event={"ID":"48fbb037-6558-4de9-ad75-5e1e26ed6b45","Type":"ContainerStarted","Data":"769c631f2fe6145b335634f7c855ac217d1741ca089f7ebe1c8ba77e63c44a64"} Nov 29 08:04:31 crc kubenswrapper[4795]: I1129 08:04:31.655578 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:04:32 crc kubenswrapper[4795]: I1129 08:04:32.131029 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6dbced2-2ca2-4189-aad3-7a872ab6209c","Type":"ContainerStarted","Data":"5832325ba17fad5c07c62549a6e44f28a51be2c23eccb49326c41f4870fae845"} Nov 29 08:04:32 crc kubenswrapper[4795]: I1129 08:04:32.132677 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-z6klw" event={"ID":"c0c3141e-5bf7-49c4-80fd-2f0d0b361491","Type":"ContainerStarted","Data":"394aa5c2b264af45bd9815054a03cda1553ea3317bcd26facfc2c583ad2145f2"} Nov 29 08:04:32 crc kubenswrapper[4795]: I1129 08:04:32.134626 4795 generic.go:334] "Generic (PLEG): container finished" podID="81ba1e26-17fa-4357-86d9-0e51b2aa3814" containerID="2bc3e959bf62522b84702513595a712ff9d3697feac1fe8438fc2c1fc8f9ea4c" exitCode=0 Nov 29 08:04:32 crc kubenswrapper[4795]: I1129 08:04:32.134741 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-pwdw9" event={"ID":"81ba1e26-17fa-4357-86d9-0e51b2aa3814","Type":"ContainerDied","Data":"2bc3e959bf62522b84702513595a712ff9d3697feac1fe8438fc2c1fc8f9ea4c"} Nov 29 08:04:32 crc kubenswrapper[4795]: I1129 08:04:32.138805 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9301745a-4dd1-469a-a37f-465f65a063e4","Type":"ContainerStarted","Data":"5e131d7b22cb7786b65628b26a5d67002fac3d31f79e438d97efd6e775a7d66d"} Nov 29 08:04:32 crc kubenswrapper[4795]: I1129 08:04:32.141337 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-w9kg5" event={"ID":"7bb6a7d2-f5f5-4684-b085-66101954d8e9","Type":"ContainerStarted","Data":"a3eac150aab4f523499bb70dae16fc16fde8b4333aa5c440b79d3a4ff2e3923b"} Nov 29 08:04:32 crc kubenswrapper[4795]: I1129 08:04:32.168223 4795 generic.go:334] "Generic (PLEG): container finished" podID="48fbb037-6558-4de9-ad75-5e1e26ed6b45" containerID="fb487ba3931d5ce47947e0849c9b512a3040be3b7ecf2e03b038a4a486797612" exitCode=0 Nov 29 08:04:32 crc kubenswrapper[4795]: I1129 08:04:32.168275 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9d85d47c-ts7lc" event={"ID":"48fbb037-6558-4de9-ad75-5e1e26ed6b45","Type":"ContainerDied","Data":"fb487ba3931d5ce47947e0849c9b512a3040be3b7ecf2e03b038a4a486797612"} Nov 29 08:04:32 crc kubenswrapper[4795]: I1129 08:04:32.216195 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-z6klw" podStartSLOduration=5.216157262 podStartE2EDuration="5.216157262s" podCreationTimestamp="2025-11-29 08:04:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:04:32.200671111 +0000 UTC m=+1518.176246901" watchObservedRunningTime="2025-11-29 08:04:32.216157262 +0000 UTC m=+1518.191733052" Nov 29 08:04:32 crc kubenswrapper[4795]: I1129 08:04:32.254070 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-w9kg5" podStartSLOduration=5.254046569 podStartE2EDuration="5.254046569s" podCreationTimestamp="2025-11-29 08:04:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:04:32.241828272 +0000 UTC m=+1518.217404072" watchObservedRunningTime="2025-11-29 08:04:32.254046569 +0000 UTC m=+1518.229622359" Nov 29 08:04:32 crc kubenswrapper[4795]: I1129 08:04:32.502424 4795 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/ceilometer-0"] Nov 29 08:04:32 crc kubenswrapper[4795]: I1129 08:04:32.691143 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-ts7lc" Nov 29 08:04:32 crc kubenswrapper[4795]: I1129 08:04:32.877296 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48fbb037-6558-4de9-ad75-5e1e26ed6b45-config\") pod \"48fbb037-6558-4de9-ad75-5e1e26ed6b45\" (UID: \"48fbb037-6558-4de9-ad75-5e1e26ed6b45\") " Nov 29 08:04:32 crc kubenswrapper[4795]: I1129 08:04:32.877693 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48fbb037-6558-4de9-ad75-5e1e26ed6b45-ovsdbserver-nb\") pod \"48fbb037-6558-4de9-ad75-5e1e26ed6b45\" (UID: \"48fbb037-6558-4de9-ad75-5e1e26ed6b45\") " Nov 29 08:04:32 crc kubenswrapper[4795]: I1129 08:04:32.877885 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48fbb037-6558-4de9-ad75-5e1e26ed6b45-dns-svc\") pod \"48fbb037-6558-4de9-ad75-5e1e26ed6b45\" (UID: \"48fbb037-6558-4de9-ad75-5e1e26ed6b45\") " Nov 29 08:04:32 crc kubenswrapper[4795]: I1129 08:04:32.878019 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mf6mw\" (UniqueName: \"kubernetes.io/projected/48fbb037-6558-4de9-ad75-5e1e26ed6b45-kube-api-access-mf6mw\") pod \"48fbb037-6558-4de9-ad75-5e1e26ed6b45\" (UID: \"48fbb037-6558-4de9-ad75-5e1e26ed6b45\") " Nov 29 08:04:32 crc kubenswrapper[4795]: I1129 08:04:32.878147 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48fbb037-6558-4de9-ad75-5e1e26ed6b45-ovsdbserver-sb\") pod \"48fbb037-6558-4de9-ad75-5e1e26ed6b45\" (UID: \"48fbb037-6558-4de9-ad75-5e1e26ed6b45\") " Nov 29 
08:04:32 crc kubenswrapper[4795]: I1129 08:04:32.892846 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48fbb037-6558-4de9-ad75-5e1e26ed6b45-kube-api-access-mf6mw" (OuterVolumeSpecName: "kube-api-access-mf6mw") pod "48fbb037-6558-4de9-ad75-5e1e26ed6b45" (UID: "48fbb037-6558-4de9-ad75-5e1e26ed6b45"). InnerVolumeSpecName "kube-api-access-mf6mw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:04:32 crc kubenswrapper[4795]: I1129 08:04:32.915077 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48fbb037-6558-4de9-ad75-5e1e26ed6b45-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "48fbb037-6558-4de9-ad75-5e1e26ed6b45" (UID: "48fbb037-6558-4de9-ad75-5e1e26ed6b45"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:04:32 crc kubenswrapper[4795]: I1129 08:04:32.927287 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48fbb037-6558-4de9-ad75-5e1e26ed6b45-config" (OuterVolumeSpecName: "config") pod "48fbb037-6558-4de9-ad75-5e1e26ed6b45" (UID: "48fbb037-6558-4de9-ad75-5e1e26ed6b45"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:04:32 crc kubenswrapper[4795]: I1129 08:04:32.949274 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48fbb037-6558-4de9-ad75-5e1e26ed6b45-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "48fbb037-6558-4de9-ad75-5e1e26ed6b45" (UID: "48fbb037-6558-4de9-ad75-5e1e26ed6b45"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:04:32 crc kubenswrapper[4795]: I1129 08:04:32.957333 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48fbb037-6558-4de9-ad75-5e1e26ed6b45-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "48fbb037-6558-4de9-ad75-5e1e26ed6b45" (UID: "48fbb037-6558-4de9-ad75-5e1e26ed6b45"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:04:32 crc kubenswrapper[4795]: I1129 08:04:32.980503 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mf6mw\" (UniqueName: \"kubernetes.io/projected/48fbb037-6558-4de9-ad75-5e1e26ed6b45-kube-api-access-mf6mw\") on node \"crc\" DevicePath \"\"" Nov 29 08:04:32 crc kubenswrapper[4795]: I1129 08:04:32.980549 4795 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48fbb037-6558-4de9-ad75-5e1e26ed6b45-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 08:04:32 crc kubenswrapper[4795]: I1129 08:04:32.980563 4795 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48fbb037-6558-4de9-ad75-5e1e26ed6b45-config\") on node \"crc\" DevicePath \"\"" Nov 29 08:04:32 crc kubenswrapper[4795]: I1129 08:04:32.980577 4795 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48fbb037-6558-4de9-ad75-5e1e26ed6b45-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 08:04:32 crc kubenswrapper[4795]: I1129 08:04:32.980625 4795 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48fbb037-6558-4de9-ad75-5e1e26ed6b45-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 08:04:33 crc kubenswrapper[4795]: I1129 08:04:33.217169 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-pwdw9" 
event={"ID":"81ba1e26-17fa-4357-86d9-0e51b2aa3814","Type":"ContainerStarted","Data":"6344c4883c0f70725fbb83383f4df27f6c389ed311ae125d81350835a91e8c93"} Nov 29 08:04:33 crc kubenswrapper[4795]: I1129 08:04:33.223766 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9301745a-4dd1-469a-a37f-465f65a063e4","Type":"ContainerStarted","Data":"24794ffc0e90373df1d46b27f5e90bbd14d4dfe1f954c57584b2ffc5e6e09792"} Nov 29 08:04:33 crc kubenswrapper[4795]: I1129 08:04:33.230724 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9d85d47c-ts7lc" event={"ID":"48fbb037-6558-4de9-ad75-5e1e26ed6b45","Type":"ContainerDied","Data":"769c631f2fe6145b335634f7c855ac217d1741ca089f7ebe1c8ba77e63c44a64"} Nov 29 08:04:33 crc kubenswrapper[4795]: I1129 08:04:33.230806 4795 scope.go:117] "RemoveContainer" containerID="fb487ba3931d5ce47947e0849c9b512a3040be3b7ecf2e03b038a4a486797612" Nov 29 08:04:33 crc kubenswrapper[4795]: I1129 08:04:33.231026 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-ts7lc" Nov 29 08:04:33 crc kubenswrapper[4795]: I1129 08:04:33.399681 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-ts7lc"] Nov 29 08:04:33 crc kubenswrapper[4795]: I1129 08:04:33.413944 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-ts7lc"] Nov 29 08:04:33 crc kubenswrapper[4795]: E1129 08:04:33.676007 4795 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddebbb091_39b4_4d01_a468_7f9c7b65ff7e.slice/crio-02d3bccc3eb069db5cdb13679eddeeaf3fcc70c21b41a2a903130dac092b2cff.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48fbb037_6558_4de9_ad75_5e1e26ed6b45.slice\": RecentStats: unable to find data in memory cache]" Nov 29 08:04:34 crc kubenswrapper[4795]: I1129 08:04:34.242808 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6ffb94d8ff-pwdw9" Nov 29 08:04:34 crc kubenswrapper[4795]: I1129 08:04:34.262966 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6ffb94d8ff-pwdw9" podStartSLOduration=6.262943345 podStartE2EDuration="6.262943345s" podCreationTimestamp="2025-11-29 08:04:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:04:34.258788627 +0000 UTC m=+1520.234364417" watchObservedRunningTime="2025-11-29 08:04:34.262943345 +0000 UTC m=+1520.238519135" Nov 29 08:04:34 crc kubenswrapper[4795]: I1129 08:04:34.300940 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=23.300920405 podStartE2EDuration="23.300920405s" 
podCreationTimestamp="2025-11-29 08:04:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:04:34.291620911 +0000 UTC m=+1520.267196701" watchObservedRunningTime="2025-11-29 08:04:34.300920405 +0000 UTC m=+1520.276496195" Nov 29 08:04:34 crc kubenswrapper[4795]: I1129 08:04:34.307746 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48fbb037-6558-4de9-ad75-5e1e26ed6b45" path="/var/lib/kubelet/pods/48fbb037-6558-4de9-ad75-5e1e26ed6b45/volumes" Nov 29 08:04:35 crc kubenswrapper[4795]: E1129 08:04:35.220871 4795 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddebbb091_39b4_4d01_a468_7f9c7b65ff7e.slice/crio-02d3bccc3eb069db5cdb13679eddeeaf3fcc70c21b41a2a903130dac092b2cff.scope\": RecentStats: unable to find data in memory cache]" Nov 29 08:04:35 crc kubenswrapper[4795]: I1129 08:04:35.296774 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"28437f9f-e92e-46d7-9ffb-fcdda5dea25e","Type":"ContainerStarted","Data":"26cd9a25ed9c24941976153c99c7330313eccfdf161bccb13b1c1efd26b5124f"} Nov 29 08:04:36 crc kubenswrapper[4795]: I1129 08:04:36.318995 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"28437f9f-e92e-46d7-9ffb-fcdda5dea25e","Type":"ContainerStarted","Data":"b0d59c41a7a30c06808a97a42f8b2451f8fd12bb772c2d44368f0c678ea2147d"} Nov 29 08:04:37 crc kubenswrapper[4795]: I1129 08:04:37.340453 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"28437f9f-e92e-46d7-9ffb-fcdda5dea25e","Type":"ContainerStarted","Data":"7b7b4a4b8cedb8e0719a720e65a62846c4e6e6cd5524b6f62248294d0acf1184"} Nov 29 08:04:37 crc kubenswrapper[4795]: I1129 08:04:37.463464 4795 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:38 crc kubenswrapper[4795]: I1129 08:04:38.360976 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"28437f9f-e92e-46d7-9ffb-fcdda5dea25e","Type":"ContainerStarted","Data":"b1fef406767ad49b06e6842ea9c114c5cd1fc51ec7d728aa8217894cf9f4e403"} Nov 29 08:04:38 crc kubenswrapper[4795]: I1129 08:04:38.941603 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6ffb94d8ff-pwdw9" Nov 29 08:04:39 crc kubenswrapper[4795]: I1129 08:04:39.020586 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-24d2f"] Nov 29 08:04:39 crc kubenswrapper[4795]: I1129 08:04:39.021108 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-24d2f" podUID="439904f7-bd39-4def-96c8-8975d412574f" containerName="dnsmasq-dns" containerID="cri-o://d472010b37d590e430d767a87d8ac19b69bda8f43512edc03908fb14c169e8a8" gracePeriod=10 Nov 29 08:04:40 crc kubenswrapper[4795]: I1129 08:04:40.769011 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8b2xv"] Nov 29 08:04:40 crc kubenswrapper[4795]: E1129 08:04:40.769814 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48fbb037-6558-4de9-ad75-5e1e26ed6b45" containerName="init" Nov 29 08:04:40 crc kubenswrapper[4795]: I1129 08:04:40.769826 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="48fbb037-6558-4de9-ad75-5e1e26ed6b45" containerName="init" Nov 29 08:04:40 crc kubenswrapper[4795]: I1129 08:04:40.770014 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="48fbb037-6558-4de9-ad75-5e1e26ed6b45" containerName="init" Nov 29 08:04:40 crc kubenswrapper[4795]: I1129 08:04:40.771603 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8b2xv" Nov 29 08:04:40 crc kubenswrapper[4795]: I1129 08:04:40.826731 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8b2xv"] Nov 29 08:04:40 crc kubenswrapper[4795]: I1129 08:04:40.933749 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19557359-2cdd-493b-a9ad-0770bf37206e-catalog-content\") pod \"certified-operators-8b2xv\" (UID: \"19557359-2cdd-493b-a9ad-0770bf37206e\") " pod="openshift-marketplace/certified-operators-8b2xv" Nov 29 08:04:40 crc kubenswrapper[4795]: I1129 08:04:40.933810 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19557359-2cdd-493b-a9ad-0770bf37206e-utilities\") pod \"certified-operators-8b2xv\" (UID: \"19557359-2cdd-493b-a9ad-0770bf37206e\") " pod="openshift-marketplace/certified-operators-8b2xv" Nov 29 08:04:40 crc kubenswrapper[4795]: I1129 08:04:40.934053 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t4hf\" (UniqueName: \"kubernetes.io/projected/19557359-2cdd-493b-a9ad-0770bf37206e-kube-api-access-7t4hf\") pod \"certified-operators-8b2xv\" (UID: \"19557359-2cdd-493b-a9ad-0770bf37206e\") " pod="openshift-marketplace/certified-operators-8b2xv" Nov 29 08:04:41 crc kubenswrapper[4795]: I1129 08:04:41.036459 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t4hf\" (UniqueName: \"kubernetes.io/projected/19557359-2cdd-493b-a9ad-0770bf37206e-kube-api-access-7t4hf\") pod \"certified-operators-8b2xv\" (UID: \"19557359-2cdd-493b-a9ad-0770bf37206e\") " pod="openshift-marketplace/certified-operators-8b2xv" Nov 29 08:04:41 crc kubenswrapper[4795]: I1129 08:04:41.036961 4795 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19557359-2cdd-493b-a9ad-0770bf37206e-catalog-content\") pod \"certified-operators-8b2xv\" (UID: \"19557359-2cdd-493b-a9ad-0770bf37206e\") " pod="openshift-marketplace/certified-operators-8b2xv" Nov 29 08:04:41 crc kubenswrapper[4795]: I1129 08:04:41.036990 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19557359-2cdd-493b-a9ad-0770bf37206e-utilities\") pod \"certified-operators-8b2xv\" (UID: \"19557359-2cdd-493b-a9ad-0770bf37206e\") " pod="openshift-marketplace/certified-operators-8b2xv" Nov 29 08:04:41 crc kubenswrapper[4795]: I1129 08:04:41.037524 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19557359-2cdd-493b-a9ad-0770bf37206e-utilities\") pod \"certified-operators-8b2xv\" (UID: \"19557359-2cdd-493b-a9ad-0770bf37206e\") " pod="openshift-marketplace/certified-operators-8b2xv" Nov 29 08:04:41 crc kubenswrapper[4795]: I1129 08:04:41.037497 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19557359-2cdd-493b-a9ad-0770bf37206e-catalog-content\") pod \"certified-operators-8b2xv\" (UID: \"19557359-2cdd-493b-a9ad-0770bf37206e\") " pod="openshift-marketplace/certified-operators-8b2xv" Nov 29 08:04:41 crc kubenswrapper[4795]: I1129 08:04:41.073085 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t4hf\" (UniqueName: \"kubernetes.io/projected/19557359-2cdd-493b-a9ad-0770bf37206e-kube-api-access-7t4hf\") pod \"certified-operators-8b2xv\" (UID: \"19557359-2cdd-493b-a9ad-0770bf37206e\") " pod="openshift-marketplace/certified-operators-8b2xv" Nov 29 08:04:41 crc kubenswrapper[4795]: I1129 08:04:41.100032 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8b2xv" Nov 29 08:04:41 crc kubenswrapper[4795]: I1129 08:04:41.450366 4795 generic.go:334] "Generic (PLEG): container finished" podID="439904f7-bd39-4def-96c8-8975d412574f" containerID="d472010b37d590e430d767a87d8ac19b69bda8f43512edc03908fb14c169e8a8" exitCode=0 Nov 29 08:04:41 crc kubenswrapper[4795]: I1129 08:04:41.450632 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-24d2f" event={"ID":"439904f7-bd39-4def-96c8-8975d412574f","Type":"ContainerDied","Data":"d472010b37d590e430d767a87d8ac19b69bda8f43512edc03908fb14c169e8a8"} Nov 29 08:04:41 crc kubenswrapper[4795]: I1129 08:04:41.737374 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8b2xv"] Nov 29 08:04:41 crc kubenswrapper[4795]: W1129 08:04:41.770424 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19557359_2cdd_493b_a9ad_0770bf37206e.slice/crio-60bcc8a2c85ae650a7e114488481593e8d37afcbbaf7becceabfd519e711e933 WatchSource:0}: Error finding container 60bcc8a2c85ae650a7e114488481593e8d37afcbbaf7becceabfd519e711e933: Status 404 returned error can't find the container with id 60bcc8a2c85ae650a7e114488481593e8d37afcbbaf7becceabfd519e711e933 Nov 29 08:04:41 crc kubenswrapper[4795]: I1129 08:04:41.941119 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:04:41 crc kubenswrapper[4795]: I1129 08:04:41.941519 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:04:41 crc kubenswrapper[4795]: I1129 08:04:41.941578 4795 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" Nov 29 08:04:41 crc kubenswrapper[4795]: I1129 08:04:41.942700 4795 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"92bdb17e0171829dd368ff834d02a1b920553ce70337271f54fb17757386f741"} pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 08:04:41 crc kubenswrapper[4795]: I1129 08:04:41.942767 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" containerID="cri-o://92bdb17e0171829dd368ff834d02a1b920553ce70337271f54fb17757386f741" gracePeriod=600 Nov 29 08:04:42 crc kubenswrapper[4795]: I1129 08:04:42.358819 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-24d2f" podUID="439904f7-bd39-4def-96c8-8975d412574f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.146:5353: connect: connection refused" Nov 29 08:04:42 crc kubenswrapper[4795]: I1129 08:04:42.464239 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:42 crc kubenswrapper[4795]: I1129 08:04:42.471059 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:42 crc kubenswrapper[4795]: I1129 08:04:42.478979 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"28437f9f-e92e-46d7-9ffb-fcdda5dea25e","Type":"ContainerStarted","Data":"d70b4554ad22757f784fbf14277794dba5b8965b9245d9c1487a0c9a5d931cb0"} Nov 29 08:04:42 crc kubenswrapper[4795]: I1129 08:04:42.480699 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8b2xv" event={"ID":"19557359-2cdd-493b-a9ad-0770bf37206e","Type":"ContainerStarted","Data":"60bcc8a2c85ae650a7e114488481593e8d37afcbbaf7becceabfd519e711e933"} Nov 29 08:04:43 crc kubenswrapper[4795]: I1129 08:04:43.497520 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 29 08:04:43 crc kubenswrapper[4795]: E1129 08:04:43.974639 4795 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddebbb091_39b4_4d01_a468_7f9c7b65ff7e.slice/crio-02d3bccc3eb069db5cdb13679eddeeaf3fcc70c21b41a2a903130dac092b2cff.scope\": RecentStats: unable to find data in memory cache]" Nov 29 08:04:44 crc kubenswrapper[4795]: I1129 08:04:44.765137 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="a2fd879b-6f46-437e-acf0-c60e879af239" containerName="galera" probeResult="failure" output="command timed out" Nov 29 08:04:44 crc kubenswrapper[4795]: I1129 08:04:44.766505 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="a2fd879b-6f46-437e-acf0-c60e879af239" containerName="galera" probeResult="failure" output="command timed out" Nov 29 08:04:46 crc kubenswrapper[4795]: I1129 08:04:46.526859 4795 generic.go:334] "Generic (PLEG): container finished" podID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerID="92bdb17e0171829dd368ff834d02a1b920553ce70337271f54fb17757386f741" exitCode=0 Nov 29 08:04:46 crc kubenswrapper[4795]: I1129 08:04:46.527399 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerDied","Data":"92bdb17e0171829dd368ff834d02a1b920553ce70337271f54fb17757386f741"} Nov 29 08:04:46 crc kubenswrapper[4795]: I1129 08:04:46.527444 4795 scope.go:117] "RemoveContainer" containerID="d9f96d395ac312701f390d1e390e800e858ea785b49bf9e565d383c5df5e5a12" Nov 29 08:04:47 crc kubenswrapper[4795]: I1129 08:04:47.358961 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-24d2f" podUID="439904f7-bd39-4def-96c8-8975d412574f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.146:5353: connect: connection refused" Nov 29 08:04:48 crc kubenswrapper[4795]: E1129 08:04:48.104104 4795 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddebbb091_39b4_4d01_a468_7f9c7b65ff7e.slice/crio-02d3bccc3eb069db5cdb13679eddeeaf3fcc70c21b41a2a903130dac092b2cff.scope\": RecentStats: unable to find data in memory cache]" Nov 29 08:04:48 crc kubenswrapper[4795]: E1129 08:04:48.109342 4795 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddebbb091_39b4_4d01_a468_7f9c7b65ff7e.slice/crio-02d3bccc3eb069db5cdb13679eddeeaf3fcc70c21b41a2a903130dac092b2cff.scope\": RecentStats: unable to find data in memory cache]" Nov 29 08:04:50 crc kubenswrapper[4795]: E1129 08:04:50.443754 4795 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddebbb091_39b4_4d01_a468_7f9c7b65ff7e.slice/crio-02d3bccc3eb069db5cdb13679eddeeaf3fcc70c21b41a2a903130dac092b2cff.scope\": RecentStats: unable to find data in memory cache]" Nov 29 08:04:52 crc 
kubenswrapper[4795]: I1129 08:04:52.358464 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-24d2f" podUID="439904f7-bd39-4def-96c8-8975d412574f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.146:5353: connect: connection refused" Nov 29 08:04:52 crc kubenswrapper[4795]: I1129 08:04:52.359113 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-24d2f" Nov 29 08:04:54 crc kubenswrapper[4795]: E1129 08:04:54.020006 4795 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddebbb091_39b4_4d01_a468_7f9c7b65ff7e.slice/crio-02d3bccc3eb069db5cdb13679eddeeaf3fcc70c21b41a2a903130dac092b2cff.scope\": RecentStats: unable to find data in memory cache]" Nov 29 08:04:57 crc kubenswrapper[4795]: I1129 08:04:57.707432 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-f2mxf"] Nov 29 08:04:57 crc kubenswrapper[4795]: I1129 08:04:57.714794 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f2mxf" Nov 29 08:04:57 crc kubenswrapper[4795]: I1129 08:04:57.779339 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5c53f2b-bb1e-40be-9ca4-40b2c63545e4-catalog-content\") pod \"redhat-marketplace-f2mxf\" (UID: \"b5c53f2b-bb1e-40be-9ca4-40b2c63545e4\") " pod="openshift-marketplace/redhat-marketplace-f2mxf" Nov 29 08:04:57 crc kubenswrapper[4795]: I1129 08:04:57.779429 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6wtd\" (UniqueName: \"kubernetes.io/projected/b5c53f2b-bb1e-40be-9ca4-40b2c63545e4-kube-api-access-b6wtd\") pod \"redhat-marketplace-f2mxf\" (UID: \"b5c53f2b-bb1e-40be-9ca4-40b2c63545e4\") " pod="openshift-marketplace/redhat-marketplace-f2mxf" Nov 29 08:04:57 crc kubenswrapper[4795]: I1129 08:04:57.779477 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5c53f2b-bb1e-40be-9ca4-40b2c63545e4-utilities\") pod \"redhat-marketplace-f2mxf\" (UID: \"b5c53f2b-bb1e-40be-9ca4-40b2c63545e4\") " pod="openshift-marketplace/redhat-marketplace-f2mxf" Nov 29 08:04:57 crc kubenswrapper[4795]: I1129 08:04:57.791412 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f2mxf"] Nov 29 08:04:57 crc kubenswrapper[4795]: I1129 08:04:57.884520 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5c53f2b-bb1e-40be-9ca4-40b2c63545e4-catalog-content\") pod \"redhat-marketplace-f2mxf\" (UID: \"b5c53f2b-bb1e-40be-9ca4-40b2c63545e4\") " pod="openshift-marketplace/redhat-marketplace-f2mxf" Nov 29 08:04:57 crc kubenswrapper[4795]: I1129 08:04:57.884600 4795 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-b6wtd\" (UniqueName: \"kubernetes.io/projected/b5c53f2b-bb1e-40be-9ca4-40b2c63545e4-kube-api-access-b6wtd\") pod \"redhat-marketplace-f2mxf\" (UID: \"b5c53f2b-bb1e-40be-9ca4-40b2c63545e4\") " pod="openshift-marketplace/redhat-marketplace-f2mxf" Nov 29 08:04:57 crc kubenswrapper[4795]: I1129 08:04:57.884639 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5c53f2b-bb1e-40be-9ca4-40b2c63545e4-utilities\") pod \"redhat-marketplace-f2mxf\" (UID: \"b5c53f2b-bb1e-40be-9ca4-40b2c63545e4\") " pod="openshift-marketplace/redhat-marketplace-f2mxf" Nov 29 08:04:57 crc kubenswrapper[4795]: I1129 08:04:57.885209 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5c53f2b-bb1e-40be-9ca4-40b2c63545e4-utilities\") pod \"redhat-marketplace-f2mxf\" (UID: \"b5c53f2b-bb1e-40be-9ca4-40b2c63545e4\") " pod="openshift-marketplace/redhat-marketplace-f2mxf" Nov 29 08:04:57 crc kubenswrapper[4795]: I1129 08:04:57.885433 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5c53f2b-bb1e-40be-9ca4-40b2c63545e4-catalog-content\") pod \"redhat-marketplace-f2mxf\" (UID: \"b5c53f2b-bb1e-40be-9ca4-40b2c63545e4\") " pod="openshift-marketplace/redhat-marketplace-f2mxf" Nov 29 08:04:57 crc kubenswrapper[4795]: I1129 08:04:57.935544 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6wtd\" (UniqueName: \"kubernetes.io/projected/b5c53f2b-bb1e-40be-9ca4-40b2c63545e4-kube-api-access-b6wtd\") pod \"redhat-marketplace-f2mxf\" (UID: \"b5c53f2b-bb1e-40be-9ca4-40b2c63545e4\") " pod="openshift-marketplace/redhat-marketplace-f2mxf" Nov 29 08:04:58 crc kubenswrapper[4795]: I1129 08:04:58.040116 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f2mxf" Nov 29 08:05:01 crc kubenswrapper[4795]: I1129 08:05:01.727761 4795 generic.go:334] "Generic (PLEG): container finished" podID="7bb6a7d2-f5f5-4684-b085-66101954d8e9" containerID="a3eac150aab4f523499bb70dae16fc16fde8b4333aa5c440b79d3a4ff2e3923b" exitCode=0 Nov 29 08:05:01 crc kubenswrapper[4795]: I1129 08:05:01.727847 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-w9kg5" event={"ID":"7bb6a7d2-f5f5-4684-b085-66101954d8e9","Type":"ContainerDied","Data":"a3eac150aab4f523499bb70dae16fc16fde8b4333aa5c440b79d3a4ff2e3923b"} Nov 29 08:05:02 crc kubenswrapper[4795]: E1129 08:05:02.349800 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Nov 29 08:05:02 crc kubenswrapper[4795]: E1129 08:05:02.350985 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xbx69,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-db-sync-jf2ds_openstack(20682aa9-7d99-41f5-9214-2c08cb1533ec): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 08:05:02 crc kubenswrapper[4795]: E1129 08:05:02.352558 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-jf2ds" podUID="20682aa9-7d99-41f5-9214-2c08cb1533ec" Nov 29 08:05:02 crc kubenswrapper[4795]: I1129 08:05:02.361202 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-24d2f" podUID="439904f7-bd39-4def-96c8-8975d412574f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.146:5353: i/o timeout" Nov 29 08:05:02 crc kubenswrapper[4795]: I1129 08:05:02.504540 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-24d2f" Nov 29 08:05:02 crc kubenswrapper[4795]: I1129 08:05:02.525111 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/439904f7-bd39-4def-96c8-8975d412574f-dns-svc\") pod \"439904f7-bd39-4def-96c8-8975d412574f\" (UID: \"439904f7-bd39-4def-96c8-8975d412574f\") " Nov 29 08:05:02 crc kubenswrapper[4795]: I1129 08:05:02.525307 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bjzj\" (UniqueName: \"kubernetes.io/projected/439904f7-bd39-4def-96c8-8975d412574f-kube-api-access-2bjzj\") pod \"439904f7-bd39-4def-96c8-8975d412574f\" (UID: \"439904f7-bd39-4def-96c8-8975d412574f\") " Nov 29 08:05:02 crc kubenswrapper[4795]: I1129 08:05:02.525425 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/439904f7-bd39-4def-96c8-8975d412574f-ovsdbserver-nb\") 
pod \"439904f7-bd39-4def-96c8-8975d412574f\" (UID: \"439904f7-bd39-4def-96c8-8975d412574f\") " Nov 29 08:05:02 crc kubenswrapper[4795]: I1129 08:05:02.525547 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/439904f7-bd39-4def-96c8-8975d412574f-config\") pod \"439904f7-bd39-4def-96c8-8975d412574f\" (UID: \"439904f7-bd39-4def-96c8-8975d412574f\") " Nov 29 08:05:02 crc kubenswrapper[4795]: I1129 08:05:02.525648 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/439904f7-bd39-4def-96c8-8975d412574f-ovsdbserver-sb\") pod \"439904f7-bd39-4def-96c8-8975d412574f\" (UID: \"439904f7-bd39-4def-96c8-8975d412574f\") " Nov 29 08:05:02 crc kubenswrapper[4795]: I1129 08:05:02.574145 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/439904f7-bd39-4def-96c8-8975d412574f-kube-api-access-2bjzj" (OuterVolumeSpecName: "kube-api-access-2bjzj") pod "439904f7-bd39-4def-96c8-8975d412574f" (UID: "439904f7-bd39-4def-96c8-8975d412574f"). InnerVolumeSpecName "kube-api-access-2bjzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:05:02 crc kubenswrapper[4795]: I1129 08:05:02.628879 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bjzj\" (UniqueName: \"kubernetes.io/projected/439904f7-bd39-4def-96c8-8975d412574f-kube-api-access-2bjzj\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:02 crc kubenswrapper[4795]: I1129 08:05:02.636151 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/439904f7-bd39-4def-96c8-8975d412574f-config" (OuterVolumeSpecName: "config") pod "439904f7-bd39-4def-96c8-8975d412574f" (UID: "439904f7-bd39-4def-96c8-8975d412574f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:05:02 crc kubenswrapper[4795]: I1129 08:05:02.656009 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/439904f7-bd39-4def-96c8-8975d412574f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "439904f7-bd39-4def-96c8-8975d412574f" (UID: "439904f7-bd39-4def-96c8-8975d412574f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:05:02 crc kubenswrapper[4795]: I1129 08:05:02.670944 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/439904f7-bd39-4def-96c8-8975d412574f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "439904f7-bd39-4def-96c8-8975d412574f" (UID: "439904f7-bd39-4def-96c8-8975d412574f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:05:02 crc kubenswrapper[4795]: I1129 08:05:02.676143 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/439904f7-bd39-4def-96c8-8975d412574f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "439904f7-bd39-4def-96c8-8975d412574f" (UID: "439904f7-bd39-4def-96c8-8975d412574f"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:05:02 crc kubenswrapper[4795]: I1129 08:05:02.730558 4795 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/439904f7-bd39-4def-96c8-8975d412574f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:02 crc kubenswrapper[4795]: I1129 08:05:02.730628 4795 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/439904f7-bd39-4def-96c8-8975d412574f-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:02 crc kubenswrapper[4795]: I1129 08:05:02.730641 4795 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/439904f7-bd39-4def-96c8-8975d412574f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:02 crc kubenswrapper[4795]: I1129 08:05:02.730653 4795 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/439904f7-bd39-4def-96c8-8975d412574f-config\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:02 crc kubenswrapper[4795]: I1129 08:05:02.741740 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-24d2f" event={"ID":"439904f7-bd39-4def-96c8-8975d412574f","Type":"ContainerDied","Data":"fe1329861aa0dea5c27e5c6dace70b6403e45ca290477f1ea8363138ba5e6674"} Nov 29 08:05:02 crc kubenswrapper[4795]: I1129 08:05:02.741761 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-24d2f" Nov 29 08:05:02 crc kubenswrapper[4795]: E1129 08:05:02.743581 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-jf2ds" podUID="20682aa9-7d99-41f5-9214-2c08cb1533ec" Nov 29 08:05:02 crc kubenswrapper[4795]: I1129 08:05:02.804791 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-24d2f"] Nov 29 08:05:02 crc kubenswrapper[4795]: I1129 08:05:02.819039 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-24d2f"] Nov 29 08:05:04 crc kubenswrapper[4795]: E1129 08:05:04.063691 4795 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddebbb091_39b4_4d01_a468_7f9c7b65ff7e.slice/crio-02d3bccc3eb069db5cdb13679eddeeaf3fcc70c21b41a2a903130dac092b2cff.scope\": RecentStats: unable to find data in memory cache]" Nov 29 08:05:04 crc kubenswrapper[4795]: I1129 08:05:04.293055 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="439904f7-bd39-4def-96c8-8975d412574f" path="/var/lib/kubelet/pods/439904f7-bd39-4def-96c8-8975d412574f/volumes" Nov 29 08:05:05 crc kubenswrapper[4795]: E1129 08:05:05.220266 4795 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddebbb091_39b4_4d01_a468_7f9c7b65ff7e.slice/crio-02d3bccc3eb069db5cdb13679eddeeaf3fcc70c21b41a2a903130dac092b2cff.scope\": RecentStats: unable to find data in memory cache]" Nov 29 08:05:07 crc kubenswrapper[4795]: I1129 08:05:07.362307 4795 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/dnsmasq-dns-b8fbc5445-24d2f" podUID="439904f7-bd39-4def-96c8-8975d412574f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.146:5353: i/o timeout" Nov 29 08:05:12 crc kubenswrapper[4795]: I1129 08:05:12.881921 4795 generic.go:334] "Generic (PLEG): container finished" podID="082a7f2c-1081-4af8-91c8-60a13d787746" containerID="245d605dd8455419c5b160025a93f2e21c118a545505ae76282d573abadf30be" exitCode=0 Nov 29 08:05:12 crc kubenswrapper[4795]: I1129 08:05:12.881997 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7lg28" event={"ID":"082a7f2c-1081-4af8-91c8-60a13d787746","Type":"ContainerDied","Data":"245d605dd8455419c5b160025a93f2e21c118a545505ae76282d573abadf30be"} Nov 29 08:05:12 crc kubenswrapper[4795]: I1129 08:05:12.891379 4795 generic.go:334] "Generic (PLEG): container finished" podID="19557359-2cdd-493b-a9ad-0770bf37206e" containerID="186c40ddf7e9415a10044c140f413f84d9509eefda35df4455eec069b836e0a7" exitCode=0 Nov 29 08:05:12 crc kubenswrapper[4795]: I1129 08:05:12.891448 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8b2xv" event={"ID":"19557359-2cdd-493b-a9ad-0770bf37206e","Type":"ContainerDied","Data":"186c40ddf7e9415a10044c140f413f84d9509eefda35df4455eec069b836e0a7"} Nov 29 08:05:14 crc kubenswrapper[4795]: E1129 08:05:14.445024 4795 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddebbb091_39b4_4d01_a468_7f9c7b65ff7e.slice/crio-02d3bccc3eb069db5cdb13679eddeeaf3fcc70c21b41a2a903130dac092b2cff.scope\": RecentStats: unable to find data in memory cache]" Nov 29 08:05:22 crc kubenswrapper[4795]: E1129 08:05:22.414097 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Nov 29 08:05:22 crc kubenswrapper[4795]: E1129 08:05:22.414908 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c9k97,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},Live
nessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-tvhw5_openstack(1082de8f-47bf-41ac-875f-8d7db0baab7b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 08:05:22 crc kubenswrapper[4795]: E1129 08:05:22.419698 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-tvhw5" podUID="1082de8f-47bf-41ac-875f-8d7db0baab7b" Nov 29 08:05:23 crc kubenswrapper[4795]: E1129 08:05:23.029444 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-tvhw5" podUID="1082de8f-47bf-41ac-875f-8d7db0baab7b" Nov 29 08:05:24 crc kubenswrapper[4795]: I1129 08:05:24.061399 4795 generic.go:334] "Generic (PLEG): container finished" podID="c0c3141e-5bf7-49c4-80fd-2f0d0b361491" containerID="394aa5c2b264af45bd9815054a03cda1553ea3317bcd26facfc2c583ad2145f2" exitCode=0 Nov 29 08:05:24 crc kubenswrapper[4795]: I1129 08:05:24.061649 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-z6klw" 
event={"ID":"c0c3141e-5bf7-49c4-80fd-2f0d0b361491","Type":"ContainerDied","Data":"394aa5c2b264af45bd9815054a03cda1553ea3317bcd26facfc2c583ad2145f2"} Nov 29 08:05:24 crc kubenswrapper[4795]: E1129 08:05:24.843013 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Nov 29 08:05:24 crc kubenswrapper[4795]: E1129 08:05:24.843460 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n79h56fhcfh5f7hc5h695h549hdch5ch68dh66bh56bhfbh69h65bh5d6h8bh598h5chc4h5f7h56dh54chcdh665h5fh5dh555h55h55bh5dbh5dbq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hmxvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},
LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(c6dbced2-2ca2-4189-aad3-7a872ab6209c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 08:05:24 crc kubenswrapper[4795]: I1129 08:05:24.948820 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-w9kg5" Nov 29 08:05:25 crc kubenswrapper[4795]: I1129 08:05:25.097208 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-w9kg5" Nov 29 08:05:25 crc kubenswrapper[4795]: I1129 08:05:25.097304 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-w9kg5" event={"ID":"7bb6a7d2-f5f5-4684-b085-66101954d8e9","Type":"ContainerDied","Data":"244c4eec23b056bb081bd6cac149215b9c5aca9c2e2cf23a812d19509aa875c3"} Nov 29 08:05:25 crc kubenswrapper[4795]: I1129 08:05:25.097340 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="244c4eec23b056bb081bd6cac149215b9c5aca9c2e2cf23a812d19509aa875c3" Nov 29 08:05:25 crc kubenswrapper[4795]: I1129 08:05:25.105574 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bb6a7d2-f5f5-4684-b085-66101954d8e9-config-data\") pod \"7bb6a7d2-f5f5-4684-b085-66101954d8e9\" (UID: \"7bb6a7d2-f5f5-4684-b085-66101954d8e9\") " Nov 29 08:05:25 crc kubenswrapper[4795]: I1129 08:05:25.105647 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7bb6a7d2-f5f5-4684-b085-66101954d8e9-fernet-keys\") pod \"7bb6a7d2-f5f5-4684-b085-66101954d8e9\" (UID: \"7bb6a7d2-f5f5-4684-b085-66101954d8e9\") " Nov 29 08:05:25 crc kubenswrapper[4795]: I1129 08:05:25.105699 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22lg5\" (UniqueName: \"kubernetes.io/projected/7bb6a7d2-f5f5-4684-b085-66101954d8e9-kube-api-access-22lg5\") pod \"7bb6a7d2-f5f5-4684-b085-66101954d8e9\" (UID: \"7bb6a7d2-f5f5-4684-b085-66101954d8e9\") " Nov 29 08:05:25 crc kubenswrapper[4795]: I1129 08:05:25.105747 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7bb6a7d2-f5f5-4684-b085-66101954d8e9-credential-keys\") pod \"7bb6a7d2-f5f5-4684-b085-66101954d8e9\" (UID: 
\"7bb6a7d2-f5f5-4684-b085-66101954d8e9\") " Nov 29 08:05:25 crc kubenswrapper[4795]: I1129 08:05:25.105781 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bb6a7d2-f5f5-4684-b085-66101954d8e9-combined-ca-bundle\") pod \"7bb6a7d2-f5f5-4684-b085-66101954d8e9\" (UID: \"7bb6a7d2-f5f5-4684-b085-66101954d8e9\") " Nov 29 08:05:25 crc kubenswrapper[4795]: I1129 08:05:25.105807 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bb6a7d2-f5f5-4684-b085-66101954d8e9-scripts\") pod \"7bb6a7d2-f5f5-4684-b085-66101954d8e9\" (UID: \"7bb6a7d2-f5f5-4684-b085-66101954d8e9\") " Nov 29 08:05:25 crc kubenswrapper[4795]: I1129 08:05:25.113121 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bb6a7d2-f5f5-4684-b085-66101954d8e9-scripts" (OuterVolumeSpecName: "scripts") pod "7bb6a7d2-f5f5-4684-b085-66101954d8e9" (UID: "7bb6a7d2-f5f5-4684-b085-66101954d8e9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:05:25 crc kubenswrapper[4795]: I1129 08:05:25.113140 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb6a7d2-f5f5-4684-b085-66101954d8e9-kube-api-access-22lg5" (OuterVolumeSpecName: "kube-api-access-22lg5") pod "7bb6a7d2-f5f5-4684-b085-66101954d8e9" (UID: "7bb6a7d2-f5f5-4684-b085-66101954d8e9"). InnerVolumeSpecName "kube-api-access-22lg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:05:25 crc kubenswrapper[4795]: I1129 08:05:25.113250 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bb6a7d2-f5f5-4684-b085-66101954d8e9-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "7bb6a7d2-f5f5-4684-b085-66101954d8e9" (UID: "7bb6a7d2-f5f5-4684-b085-66101954d8e9"). 
InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:05:25 crc kubenswrapper[4795]: I1129 08:05:25.124138 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bb6a7d2-f5f5-4684-b085-66101954d8e9-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7bb6a7d2-f5f5-4684-b085-66101954d8e9" (UID: "7bb6a7d2-f5f5-4684-b085-66101954d8e9"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:05:25 crc kubenswrapper[4795]: I1129 08:05:25.142972 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bb6a7d2-f5f5-4684-b085-66101954d8e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7bb6a7d2-f5f5-4684-b085-66101954d8e9" (UID: "7bb6a7d2-f5f5-4684-b085-66101954d8e9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:05:25 crc kubenswrapper[4795]: I1129 08:05:25.151665 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bb6a7d2-f5f5-4684-b085-66101954d8e9-config-data" (OuterVolumeSpecName: "config-data") pod "7bb6a7d2-f5f5-4684-b085-66101954d8e9" (UID: "7bb6a7d2-f5f5-4684-b085-66101954d8e9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:05:25 crc kubenswrapper[4795]: I1129 08:05:25.207787 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bb6a7d2-f5f5-4684-b085-66101954d8e9-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:25 crc kubenswrapper[4795]: I1129 08:05:25.208158 4795 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7bb6a7d2-f5f5-4684-b085-66101954d8e9-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:25 crc kubenswrapper[4795]: I1129 08:05:25.208168 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22lg5\" (UniqueName: \"kubernetes.io/projected/7bb6a7d2-f5f5-4684-b085-66101954d8e9-kube-api-access-22lg5\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:25 crc kubenswrapper[4795]: I1129 08:05:25.208180 4795 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7bb6a7d2-f5f5-4684-b085-66101954d8e9-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:25 crc kubenswrapper[4795]: I1129 08:05:25.208193 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bb6a7d2-f5f5-4684-b085-66101954d8e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:25 crc kubenswrapper[4795]: I1129 08:05:25.208202 4795 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bb6a7d2-f5f5-4684-b085-66101954d8e9-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:25 crc kubenswrapper[4795]: E1129 08:05:25.399845 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Nov 29 08:05:25 crc kubenswrapper[4795]: E1129 08:05:25.399996 4795 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-blpm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-85xp6_openstack(59666d8f-35e8-4c8a-887f-0c23881547ec): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 08:05:25 crc kubenswrapper[4795]: E1129 08:05:25.401390 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-85xp6" podUID="59666d8f-35e8-4c8a-887f-0c23881547ec" Nov 29 08:05:25 crc kubenswrapper[4795]: I1129 08:05:25.512516 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-7lg28" Nov 29 08:05:25 crc kubenswrapper[4795]: I1129 08:05:25.657345 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/082a7f2c-1081-4af8-91c8-60a13d787746-config-data\") pod \"082a7f2c-1081-4af8-91c8-60a13d787746\" (UID: \"082a7f2c-1081-4af8-91c8-60a13d787746\") " Nov 29 08:05:25 crc kubenswrapper[4795]: I1129 08:05:25.657549 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/082a7f2c-1081-4af8-91c8-60a13d787746-db-sync-config-data\") pod \"082a7f2c-1081-4af8-91c8-60a13d787746\" (UID: \"082a7f2c-1081-4af8-91c8-60a13d787746\") " Nov 29 08:05:25 crc kubenswrapper[4795]: I1129 08:05:25.657624 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sh84\" (UniqueName: \"kubernetes.io/projected/082a7f2c-1081-4af8-91c8-60a13d787746-kube-api-access-6sh84\") pod \"082a7f2c-1081-4af8-91c8-60a13d787746\" (UID: \"082a7f2c-1081-4af8-91c8-60a13d787746\") " Nov 29 08:05:25 crc kubenswrapper[4795]: I1129 08:05:25.657676 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/082a7f2c-1081-4af8-91c8-60a13d787746-combined-ca-bundle\") pod \"082a7f2c-1081-4af8-91c8-60a13d787746\" (UID: \"082a7f2c-1081-4af8-91c8-60a13d787746\") " Nov 29 08:05:25 crc kubenswrapper[4795]: I1129 08:05:25.663673 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/082a7f2c-1081-4af8-91c8-60a13d787746-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "082a7f2c-1081-4af8-91c8-60a13d787746" (UID: "082a7f2c-1081-4af8-91c8-60a13d787746"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:05:25 crc kubenswrapper[4795]: I1129 08:05:25.664325 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/082a7f2c-1081-4af8-91c8-60a13d787746-kube-api-access-6sh84" (OuterVolumeSpecName: "kube-api-access-6sh84") pod "082a7f2c-1081-4af8-91c8-60a13d787746" (UID: "082a7f2c-1081-4af8-91c8-60a13d787746"). InnerVolumeSpecName "kube-api-access-6sh84". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:05:25 crc kubenswrapper[4795]: I1129 08:05:25.717088 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/082a7f2c-1081-4af8-91c8-60a13d787746-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "082a7f2c-1081-4af8-91c8-60a13d787746" (UID: "082a7f2c-1081-4af8-91c8-60a13d787746"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:05:25 crc kubenswrapper[4795]: I1129 08:05:25.717496 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/082a7f2c-1081-4af8-91c8-60a13d787746-config-data" (OuterVolumeSpecName: "config-data") pod "082a7f2c-1081-4af8-91c8-60a13d787746" (UID: "082a7f2c-1081-4af8-91c8-60a13d787746"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:05:25 crc kubenswrapper[4795]: I1129 08:05:25.760262 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/082a7f2c-1081-4af8-91c8-60a13d787746-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:25 crc kubenswrapper[4795]: I1129 08:05:25.760314 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/082a7f2c-1081-4af8-91c8-60a13d787746-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:25 crc kubenswrapper[4795]: I1129 08:05:25.760328 4795 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/082a7f2c-1081-4af8-91c8-60a13d787746-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:25 crc kubenswrapper[4795]: I1129 08:05:25.760344 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sh84\" (UniqueName: \"kubernetes.io/projected/082a7f2c-1081-4af8-91c8-60a13d787746-kube-api-access-6sh84\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.054712 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-w9kg5"] Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.064569 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-w9kg5"] Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.114749 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7lg28" event={"ID":"082a7f2c-1081-4af8-91c8-60a13d787746","Type":"ContainerDied","Data":"70eb584be41fd910946c29edd7ab7dfbfb7236424308609d2c1672cf655e3a1e"} Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.114803 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70eb584be41fd910946c29edd7ab7dfbfb7236424308609d2c1672cf655e3a1e" Nov 29 08:05:26 
crc kubenswrapper[4795]: I1129 08:05:26.114803 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-7lg28" Nov 29 08:05:26 crc kubenswrapper[4795]: E1129 08:05:26.119017 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-85xp6" podUID="59666d8f-35e8-4c8a-887f-0c23881547ec" Nov 29 08:05:26 crc kubenswrapper[4795]: E1129 08:05:26.154430 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Nov 29 08:05:26 crc kubenswrapper[4795]: E1129 08:05:26.154583 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xpfp8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-fdl5k_openstack(f6dff0cb-a174-4227-ad82-21a12aee68f5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 08:05:26 crc kubenswrapper[4795]: E1129 08:05:26.156477 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-fdl5k" 
podUID="f6dff0cb-a174-4227-ad82-21a12aee68f5" Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.170859 4795 scope.go:117] "RemoveContainer" containerID="d472010b37d590e430d767a87d8ac19b69bda8f43512edc03908fb14c169e8a8" Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.174465 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-48c6t"] Nov 29 08:05:26 crc kubenswrapper[4795]: E1129 08:05:26.175022 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="439904f7-bd39-4def-96c8-8975d412574f" containerName="init" Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.175037 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="439904f7-bd39-4def-96c8-8975d412574f" containerName="init" Nov 29 08:05:26 crc kubenswrapper[4795]: E1129 08:05:26.175054 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="439904f7-bd39-4def-96c8-8975d412574f" containerName="dnsmasq-dns" Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.175062 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="439904f7-bd39-4def-96c8-8975d412574f" containerName="dnsmasq-dns" Nov 29 08:05:26 crc kubenswrapper[4795]: E1129 08:05:26.175119 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="082a7f2c-1081-4af8-91c8-60a13d787746" containerName="glance-db-sync" Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.175128 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="082a7f2c-1081-4af8-91c8-60a13d787746" containerName="glance-db-sync" Nov 29 08:05:26 crc kubenswrapper[4795]: E1129 08:05:26.175143 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bb6a7d2-f5f5-4684-b085-66101954d8e9" containerName="keystone-bootstrap" Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.175150 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bb6a7d2-f5f5-4684-b085-66101954d8e9" containerName="keystone-bootstrap" Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 
08:05:26.175512 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bb6a7d2-f5f5-4684-b085-66101954d8e9" containerName="keystone-bootstrap" Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.175557 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="082a7f2c-1081-4af8-91c8-60a13d787746" containerName="glance-db-sync" Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.175583 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="439904f7-bd39-4def-96c8-8975d412574f" containerName="dnsmasq-dns" Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.176585 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-48c6t" Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.180891 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.181105 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-j4r47" Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.181266 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.182762 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.183186 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.183932 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-48c6t"] Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.269244 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f63e39cf-970a-40df-a823-d1e60521e702-config-data\") pod 
\"keystone-bootstrap-48c6t\" (UID: \"f63e39cf-970a-40df-a823-d1e60521e702\") " pod="openstack/keystone-bootstrap-48c6t" Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.269300 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f63e39cf-970a-40df-a823-d1e60521e702-scripts\") pod \"keystone-bootstrap-48c6t\" (UID: \"f63e39cf-970a-40df-a823-d1e60521e702\") " pod="openstack/keystone-bootstrap-48c6t" Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.269345 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f63e39cf-970a-40df-a823-d1e60521e702-combined-ca-bundle\") pod \"keystone-bootstrap-48c6t\" (UID: \"f63e39cf-970a-40df-a823-d1e60521e702\") " pod="openstack/keystone-bootstrap-48c6t" Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.269382 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f63e39cf-970a-40df-a823-d1e60521e702-credential-keys\") pod \"keystone-bootstrap-48c6t\" (UID: \"f63e39cf-970a-40df-a823-d1e60521e702\") " pod="openstack/keystone-bootstrap-48c6t" Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.269641 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jclnx\" (UniqueName: \"kubernetes.io/projected/f63e39cf-970a-40df-a823-d1e60521e702-kube-api-access-jclnx\") pod \"keystone-bootstrap-48c6t\" (UID: \"f63e39cf-970a-40df-a823-d1e60521e702\") " pod="openstack/keystone-bootstrap-48c6t" Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.269814 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f63e39cf-970a-40df-a823-d1e60521e702-fernet-keys\") pod 
\"keystone-bootstrap-48c6t\" (UID: \"f63e39cf-970a-40df-a823-d1e60521e702\") " pod="openstack/keystone-bootstrap-48c6t" Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.292276 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb6a7d2-f5f5-4684-b085-66101954d8e9" path="/var/lib/kubelet/pods/7bb6a7d2-f5f5-4684-b085-66101954d8e9/volumes" Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.297115 4795 scope.go:117] "RemoveContainer" containerID="73895a78e65661359e4b10fde34c42b0170bd599ae2dd740e531f5090b1742de" Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.371376 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f63e39cf-970a-40df-a823-d1e60521e702-combined-ca-bundle\") pod \"keystone-bootstrap-48c6t\" (UID: \"f63e39cf-970a-40df-a823-d1e60521e702\") " pod="openstack/keystone-bootstrap-48c6t" Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.371639 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f63e39cf-970a-40df-a823-d1e60521e702-credential-keys\") pod \"keystone-bootstrap-48c6t\" (UID: \"f63e39cf-970a-40df-a823-d1e60521e702\") " pod="openstack/keystone-bootstrap-48c6t" Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.371737 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jclnx\" (UniqueName: \"kubernetes.io/projected/f63e39cf-970a-40df-a823-d1e60521e702-kube-api-access-jclnx\") pod \"keystone-bootstrap-48c6t\" (UID: \"f63e39cf-970a-40df-a823-d1e60521e702\") " pod="openstack/keystone-bootstrap-48c6t" Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.372306 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f63e39cf-970a-40df-a823-d1e60521e702-fernet-keys\") pod \"keystone-bootstrap-48c6t\" (UID: 
\"f63e39cf-970a-40df-a823-d1e60521e702\") " pod="openstack/keystone-bootstrap-48c6t" Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.372367 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f63e39cf-970a-40df-a823-d1e60521e702-config-data\") pod \"keystone-bootstrap-48c6t\" (UID: \"f63e39cf-970a-40df-a823-d1e60521e702\") " pod="openstack/keystone-bootstrap-48c6t" Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.372417 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f63e39cf-970a-40df-a823-d1e60521e702-scripts\") pod \"keystone-bootstrap-48c6t\" (UID: \"f63e39cf-970a-40df-a823-d1e60521e702\") " pod="openstack/keystone-bootstrap-48c6t" Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.375465 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f63e39cf-970a-40df-a823-d1e60521e702-credential-keys\") pod \"keystone-bootstrap-48c6t\" (UID: \"f63e39cf-970a-40df-a823-d1e60521e702\") " pod="openstack/keystone-bootstrap-48c6t" Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.377022 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f63e39cf-970a-40df-a823-d1e60521e702-combined-ca-bundle\") pod \"keystone-bootstrap-48c6t\" (UID: \"f63e39cf-970a-40df-a823-d1e60521e702\") " pod="openstack/keystone-bootstrap-48c6t" Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.378278 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f63e39cf-970a-40df-a823-d1e60521e702-config-data\") pod \"keystone-bootstrap-48c6t\" (UID: \"f63e39cf-970a-40df-a823-d1e60521e702\") " pod="openstack/keystone-bootstrap-48c6t" Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.393079 
4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f63e39cf-970a-40df-a823-d1e60521e702-fernet-keys\") pod \"keystone-bootstrap-48c6t\" (UID: \"f63e39cf-970a-40df-a823-d1e60521e702\") " pod="openstack/keystone-bootstrap-48c6t" Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.393290 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f63e39cf-970a-40df-a823-d1e60521e702-scripts\") pod \"keystone-bootstrap-48c6t\" (UID: \"f63e39cf-970a-40df-a823-d1e60521e702\") " pod="openstack/keystone-bootstrap-48c6t" Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.402222 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jclnx\" (UniqueName: \"kubernetes.io/projected/f63e39cf-970a-40df-a823-d1e60521e702-kube-api-access-jclnx\") pod \"keystone-bootstrap-48c6t\" (UID: \"f63e39cf-970a-40df-a823-d1e60521e702\") " pod="openstack/keystone-bootstrap-48c6t" Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.492973 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-z6klw" Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.510225 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-48c6t" Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.575747 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnrzd\" (UniqueName: \"kubernetes.io/projected/c0c3141e-5bf7-49c4-80fd-2f0d0b361491-kube-api-access-pnrzd\") pod \"c0c3141e-5bf7-49c4-80fd-2f0d0b361491\" (UID: \"c0c3141e-5bf7-49c4-80fd-2f0d0b361491\") " Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.576132 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c0c3141e-5bf7-49c4-80fd-2f0d0b361491-config\") pod \"c0c3141e-5bf7-49c4-80fd-2f0d0b361491\" (UID: \"c0c3141e-5bf7-49c4-80fd-2f0d0b361491\") " Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.576226 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0c3141e-5bf7-49c4-80fd-2f0d0b361491-combined-ca-bundle\") pod \"c0c3141e-5bf7-49c4-80fd-2f0d0b361491\" (UID: \"c0c3141e-5bf7-49c4-80fd-2f0d0b361491\") " Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.580719 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0c3141e-5bf7-49c4-80fd-2f0d0b361491-kube-api-access-pnrzd" (OuterVolumeSpecName: "kube-api-access-pnrzd") pod "c0c3141e-5bf7-49c4-80fd-2f0d0b361491" (UID: "c0c3141e-5bf7-49c4-80fd-2f0d0b361491"). InnerVolumeSpecName "kube-api-access-pnrzd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.679117 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnrzd\" (UniqueName: \"kubernetes.io/projected/c0c3141e-5bf7-49c4-80fd-2f0d0b361491-kube-api-access-pnrzd\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:26 crc kubenswrapper[4795]: I1129 08:05:26.855254 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f2mxf"] Nov 29 08:05:26 crc kubenswrapper[4795]: W1129 08:05:26.888915 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5c53f2b_bb1e_40be_9ca4_40b2c63545e4.slice/crio-8283ce3d7995553c4ac3f7b58850e186affdb1f02b5ac9f8022266532e0b625d WatchSource:0}: Error finding container 8283ce3d7995553c4ac3f7b58850e186affdb1f02b5ac9f8022266532e0b625d: Status 404 returned error can't find the container with id 8283ce3d7995553c4ac3f7b58850e186affdb1f02b5ac9f8022266532e0b625d Nov 29 08:05:27 crc kubenswrapper[4795]: I1129 08:05:27.034181 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0c3141e-5bf7-49c4-80fd-2f0d0b361491-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0c3141e-5bf7-49c4-80fd-2f0d0b361491" (UID: "c0c3141e-5bf7-49c4-80fd-2f0d0b361491"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:05:27 crc kubenswrapper[4795]: I1129 08:05:27.103300 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0c3141e-5bf7-49c4-80fd-2f0d0b361491-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:27 crc kubenswrapper[4795]: I1129 08:05:27.103527 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56798b757f-rv42r"] Nov 29 08:05:27 crc kubenswrapper[4795]: E1129 08:05:27.105708 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0c3141e-5bf7-49c4-80fd-2f0d0b361491" containerName="neutron-db-sync" Nov 29 08:05:27 crc kubenswrapper[4795]: I1129 08:05:27.105760 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0c3141e-5bf7-49c4-80fd-2f0d0b361491" containerName="neutron-db-sync" Nov 29 08:05:27 crc kubenswrapper[4795]: I1129 08:05:27.106113 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0c3141e-5bf7-49c4-80fd-2f0d0b361491" containerName="neutron-db-sync" Nov 29 08:05:27 crc kubenswrapper[4795]: I1129 08:05:27.111433 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56798b757f-rv42r" Nov 29 08:05:27 crc kubenswrapper[4795]: I1129 08:05:27.165088 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-rv42r"] Nov 29 08:05:27 crc kubenswrapper[4795]: I1129 08:05:27.188855 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f2mxf" event={"ID":"b5c53f2b-bb1e-40be-9ca4-40b2c63545e4","Type":"ContainerStarted","Data":"8283ce3d7995553c4ac3f7b58850e186affdb1f02b5ac9f8022266532e0b625d"} Nov 29 08:05:27 crc kubenswrapper[4795]: I1129 08:05:27.190212 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-z6klw" event={"ID":"c0c3141e-5bf7-49c4-80fd-2f0d0b361491","Type":"ContainerDied","Data":"ab09232fa054ef8344cace40f56de85ab852491522d1f5e86f1c56d74df6ad31"} Nov 29 08:05:27 crc kubenswrapper[4795]: I1129 08:05:27.190239 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab09232fa054ef8344cace40f56de85ab852491522d1f5e86f1c56d74df6ad31" Nov 29 08:05:27 crc kubenswrapper[4795]: I1129 08:05:27.190296 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-z6klw" Nov 29 08:05:27 crc kubenswrapper[4795]: I1129 08:05:27.212018 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b46442f-1cdc-4645-99e0-eb658f59600b-ovsdbserver-nb\") pod \"dnsmasq-dns-56798b757f-rv42r\" (UID: \"2b46442f-1cdc-4645-99e0-eb658f59600b\") " pod="openstack/dnsmasq-dns-56798b757f-rv42r" Nov 29 08:05:27 crc kubenswrapper[4795]: I1129 08:05:27.212183 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b46442f-1cdc-4645-99e0-eb658f59600b-ovsdbserver-sb\") pod \"dnsmasq-dns-56798b757f-rv42r\" (UID: \"2b46442f-1cdc-4645-99e0-eb658f59600b\") " pod="openstack/dnsmasq-dns-56798b757f-rv42r" Nov 29 08:05:27 crc kubenswrapper[4795]: I1129 08:05:27.212272 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b46442f-1cdc-4645-99e0-eb658f59600b-config\") pod \"dnsmasq-dns-56798b757f-rv42r\" (UID: \"2b46442f-1cdc-4645-99e0-eb658f59600b\") " pod="openstack/dnsmasq-dns-56798b757f-rv42r" Nov 29 08:05:27 crc kubenswrapper[4795]: I1129 08:05:27.212345 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b46442f-1cdc-4645-99e0-eb658f59600b-dns-svc\") pod \"dnsmasq-dns-56798b757f-rv42r\" (UID: \"2b46442f-1cdc-4645-99e0-eb658f59600b\") " pod="openstack/dnsmasq-dns-56798b757f-rv42r" Nov 29 08:05:27 crc kubenswrapper[4795]: I1129 08:05:27.212511 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8hwr\" (UniqueName: \"kubernetes.io/projected/2b46442f-1cdc-4645-99e0-eb658f59600b-kube-api-access-v8hwr\") pod \"dnsmasq-dns-56798b757f-rv42r\" (UID: 
\"2b46442f-1cdc-4645-99e0-eb658f59600b\") " pod="openstack/dnsmasq-dns-56798b757f-rv42r" Nov 29 08:05:27 crc kubenswrapper[4795]: I1129 08:05:27.269617 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-48c6t"] Nov 29 08:05:27 crc kubenswrapper[4795]: I1129 08:05:27.302251 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerStarted","Data":"cb788e622da5bdf804dcfa0c959dfb0158a848bc427ed90337dfd530ea78ea5d"} Nov 29 08:05:27 crc kubenswrapper[4795]: I1129 08:05:27.314294 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b46442f-1cdc-4645-99e0-eb658f59600b-ovsdbserver-nb\") pod \"dnsmasq-dns-56798b757f-rv42r\" (UID: \"2b46442f-1cdc-4645-99e0-eb658f59600b\") " pod="openstack/dnsmasq-dns-56798b757f-rv42r" Nov 29 08:05:27 crc kubenswrapper[4795]: I1129 08:05:27.315374 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b46442f-1cdc-4645-99e0-eb658f59600b-ovsdbserver-sb\") pod \"dnsmasq-dns-56798b757f-rv42r\" (UID: \"2b46442f-1cdc-4645-99e0-eb658f59600b\") " pod="openstack/dnsmasq-dns-56798b757f-rv42r" Nov 29 08:05:27 crc kubenswrapper[4795]: I1129 08:05:27.315587 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b46442f-1cdc-4645-99e0-eb658f59600b-config\") pod \"dnsmasq-dns-56798b757f-rv42r\" (UID: \"2b46442f-1cdc-4645-99e0-eb658f59600b\") " pod="openstack/dnsmasq-dns-56798b757f-rv42r" Nov 29 08:05:27 crc kubenswrapper[4795]: I1129 08:05:27.315815 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b46442f-1cdc-4645-99e0-eb658f59600b-dns-svc\") pod 
\"dnsmasq-dns-56798b757f-rv42r\" (UID: \"2b46442f-1cdc-4645-99e0-eb658f59600b\") " pod="openstack/dnsmasq-dns-56798b757f-rv42r" Nov 29 08:05:27 crc kubenswrapper[4795]: I1129 08:05:27.316155 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8hwr\" (UniqueName: \"kubernetes.io/projected/2b46442f-1cdc-4645-99e0-eb658f59600b-kube-api-access-v8hwr\") pod \"dnsmasq-dns-56798b757f-rv42r\" (UID: \"2b46442f-1cdc-4645-99e0-eb658f59600b\") " pod="openstack/dnsmasq-dns-56798b757f-rv42r" Nov 29 08:05:27 crc kubenswrapper[4795]: I1129 08:05:27.318339 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b46442f-1cdc-4645-99e0-eb658f59600b-ovsdbserver-nb\") pod \"dnsmasq-dns-56798b757f-rv42r\" (UID: \"2b46442f-1cdc-4645-99e0-eb658f59600b\") " pod="openstack/dnsmasq-dns-56798b757f-rv42r" Nov 29 08:05:27 crc kubenswrapper[4795]: I1129 08:05:27.319760 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b46442f-1cdc-4645-99e0-eb658f59600b-ovsdbserver-sb\") pod \"dnsmasq-dns-56798b757f-rv42r\" (UID: \"2b46442f-1cdc-4645-99e0-eb658f59600b\") " pod="openstack/dnsmasq-dns-56798b757f-rv42r" Nov 29 08:05:27 crc kubenswrapper[4795]: I1129 08:05:27.321276 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b46442f-1cdc-4645-99e0-eb658f59600b-config\") pod \"dnsmasq-dns-56798b757f-rv42r\" (UID: \"2b46442f-1cdc-4645-99e0-eb658f59600b\") " pod="openstack/dnsmasq-dns-56798b757f-rv42r" Nov 29 08:05:27 crc kubenswrapper[4795]: I1129 08:05:27.322488 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b46442f-1cdc-4645-99e0-eb658f59600b-dns-svc\") pod \"dnsmasq-dns-56798b757f-rv42r\" (UID: \"2b46442f-1cdc-4645-99e0-eb658f59600b\") " 
pod="openstack/dnsmasq-dns-56798b757f-rv42r" Nov 29 08:05:27 crc kubenswrapper[4795]: I1129 08:05:27.473351 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0c3141e-5bf7-49c4-80fd-2f0d0b361491-config" (OuterVolumeSpecName: "config") pod "c0c3141e-5bf7-49c4-80fd-2f0d0b361491" (UID: "c0c3141e-5bf7-49c4-80fd-2f0d0b361491"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:05:27 crc kubenswrapper[4795]: I1129 08:05:27.484331 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8hwr\" (UniqueName: \"kubernetes.io/projected/2b46442f-1cdc-4645-99e0-eb658f59600b-kube-api-access-v8hwr\") pod \"dnsmasq-dns-56798b757f-rv42r\" (UID: \"2b46442f-1cdc-4645-99e0-eb658f59600b\") " pod="openstack/dnsmasq-dns-56798b757f-rv42r" Nov 29 08:05:27 crc kubenswrapper[4795]: I1129 08:05:27.523503 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"28437f9f-e92e-46d7-9ffb-fcdda5dea25e","Type":"ContainerStarted","Data":"2c167e2e89455c38f11e0820116f9106a162d2e9cb1274324c31bed4930b74e2"} Nov 29 08:05:27 crc kubenswrapper[4795]: E1129 08:05:27.588954 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-fdl5k" podUID="f6dff0cb-a174-4227-ad82-21a12aee68f5" Nov 29 08:05:27 crc kubenswrapper[4795]: I1129 08:05:27.660062 4795 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c0c3141e-5bf7-49c4-80fd-2f0d0b361491-config\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:27 crc kubenswrapper[4795]: I1129 08:05:27.667274 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56798b757f-rv42r" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.044195 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=88.368772226 podStartE2EDuration="2m12.044173112s" podCreationTimestamp="2025-11-29 08:03:16 +0000 UTC" firstStartedPulling="2025-11-29 08:03:50.95016181 +0000 UTC m=+1476.925737600" lastFinishedPulling="2025-11-29 08:04:34.625562696 +0000 UTC m=+1520.601138486" observedRunningTime="2025-11-29 08:05:27.806062453 +0000 UTC m=+1573.781638263" watchObservedRunningTime="2025-11-29 08:05:28.044173112 +0000 UTC m=+1574.019748912" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.103028 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.105816 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.111203 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qkzd6" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.115832 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.116064 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.161837 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-rv42r"] Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.238006 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9787cd2-4259-4322-a95d-e3fa30e1a4b6-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"e9787cd2-4259-4322-a95d-e3fa30e1a4b6\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.238072 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"e9787cd2-4259-4322-a95d-e3fa30e1a4b6\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.238091 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e9787cd2-4259-4322-a95d-e3fa30e1a4b6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e9787cd2-4259-4322-a95d-e3fa30e1a4b6\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.238121 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqfpt\" (UniqueName: \"kubernetes.io/projected/e9787cd2-4259-4322-a95d-e3fa30e1a4b6-kube-api-access-bqfpt\") pod \"glance-default-external-api-0\" (UID: \"e9787cd2-4259-4322-a95d-e3fa30e1a4b6\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.238197 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9787cd2-4259-4322-a95d-e3fa30e1a4b6-config-data\") pod \"glance-default-external-api-0\" (UID: \"e9787cd2-4259-4322-a95d-e3fa30e1a4b6\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.238268 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e9787cd2-4259-4322-a95d-e3fa30e1a4b6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e9787cd2-4259-4322-a95d-e3fa30e1a4b6\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.238320 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9787cd2-4259-4322-a95d-e3fa30e1a4b6-logs\") pod \"glance-default-external-api-0\" (UID: \"e9787cd2-4259-4322-a95d-e3fa30e1a4b6\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.252054 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.335789 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b6c948c7-k8hff"] Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.338027 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b6c948c7-k8hff" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.340816 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9787cd2-4259-4322-a95d-e3fa30e1a4b6-config-data\") pod \"glance-default-external-api-0\" (UID: \"e9787cd2-4259-4322-a95d-e3fa30e1a4b6\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.340906 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9787cd2-4259-4322-a95d-e3fa30e1a4b6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e9787cd2-4259-4322-a95d-e3fa30e1a4b6\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.340955 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9787cd2-4259-4322-a95d-e3fa30e1a4b6-logs\") pod \"glance-default-external-api-0\" (UID: \"e9787cd2-4259-4322-a95d-e3fa30e1a4b6\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.341025 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9787cd2-4259-4322-a95d-e3fa30e1a4b6-scripts\") pod \"glance-default-external-api-0\" (UID: \"e9787cd2-4259-4322-a95d-e3fa30e1a4b6\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.341057 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"e9787cd2-4259-4322-a95d-e3fa30e1a4b6\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: 
I1129 08:05:28.341077 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e9787cd2-4259-4322-a95d-e3fa30e1a4b6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e9787cd2-4259-4322-a95d-e3fa30e1a4b6\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.341099 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqfpt\" (UniqueName: \"kubernetes.io/projected/e9787cd2-4259-4322-a95d-e3fa30e1a4b6-kube-api-access-bqfpt\") pod \"glance-default-external-api-0\" (UID: \"e9787cd2-4259-4322-a95d-e3fa30e1a4b6\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.346574 4795 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"e9787cd2-4259-4322-a95d-e3fa30e1a4b6\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.350552 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9787cd2-4259-4322-a95d-e3fa30e1a4b6-logs\") pod \"glance-default-external-api-0\" (UID: \"e9787cd2-4259-4322-a95d-e3fa30e1a4b6\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.350851 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9787cd2-4259-4322-a95d-e3fa30e1a4b6-config-data\") pod \"glance-default-external-api-0\" (UID: \"e9787cd2-4259-4322-a95d-e3fa30e1a4b6\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.350868 4795 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e9787cd2-4259-4322-a95d-e3fa30e1a4b6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e9787cd2-4259-4322-a95d-e3fa30e1a4b6\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.371738 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9787cd2-4259-4322-a95d-e3fa30e1a4b6-scripts\") pod \"glance-default-external-api-0\" (UID: \"e9787cd2-4259-4322-a95d-e3fa30e1a4b6\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.376566 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9787cd2-4259-4322-a95d-e3fa30e1a4b6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e9787cd2-4259-4322-a95d-e3fa30e1a4b6\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.385886 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b6c948c7-k8hff"] Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.404879 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqfpt\" (UniqueName: \"kubernetes.io/projected/e9787cd2-4259-4322-a95d-e3fa30e1a4b6-kube-api-access-bqfpt\") pod \"glance-default-external-api-0\" (UID: \"e9787cd2-4259-4322-a95d-e3fa30e1a4b6\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.408879 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5bdf57676-nn72s"] Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.411073 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5bdf57676-nn72s" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.418327 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.418614 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-ktbwf" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.418786 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.418982 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.420020 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5bdf57676-nn72s"] Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.447398 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4783c203-7112-41d2-9f0e-50656d484cf3-config\") pod \"dnsmasq-dns-b6c948c7-k8hff\" (UID: \"4783c203-7112-41d2-9f0e-50656d484cf3\") " pod="openstack/dnsmasq-dns-b6c948c7-k8hff" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.447516 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4783c203-7112-41d2-9f0e-50656d484cf3-ovsdbserver-sb\") pod \"dnsmasq-dns-b6c948c7-k8hff\" (UID: \"4783c203-7112-41d2-9f0e-50656d484cf3\") " pod="openstack/dnsmasq-dns-b6c948c7-k8hff" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.447614 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4783c203-7112-41d2-9f0e-50656d484cf3-ovsdbserver-nb\") pod \"dnsmasq-dns-b6c948c7-k8hff\" (UID: 
\"4783c203-7112-41d2-9f0e-50656d484cf3\") " pod="openstack/dnsmasq-dns-b6c948c7-k8hff" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.447855 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmqhb\" (UniqueName: \"kubernetes.io/projected/4783c203-7112-41d2-9f0e-50656d484cf3-kube-api-access-cmqhb\") pod \"dnsmasq-dns-b6c948c7-k8hff\" (UID: \"4783c203-7112-41d2-9f0e-50656d484cf3\") " pod="openstack/dnsmasq-dns-b6c948c7-k8hff" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.447902 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4783c203-7112-41d2-9f0e-50656d484cf3-dns-svc\") pod \"dnsmasq-dns-b6c948c7-k8hff\" (UID: \"4783c203-7112-41d2-9f0e-50656d484cf3\") " pod="openstack/dnsmasq-dns-b6c948c7-k8hff" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.544841 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"e9787cd2-4259-4322-a95d-e3fa30e1a4b6\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.557114 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ac907bde-e98c-4e4f-aa14-e105a4fe885c-httpd-config\") pod \"neutron-5bdf57676-nn72s\" (UID: \"ac907bde-e98c-4e4f-aa14-e105a4fe885c\") " pod="openstack/neutron-5bdf57676-nn72s" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.557266 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4783c203-7112-41d2-9f0e-50656d484cf3-ovsdbserver-sb\") pod \"dnsmasq-dns-b6c948c7-k8hff\" (UID: \"4783c203-7112-41d2-9f0e-50656d484cf3\") " 
pod="openstack/dnsmasq-dns-b6c948c7-k8hff" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.557351 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ac907bde-e98c-4e4f-aa14-e105a4fe885c-config\") pod \"neutron-5bdf57676-nn72s\" (UID: \"ac907bde-e98c-4e4f-aa14-e105a4fe885c\") " pod="openstack/neutron-5bdf57676-nn72s" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.557424 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac907bde-e98c-4e4f-aa14-e105a4fe885c-combined-ca-bundle\") pod \"neutron-5bdf57676-nn72s\" (UID: \"ac907bde-e98c-4e4f-aa14-e105a4fe885c\") " pod="openstack/neutron-5bdf57676-nn72s" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.557454 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac907bde-e98c-4e4f-aa14-e105a4fe885c-ovndb-tls-certs\") pod \"neutron-5bdf57676-nn72s\" (UID: \"ac907bde-e98c-4e4f-aa14-e105a4fe885c\") " pod="openstack/neutron-5bdf57676-nn72s" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.557541 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4783c203-7112-41d2-9f0e-50656d484cf3-ovsdbserver-nb\") pod \"dnsmasq-dns-b6c948c7-k8hff\" (UID: \"4783c203-7112-41d2-9f0e-50656d484cf3\") " pod="openstack/dnsmasq-dns-b6c948c7-k8hff" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.557763 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb8nn\" (UniqueName: \"kubernetes.io/projected/ac907bde-e98c-4e4f-aa14-e105a4fe885c-kube-api-access-fb8nn\") pod \"neutron-5bdf57676-nn72s\" (UID: \"ac907bde-e98c-4e4f-aa14-e105a4fe885c\") " 
pod="openstack/neutron-5bdf57676-nn72s" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.557793 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmqhb\" (UniqueName: \"kubernetes.io/projected/4783c203-7112-41d2-9f0e-50656d484cf3-kube-api-access-cmqhb\") pod \"dnsmasq-dns-b6c948c7-k8hff\" (UID: \"4783c203-7112-41d2-9f0e-50656d484cf3\") " pod="openstack/dnsmasq-dns-b6c948c7-k8hff" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.557850 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4783c203-7112-41d2-9f0e-50656d484cf3-dns-svc\") pod \"dnsmasq-dns-b6c948c7-k8hff\" (UID: \"4783c203-7112-41d2-9f0e-50656d484cf3\") " pod="openstack/dnsmasq-dns-b6c948c7-k8hff" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.557890 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4783c203-7112-41d2-9f0e-50656d484cf3-config\") pod \"dnsmasq-dns-b6c948c7-k8hff\" (UID: \"4783c203-7112-41d2-9f0e-50656d484cf3\") " pod="openstack/dnsmasq-dns-b6c948c7-k8hff" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.559131 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4783c203-7112-41d2-9f0e-50656d484cf3-config\") pod \"dnsmasq-dns-b6c948c7-k8hff\" (UID: \"4783c203-7112-41d2-9f0e-50656d484cf3\") " pod="openstack/dnsmasq-dns-b6c948c7-k8hff" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.559740 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4783c203-7112-41d2-9f0e-50656d484cf3-ovsdbserver-sb\") pod \"dnsmasq-dns-b6c948c7-k8hff\" (UID: \"4783c203-7112-41d2-9f0e-50656d484cf3\") " pod="openstack/dnsmasq-dns-b6c948c7-k8hff" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.560299 4795 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4783c203-7112-41d2-9f0e-50656d484cf3-ovsdbserver-nb\") pod \"dnsmasq-dns-b6c948c7-k8hff\" (UID: \"4783c203-7112-41d2-9f0e-50656d484cf3\") " pod="openstack/dnsmasq-dns-b6c948c7-k8hff" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.562374 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4783c203-7112-41d2-9f0e-50656d484cf3-dns-svc\") pod \"dnsmasq-dns-b6c948c7-k8hff\" (UID: \"4783c203-7112-41d2-9f0e-50656d484cf3\") " pod="openstack/dnsmasq-dns-b6c948c7-k8hff" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.577748 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.579914 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.586049 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.588717 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jf2ds" event={"ID":"20682aa9-7d99-41f5-9214-2c08cb1533ec","Type":"ContainerStarted","Data":"8d764fe84bb5442819e04f7a14c354fa2a8f9df310b70eb300cff428ab13603f"} Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.589424 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.609435 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b6c948c7-k8hff"] Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.611013 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-48c6t" event={"ID":"f63e39cf-970a-40df-a823-d1e60521e702","Type":"ContainerStarted","Data":"a9ee5173c048d905ae9a3615e56ff08a24b3e21afa91353ea63ac1c93120003e"} Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.611220 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmqhb\" (UniqueName: \"kubernetes.io/projected/4783c203-7112-41d2-9f0e-50656d484cf3-kube-api-access-cmqhb\") pod \"dnsmasq-dns-b6c948c7-k8hff\" (UID: \"4783c203-7112-41d2-9f0e-50656d484cf3\") " pod="openstack/dnsmasq-dns-b6c948c7-k8hff" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.620373 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.639348 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-m6xrq"] Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.641799 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-m6xrq" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.645741 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.659276 4795 generic.go:334] "Generic (PLEG): container finished" podID="19557359-2cdd-493b-a9ad-0770bf37206e" containerID="e63ce49ee7e31529506ea8fcd72ab771eccd3d29ca7936226fbaeaea5c48eba5" exitCode=0 Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.659390 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8b2xv" event={"ID":"19557359-2cdd-493b-a9ad-0770bf37206e","Type":"ContainerDied","Data":"e63ce49ee7e31529506ea8fcd72ab771eccd3d29ca7936226fbaeaea5c48eba5"} Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.662951 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb8nn\" (UniqueName: \"kubernetes.io/projected/ac907bde-e98c-4e4f-aa14-e105a4fe885c-kube-api-access-fb8nn\") pod \"neutron-5bdf57676-nn72s\" (UID: \"ac907bde-e98c-4e4f-aa14-e105a4fe885c\") " pod="openstack/neutron-5bdf57676-nn72s" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.663347 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52a6f349-88f4-429e-8c73-143752c8e249-scripts\") pod \"glance-default-internal-api-0\" (UID: \"52a6f349-88f4-429e-8c73-143752c8e249\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.663400 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ac907bde-e98c-4e4f-aa14-e105a4fe885c-httpd-config\") pod \"neutron-5bdf57676-nn72s\" (UID: \"ac907bde-e98c-4e4f-aa14-e105a4fe885c\") " pod="openstack/neutron-5bdf57676-nn72s" Nov 29 08:05:28 crc 
kubenswrapper[4795]: I1129 08:05:28.663432 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/52a6f349-88f4-429e-8c73-143752c8e249-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"52a6f349-88f4-429e-8c73-143752c8e249\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.663458 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52a6f349-88f4-429e-8c73-143752c8e249-config-data\") pod \"glance-default-internal-api-0\" (UID: \"52a6f349-88f4-429e-8c73-143752c8e249\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.663514 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52a6f349-88f4-429e-8c73-143752c8e249-logs\") pod \"glance-default-internal-api-0\" (UID: \"52a6f349-88f4-429e-8c73-143752c8e249\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.663539 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ac907bde-e98c-4e4f-aa14-e105a4fe885c-config\") pod \"neutron-5bdf57676-nn72s\" (UID: \"ac907bde-e98c-4e4f-aa14-e105a4fe885c\") " pod="openstack/neutron-5bdf57676-nn72s" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.663561 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"52a6f349-88f4-429e-8c73-143752c8e249\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.663605 4795 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac907bde-e98c-4e4f-aa14-e105a4fe885c-combined-ca-bundle\") pod \"neutron-5bdf57676-nn72s\" (UID: \"ac907bde-e98c-4e4f-aa14-e105a4fe885c\") " pod="openstack/neutron-5bdf57676-nn72s" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.663645 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac907bde-e98c-4e4f-aa14-e105a4fe885c-ovndb-tls-certs\") pod \"neutron-5bdf57676-nn72s\" (UID: \"ac907bde-e98c-4e4f-aa14-e105a4fe885c\") " pod="openstack/neutron-5bdf57676-nn72s" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.663689 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mf9c\" (UniqueName: \"kubernetes.io/projected/52a6f349-88f4-429e-8c73-143752c8e249-kube-api-access-4mf9c\") pod \"glance-default-internal-api-0\" (UID: \"52a6f349-88f4-429e-8c73-143752c8e249\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.663717 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52a6f349-88f4-429e-8c73-143752c8e249-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"52a6f349-88f4-429e-8c73-143752c8e249\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.672091 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-m6xrq"] Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.676946 4795 generic.go:334] "Generic (PLEG): container finished" podID="b5c53f2b-bb1e-40be-9ca4-40b2c63545e4" containerID="6fbb270eea863dd4902d649d7b81983afb166cd4e4daa0c5fbe9a60c56cf963c" exitCode=0 Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 
08:05:28.678921 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f2mxf" event={"ID":"b5c53f2b-bb1e-40be-9ca4-40b2c63545e4","Type":"ContainerDied","Data":"6fbb270eea863dd4902d649d7b81983afb166cd4e4daa0c5fbe9a60c56cf963c"} Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.692609 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-jf2ds" podStartSLOduration=4.600520651 podStartE2EDuration="1m0.692571171s" podCreationTimestamp="2025-11-29 08:04:28 +0000 UTC" firstStartedPulling="2025-11-29 08:04:30.249614561 +0000 UTC m=+1516.225190351" lastFinishedPulling="2025-11-29 08:05:26.341665081 +0000 UTC m=+1572.317240871" observedRunningTime="2025-11-29 08:05:28.645244945 +0000 UTC m=+1574.620820735" watchObservedRunningTime="2025-11-29 08:05:28.692571171 +0000 UTC m=+1574.668146961" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.750295 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ac907bde-e98c-4e4f-aa14-e105a4fe885c-config\") pod \"neutron-5bdf57676-nn72s\" (UID: \"ac907bde-e98c-4e4f-aa14-e105a4fe885c\") " pod="openstack/neutron-5bdf57676-nn72s" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.751089 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b6c948c7-k8hff" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.751894 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb8nn\" (UniqueName: \"kubernetes.io/projected/ac907bde-e98c-4e4f-aa14-e105a4fe885c-kube-api-access-fb8nn\") pod \"neutron-5bdf57676-nn72s\" (UID: \"ac907bde-e98c-4e4f-aa14-e105a4fe885c\") " pod="openstack/neutron-5bdf57676-nn72s" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.753188 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac907bde-e98c-4e4f-aa14-e105a4fe885c-combined-ca-bundle\") pod \"neutron-5bdf57676-nn72s\" (UID: \"ac907bde-e98c-4e4f-aa14-e105a4fe885c\") " pod="openstack/neutron-5bdf57676-nn72s" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.783201 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe89b8b3-85af-4550-a246-b092a5f2f233-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-m6xrq\" (UID: \"fe89b8b3-85af-4550-a246-b092a5f2f233\") " pod="openstack/dnsmasq-dns-5ccc5c4795-m6xrq" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.784303 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52a6f349-88f4-429e-8c73-143752c8e249-scripts\") pod \"glance-default-internal-api-0\" (UID: \"52a6f349-88f4-429e-8c73-143752c8e249\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.785557 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe89b8b3-85af-4550-a246-b092a5f2f233-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-m6xrq\" (UID: \"fe89b8b3-85af-4550-a246-b092a5f2f233\") " 
pod="openstack/dnsmasq-dns-5ccc5c4795-m6xrq" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.785711 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe89b8b3-85af-4550-a246-b092a5f2f233-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-m6xrq\" (UID: \"fe89b8b3-85af-4550-a246-b092a5f2f233\") " pod="openstack/dnsmasq-dns-5ccc5c4795-m6xrq" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.787914 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkplf\" (UniqueName: \"kubernetes.io/projected/fe89b8b3-85af-4550-a246-b092a5f2f233-kube-api-access-xkplf\") pod \"dnsmasq-dns-5ccc5c4795-m6xrq\" (UID: \"fe89b8b3-85af-4550-a246-b092a5f2f233\") " pod="openstack/dnsmasq-dns-5ccc5c4795-m6xrq" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.783869 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ac907bde-e98c-4e4f-aa14-e105a4fe885c-httpd-config\") pod \"neutron-5bdf57676-nn72s\" (UID: \"ac907bde-e98c-4e4f-aa14-e105a4fe885c\") " pod="openstack/neutron-5bdf57676-nn72s" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.791685 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/52a6f349-88f4-429e-8c73-143752c8e249-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"52a6f349-88f4-429e-8c73-143752c8e249\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.792046 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52a6f349-88f4-429e-8c73-143752c8e249-config-data\") pod \"glance-default-internal-api-0\" (UID: \"52a6f349-88f4-429e-8c73-143752c8e249\") " pod="openstack/glance-default-internal-api-0" Nov 29 
08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.792404 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52a6f349-88f4-429e-8c73-143752c8e249-logs\") pod \"glance-default-internal-api-0\" (UID: \"52a6f349-88f4-429e-8c73-143752c8e249\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.792719 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"52a6f349-88f4-429e-8c73-143752c8e249\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.793691 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mf9c\" (UniqueName: \"kubernetes.io/projected/52a6f349-88f4-429e-8c73-143752c8e249-kube-api-access-4mf9c\") pod \"glance-default-internal-api-0\" (UID: \"52a6f349-88f4-429e-8c73-143752c8e249\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.793807 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe89b8b3-85af-4550-a246-b092a5f2f233-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-m6xrq\" (UID: \"fe89b8b3-85af-4550-a246-b092a5f2f233\") " pod="openstack/dnsmasq-dns-5ccc5c4795-m6xrq" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.793889 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52a6f349-88f4-429e-8c73-143752c8e249-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"52a6f349-88f4-429e-8c73-143752c8e249\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 
08:05:28.794124 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe89b8b3-85af-4550-a246-b092a5f2f233-config\") pod \"dnsmasq-dns-5ccc5c4795-m6xrq\" (UID: \"fe89b8b3-85af-4550-a246-b092a5f2f233\") " pod="openstack/dnsmasq-dns-5ccc5c4795-m6xrq" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.795788 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/52a6f349-88f4-429e-8c73-143752c8e249-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"52a6f349-88f4-429e-8c73-143752c8e249\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.801673 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52a6f349-88f4-429e-8c73-143752c8e249-scripts\") pod \"glance-default-internal-api-0\" (UID: \"52a6f349-88f4-429e-8c73-143752c8e249\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.803054 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52a6f349-88f4-429e-8c73-143752c8e249-logs\") pod \"glance-default-internal-api-0\" (UID: \"52a6f349-88f4-429e-8c73-143752c8e249\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.803484 4795 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"52a6f349-88f4-429e-8c73-143752c8e249\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.811731 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/52a6f349-88f4-429e-8c73-143752c8e249-config-data\") pod \"glance-default-internal-api-0\" (UID: \"52a6f349-88f4-429e-8c73-143752c8e249\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.814131 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52a6f349-88f4-429e-8c73-143752c8e249-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"52a6f349-88f4-429e-8c73-143752c8e249\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.819432 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac907bde-e98c-4e4f-aa14-e105a4fe885c-ovndb-tls-certs\") pod \"neutron-5bdf57676-nn72s\" (UID: \"ac907bde-e98c-4e4f-aa14-e105a4fe885c\") " pod="openstack/neutron-5bdf57676-nn72s" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.861160 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mf9c\" (UniqueName: \"kubernetes.io/projected/52a6f349-88f4-429e-8c73-143752c8e249-kube-api-access-4mf9c\") pod \"glance-default-internal-api-0\" (UID: \"52a6f349-88f4-429e-8c73-143752c8e249\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.896654 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe89b8b3-85af-4550-a246-b092a5f2f233-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-m6xrq\" (UID: \"fe89b8b3-85af-4550-a246-b092a5f2f233\") " pod="openstack/dnsmasq-dns-5ccc5c4795-m6xrq" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.897139 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/fe89b8b3-85af-4550-a246-b092a5f2f233-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-m6xrq\" (UID: \"fe89b8b3-85af-4550-a246-b092a5f2f233\") " pod="openstack/dnsmasq-dns-5ccc5c4795-m6xrq" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.897242 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkplf\" (UniqueName: \"kubernetes.io/projected/fe89b8b3-85af-4550-a246-b092a5f2f233-kube-api-access-xkplf\") pod \"dnsmasq-dns-5ccc5c4795-m6xrq\" (UID: \"fe89b8b3-85af-4550-a246-b092a5f2f233\") " pod="openstack/dnsmasq-dns-5ccc5c4795-m6xrq" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.897476 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe89b8b3-85af-4550-a246-b092a5f2f233-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-m6xrq\" (UID: \"fe89b8b3-85af-4550-a246-b092a5f2f233\") " pod="openstack/dnsmasq-dns-5ccc5c4795-m6xrq" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.897741 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe89b8b3-85af-4550-a246-b092a5f2f233-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-m6xrq\" (UID: \"fe89b8b3-85af-4550-a246-b092a5f2f233\") " pod="openstack/dnsmasq-dns-5ccc5c4795-m6xrq" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.897749 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe89b8b3-85af-4550-a246-b092a5f2f233-config\") pod \"dnsmasq-dns-5ccc5c4795-m6xrq\" (UID: \"fe89b8b3-85af-4550-a246-b092a5f2f233\") " pod="openstack/dnsmasq-dns-5ccc5c4795-m6xrq" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.899138 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/fe89b8b3-85af-4550-a246-b092a5f2f233-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-m6xrq\" (UID: \"fe89b8b3-85af-4550-a246-b092a5f2f233\") " pod="openstack/dnsmasq-dns-5ccc5c4795-m6xrq" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.900798 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe89b8b3-85af-4550-a246-b092a5f2f233-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-m6xrq\" (UID: \"fe89b8b3-85af-4550-a246-b092a5f2f233\") " pod="openstack/dnsmasq-dns-5ccc5c4795-m6xrq" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.901512 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe89b8b3-85af-4550-a246-b092a5f2f233-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-m6xrq\" (UID: \"fe89b8b3-85af-4550-a246-b092a5f2f233\") " pod="openstack/dnsmasq-dns-5ccc5c4795-m6xrq" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.902348 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe89b8b3-85af-4550-a246-b092a5f2f233-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-m6xrq\" (UID: \"fe89b8b3-85af-4550-a246-b092a5f2f233\") " pod="openstack/dnsmasq-dns-5ccc5c4795-m6xrq" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.903325 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe89b8b3-85af-4550-a246-b092a5f2f233-config\") pod \"dnsmasq-dns-5ccc5c4795-m6xrq\" (UID: \"fe89b8b3-85af-4550-a246-b092a5f2f233\") " pod="openstack/dnsmasq-dns-5ccc5c4795-m6xrq" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.908900 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"52a6f349-88f4-429e-8c73-143752c8e249\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:05:28 crc kubenswrapper[4795]: I1129 08:05:28.934718 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkplf\" (UniqueName: \"kubernetes.io/projected/fe89b8b3-85af-4550-a246-b092a5f2f233-kube-api-access-xkplf\") pod \"dnsmasq-dns-5ccc5c4795-m6xrq\" (UID: \"fe89b8b3-85af-4550-a246-b092a5f2f233\") " pod="openstack/dnsmasq-dns-5ccc5c4795-m6xrq" Nov 29 08:05:29 crc kubenswrapper[4795]: I1129 08:05:29.091020 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5bdf57676-nn72s" Nov 29 08:05:29 crc kubenswrapper[4795]: I1129 08:05:29.141322 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 08:05:29 crc kubenswrapper[4795]: I1129 08:05:29.195581 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-m6xrq" Nov 29 08:05:30 crc kubenswrapper[4795]: I1129 08:05:30.739338 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-48c6t" event={"ID":"f63e39cf-970a-40df-a823-d1e60521e702","Type":"ContainerStarted","Data":"bfb19c9ce44284ee4afdf4be4136a2041c2065b1c9beb5337b6bb10989b8dd0c"} Nov 29 08:05:30 crc kubenswrapper[4795]: I1129 08:05:30.778424 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-48c6t" podStartSLOduration=4.778396664 podStartE2EDuration="4.778396664s" podCreationTimestamp="2025-11-29 08:05:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:05:30.767857994 +0000 UTC m=+1576.743433784" watchObservedRunningTime="2025-11-29 08:05:30.778396664 +0000 UTC m=+1576.753972464" Nov 29 08:05:31 crc kubenswrapper[4795]: I1129 08:05:31.039835 4795 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-rv42r"] Nov 29 08:05:31 crc kubenswrapper[4795]: W1129 08:05:31.410248 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac907bde_e98c_4e4f_aa14_e105a4fe885c.slice/crio-193e44cb247772cb177a8b1373506dea540472cd63fac2dddb52c1a27bfe046d WatchSource:0}: Error finding container 193e44cb247772cb177a8b1373506dea540472cd63fac2dddb52c1a27bfe046d: Status 404 returned error can't find the container with id 193e44cb247772cb177a8b1373506dea540472cd63fac2dddb52c1a27bfe046d Nov 29 08:05:31 crc kubenswrapper[4795]: I1129 08:05:31.412248 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5bdf57676-nn72s"] Nov 29 08:05:31 crc kubenswrapper[4795]: I1129 08:05:31.762644 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bdf57676-nn72s" event={"ID":"ac907bde-e98c-4e4f-aa14-e105a4fe885c","Type":"ContainerStarted","Data":"193e44cb247772cb177a8b1373506dea540472cd63fac2dddb52c1a27bfe046d"} Nov 29 08:05:31 crc kubenswrapper[4795]: I1129 08:05:31.780165 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6dbced2-2ca2-4189-aad3-7a872ab6209c","Type":"ContainerStarted","Data":"bfa68787ef29f44c70901173a1f1a2dc9583802c7d38a555e5dfe2f938b751b9"} Nov 29 08:05:31 crc kubenswrapper[4795]: I1129 08:05:31.785157 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-m6xrq"] Nov 29 08:05:31 crc kubenswrapper[4795]: I1129 08:05:31.801879 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8b2xv" event={"ID":"19557359-2cdd-493b-a9ad-0770bf37206e","Type":"ContainerStarted","Data":"5ebe0a03b9d7ffc369722758e67d8d71695fb3dea054e5cf4126807196f14a47"} Nov 29 08:05:31 crc kubenswrapper[4795]: I1129 08:05:31.841725 4795 generic.go:334] "Generic (PLEG): container finished" 
podID="b5c53f2b-bb1e-40be-9ca4-40b2c63545e4" containerID="c2e0a8a3dd7cd0f05fa15fe7691c2a8ce25a36c2cd81b4ebbc47a85ce8f18aff" exitCode=0 Nov 29 08:05:31 crc kubenswrapper[4795]: I1129 08:05:31.847473 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f2mxf" event={"ID":"b5c53f2b-bb1e-40be-9ca4-40b2c63545e4","Type":"ContainerDied","Data":"c2e0a8a3dd7cd0f05fa15fe7691c2a8ce25a36c2cd81b4ebbc47a85ce8f18aff"} Nov 29 08:05:31 crc kubenswrapper[4795]: I1129 08:05:31.848479 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8b2xv" podStartSLOduration=42.382829155 podStartE2EDuration="51.848454412s" podCreationTimestamp="2025-11-29 08:04:40 +0000 UTC" firstStartedPulling="2025-11-29 08:05:21.239325159 +0000 UTC m=+1567.214900959" lastFinishedPulling="2025-11-29 08:05:30.704950426 +0000 UTC m=+1576.680526216" observedRunningTime="2025-11-29 08:05:31.847078732 +0000 UTC m=+1577.822654522" watchObservedRunningTime="2025-11-29 08:05:31.848454412 +0000 UTC m=+1577.824030202" Nov 29 08:05:31 crc kubenswrapper[4795]: I1129 08:05:31.995007 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56798b757f-rv42r" event={"ID":"2b46442f-1cdc-4645-99e0-eb658f59600b","Type":"ContainerStarted","Data":"dff4fbbf637a3a95593da2029f6b74153d57b1dfd22b18126ed5707e101ceef8"} Nov 29 08:05:32 crc kubenswrapper[4795]: I1129 08:05:32.081836 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b6c948c7-k8hff"] Nov 29 08:05:32 crc kubenswrapper[4795]: I1129 08:05:32.229106 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 08:05:32 crc kubenswrapper[4795]: I1129 08:05:32.261197 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 08:05:32 crc kubenswrapper[4795]: I1129 08:05:32.606961 4795 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 08:05:32 crc kubenswrapper[4795]: I1129 08:05:32.707744 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 08:05:33 crc kubenswrapper[4795]: I1129 08:05:33.019566 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e9787cd2-4259-4322-a95d-e3fa30e1a4b6","Type":"ContainerStarted","Data":"f1fc52c035ae5884eff83585be7a370156ede3c3a83b7f852b2db8471b6095c1"} Nov 29 08:05:33 crc kubenswrapper[4795]: I1129 08:05:33.022391 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"52a6f349-88f4-429e-8c73-143752c8e249","Type":"ContainerStarted","Data":"92034690c8d97bbee17ff54bcd99671019e5467a1a8d76648ae83254b3353e2e"} Nov 29 08:05:33 crc kubenswrapper[4795]: I1129 08:05:33.024279 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-m6xrq" event={"ID":"fe89b8b3-85af-4550-a246-b092a5f2f233","Type":"ContainerStarted","Data":"7bc0a83de9bce7740ee41ff52910e667c268e267857c79e5f471da536bc145f9"} Nov 29 08:05:33 crc kubenswrapper[4795]: I1129 08:05:33.024311 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-m6xrq" event={"ID":"fe89b8b3-85af-4550-a246-b092a5f2f233","Type":"ContainerStarted","Data":"aa4d8c22036cc597ada6aecc0b49c9de5a52991757590cdec3601ec44b5c1f31"} Nov 29 08:05:33 crc kubenswrapper[4795]: I1129 08:05:33.027270 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6c948c7-k8hff" event={"ID":"4783c203-7112-41d2-9f0e-50656d484cf3","Type":"ContainerStarted","Data":"608801ec57963470c793190e5fb3893c2e8184f25ad3e5e86c205e433f146d1f"} Nov 29 08:05:33 crc kubenswrapper[4795]: I1129 08:05:33.029250 4795 generic.go:334] "Generic (PLEG): container finished" podID="2b46442f-1cdc-4645-99e0-eb658f59600b" 
containerID="00db8525fd63a8b8b4e92652e1ef1e6e99d01b1606feb7acafb17085a90646a5" exitCode=0 Nov 29 08:05:33 crc kubenswrapper[4795]: I1129 08:05:33.029335 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56798b757f-rv42r" event={"ID":"2b46442f-1cdc-4645-99e0-eb658f59600b","Type":"ContainerDied","Data":"00db8525fd63a8b8b4e92652e1ef1e6e99d01b1606feb7acafb17085a90646a5"} Nov 29 08:05:33 crc kubenswrapper[4795]: I1129 08:05:33.033116 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bdf57676-nn72s" event={"ID":"ac907bde-e98c-4e4f-aa14-e105a4fe885c","Type":"ContainerStarted","Data":"45a4c4aa94ff4f81822bbb57e1e819c08101d4d990fb49cd356761d8201d876a"} Nov 29 08:05:33 crc kubenswrapper[4795]: I1129 08:05:33.685391 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56798b757f-rv42r" Nov 29 08:05:33 crc kubenswrapper[4795]: I1129 08:05:33.806290 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b46442f-1cdc-4645-99e0-eb658f59600b-ovsdbserver-nb\") pod \"2b46442f-1cdc-4645-99e0-eb658f59600b\" (UID: \"2b46442f-1cdc-4645-99e0-eb658f59600b\") " Nov 29 08:05:33 crc kubenswrapper[4795]: I1129 08:05:33.806571 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b46442f-1cdc-4645-99e0-eb658f59600b-dns-svc\") pod \"2b46442f-1cdc-4645-99e0-eb658f59600b\" (UID: \"2b46442f-1cdc-4645-99e0-eb658f59600b\") " Nov 29 08:05:33 crc kubenswrapper[4795]: I1129 08:05:33.806789 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8hwr\" (UniqueName: \"kubernetes.io/projected/2b46442f-1cdc-4645-99e0-eb658f59600b-kube-api-access-v8hwr\") pod \"2b46442f-1cdc-4645-99e0-eb658f59600b\" (UID: \"2b46442f-1cdc-4645-99e0-eb658f59600b\") " Nov 29 08:05:33 crc 
kubenswrapper[4795]: I1129 08:05:33.806935 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b46442f-1cdc-4645-99e0-eb658f59600b-ovsdbserver-sb\") pod \"2b46442f-1cdc-4645-99e0-eb658f59600b\" (UID: \"2b46442f-1cdc-4645-99e0-eb658f59600b\") " Nov 29 08:05:33 crc kubenswrapper[4795]: I1129 08:05:33.807051 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b46442f-1cdc-4645-99e0-eb658f59600b-config\") pod \"2b46442f-1cdc-4645-99e0-eb658f59600b\" (UID: \"2b46442f-1cdc-4645-99e0-eb658f59600b\") " Nov 29 08:05:33 crc kubenswrapper[4795]: I1129 08:05:33.847130 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b46442f-1cdc-4645-99e0-eb658f59600b-kube-api-access-v8hwr" (OuterVolumeSpecName: "kube-api-access-v8hwr") pod "2b46442f-1cdc-4645-99e0-eb658f59600b" (UID: "2b46442f-1cdc-4645-99e0-eb658f59600b"). InnerVolumeSpecName "kube-api-access-v8hwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:05:33 crc kubenswrapper[4795]: I1129 08:05:33.905751 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b46442f-1cdc-4645-99e0-eb658f59600b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2b46442f-1cdc-4645-99e0-eb658f59600b" (UID: "2b46442f-1cdc-4645-99e0-eb658f59600b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:05:33 crc kubenswrapper[4795]: I1129 08:05:33.908541 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b46442f-1cdc-4645-99e0-eb658f59600b-config" (OuterVolumeSpecName: "config") pod "2b46442f-1cdc-4645-99e0-eb658f59600b" (UID: "2b46442f-1cdc-4645-99e0-eb658f59600b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:05:33 crc kubenswrapper[4795]: I1129 08:05:33.914188 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b46442f-1cdc-4645-99e0-eb658f59600b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2b46442f-1cdc-4645-99e0-eb658f59600b" (UID: "2b46442f-1cdc-4645-99e0-eb658f59600b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:05:33 crc kubenswrapper[4795]: I1129 08:05:33.917021 4795 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b46442f-1cdc-4645-99e0-eb658f59600b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:33 crc kubenswrapper[4795]: I1129 08:05:33.917063 4795 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b46442f-1cdc-4645-99e0-eb658f59600b-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:33 crc kubenswrapper[4795]: I1129 08:05:33.917077 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8hwr\" (UniqueName: \"kubernetes.io/projected/2b46442f-1cdc-4645-99e0-eb658f59600b-kube-api-access-v8hwr\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:33 crc kubenswrapper[4795]: I1129 08:05:33.917090 4795 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b46442f-1cdc-4645-99e0-eb658f59600b-config\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:33 crc kubenswrapper[4795]: I1129 08:05:33.917285 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b46442f-1cdc-4645-99e0-eb658f59600b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2b46442f-1cdc-4645-99e0-eb658f59600b" (UID: "2b46442f-1cdc-4645-99e0-eb658f59600b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:05:34 crc kubenswrapper[4795]: I1129 08:05:34.019753 4795 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b46442f-1cdc-4645-99e0-eb658f59600b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:34 crc kubenswrapper[4795]: I1129 08:05:34.053081 4795 generic.go:334] "Generic (PLEG): container finished" podID="fe89b8b3-85af-4550-a246-b092a5f2f233" containerID="7bc0a83de9bce7740ee41ff52910e667c268e267857c79e5f471da536bc145f9" exitCode=0 Nov 29 08:05:34 crc kubenswrapper[4795]: I1129 08:05:34.053137 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-m6xrq" event={"ID":"fe89b8b3-85af-4550-a246-b092a5f2f233","Type":"ContainerDied","Data":"7bc0a83de9bce7740ee41ff52910e667c268e267857c79e5f471da536bc145f9"} Nov 29 08:05:34 crc kubenswrapper[4795]: I1129 08:05:34.059886 4795 generic.go:334] "Generic (PLEG): container finished" podID="4783c203-7112-41d2-9f0e-50656d484cf3" containerID="0256a00828c9b065706a5219cde8a48ea49ec525a93fa015091b651ed2af45f5" exitCode=0 Nov 29 08:05:34 crc kubenswrapper[4795]: I1129 08:05:34.059955 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6c948c7-k8hff" event={"ID":"4783c203-7112-41d2-9f0e-50656d484cf3","Type":"ContainerDied","Data":"0256a00828c9b065706a5219cde8a48ea49ec525a93fa015091b651ed2af45f5"} Nov 29 08:05:34 crc kubenswrapper[4795]: I1129 08:05:34.064989 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56798b757f-rv42r" event={"ID":"2b46442f-1cdc-4645-99e0-eb658f59600b","Type":"ContainerDied","Data":"dff4fbbf637a3a95593da2029f6b74153d57b1dfd22b18126ed5707e101ceef8"} Nov 29 08:05:34 crc kubenswrapper[4795]: I1129 08:05:34.065042 4795 scope.go:117] "RemoveContainer" containerID="00db8525fd63a8b8b4e92652e1ef1e6e99d01b1606feb7acafb17085a90646a5" Nov 29 08:05:34 crc kubenswrapper[4795]: I1129 
08:05:34.065194 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56798b757f-rv42r" Nov 29 08:05:34 crc kubenswrapper[4795]: I1129 08:05:34.090622 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bdf57676-nn72s" event={"ID":"ac907bde-e98c-4e4f-aa14-e105a4fe885c","Type":"ContainerStarted","Data":"b502aeb549c6281a4c29dfce7ff4bb46e86bfaf86e8bd5850874871365e44283"} Nov 29 08:05:34 crc kubenswrapper[4795]: I1129 08:05:34.091469 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5bdf57676-nn72s" Nov 29 08:05:34 crc kubenswrapper[4795]: I1129 08:05:34.102320 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e9787cd2-4259-4322-a95d-e3fa30e1a4b6","Type":"ContainerStarted","Data":"23359389ee83da425d3fc6016d537d8ee0204a7b06a019a52fccb355afb9e0ee"} Nov 29 08:05:34 crc kubenswrapper[4795]: I1129 08:05:34.113123 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"52a6f349-88f4-429e-8c73-143752c8e249","Type":"ContainerStarted","Data":"8651532643bb14f08367baeeb3b68fb7c315c6a2e267c866a35387fc10610728"} Nov 29 08:05:34 crc kubenswrapper[4795]: I1129 08:05:34.159793 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5bdf57676-nn72s" podStartSLOduration=6.159768107 podStartE2EDuration="6.159768107s" podCreationTimestamp="2025-11-29 08:05:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:05:34.144571235 +0000 UTC m=+1580.120147025" watchObservedRunningTime="2025-11-29 08:05:34.159768107 +0000 UTC m=+1580.135343897" Nov 29 08:05:34 crc kubenswrapper[4795]: I1129 08:05:34.224671 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-rv42r"] Nov 29 
08:05:34 crc kubenswrapper[4795]: I1129 08:05:34.237066 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-rv42r"] Nov 29 08:05:34 crc kubenswrapper[4795]: I1129 08:05:34.434754 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b46442f-1cdc-4645-99e0-eb658f59600b" path="/var/lib/kubelet/pods/2b46442f-1cdc-4645-99e0-eb658f59600b/volumes" Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.177653 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6c948c7-k8hff" event={"ID":"4783c203-7112-41d2-9f0e-50656d484cf3","Type":"ContainerDied","Data":"608801ec57963470c793190e5fb3893c2e8184f25ad3e5e86c205e433f146d1f"} Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.177977 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="608801ec57963470c793190e5fb3893c2e8184f25ad3e5e86c205e433f146d1f" Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.222362 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b6c948c7-k8hff" Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.222421 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f2mxf" event={"ID":"b5c53f2b-bb1e-40be-9ca4-40b2c63545e4","Type":"ContainerStarted","Data":"a6fc80f3e705e6564150f04e1b69fd985697cbad439c8e4e18fd23e169f65450"} Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.305258 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-f2mxf" podStartSLOduration=34.23440884 podStartE2EDuration="38.305235329s" podCreationTimestamp="2025-11-29 08:04:57 +0000 UTC" firstStartedPulling="2025-11-29 08:05:29.890428633 +0000 UTC m=+1575.866004423" lastFinishedPulling="2025-11-29 08:05:33.961255112 +0000 UTC m=+1579.936830912" observedRunningTime="2025-11-29 08:05:35.27891012 +0000 UTC m=+1581.254485911" watchObservedRunningTime="2025-11-29 08:05:35.305235329 +0000 UTC m=+1581.280811119" Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.328703 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4783c203-7112-41d2-9f0e-50656d484cf3-ovsdbserver-nb\") pod \"4783c203-7112-41d2-9f0e-50656d484cf3\" (UID: \"4783c203-7112-41d2-9f0e-50656d484cf3\") " Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.328765 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4783c203-7112-41d2-9f0e-50656d484cf3-config\") pod \"4783c203-7112-41d2-9f0e-50656d484cf3\" (UID: \"4783c203-7112-41d2-9f0e-50656d484cf3\") " Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.328815 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4783c203-7112-41d2-9f0e-50656d484cf3-ovsdbserver-sb\") pod 
\"4783c203-7112-41d2-9f0e-50656d484cf3\" (UID: \"4783c203-7112-41d2-9f0e-50656d484cf3\") " Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.328832 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4783c203-7112-41d2-9f0e-50656d484cf3-dns-svc\") pod \"4783c203-7112-41d2-9f0e-50656d484cf3\" (UID: \"4783c203-7112-41d2-9f0e-50656d484cf3\") " Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.328883 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmqhb\" (UniqueName: \"kubernetes.io/projected/4783c203-7112-41d2-9f0e-50656d484cf3-kube-api-access-cmqhb\") pod \"4783c203-7112-41d2-9f0e-50656d484cf3\" (UID: \"4783c203-7112-41d2-9f0e-50656d484cf3\") " Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.400953 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4783c203-7112-41d2-9f0e-50656d484cf3-kube-api-access-cmqhb" (OuterVolumeSpecName: "kube-api-access-cmqhb") pod "4783c203-7112-41d2-9f0e-50656d484cf3" (UID: "4783c203-7112-41d2-9f0e-50656d484cf3"). InnerVolumeSpecName "kube-api-access-cmqhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.401329 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4783c203-7112-41d2-9f0e-50656d484cf3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4783c203-7112-41d2-9f0e-50656d484cf3" (UID: "4783c203-7112-41d2-9f0e-50656d484cf3"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.411647 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4783c203-7112-41d2-9f0e-50656d484cf3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4783c203-7112-41d2-9f0e-50656d484cf3" (UID: "4783c203-7112-41d2-9f0e-50656d484cf3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.417974 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4783c203-7112-41d2-9f0e-50656d484cf3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4783c203-7112-41d2-9f0e-50656d484cf3" (UID: "4783c203-7112-41d2-9f0e-50656d484cf3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.419732 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4783c203-7112-41d2-9f0e-50656d484cf3-config" (OuterVolumeSpecName: "config") pod "4783c203-7112-41d2-9f0e-50656d484cf3" (UID: "4783c203-7112-41d2-9f0e-50656d484cf3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.438542 4795 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4783c203-7112-41d2-9f0e-50656d484cf3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.438613 4795 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4783c203-7112-41d2-9f0e-50656d484cf3-config\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.438630 4795 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4783c203-7112-41d2-9f0e-50656d484cf3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.438639 4795 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4783c203-7112-41d2-9f0e-50656d484cf3-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.438650 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmqhb\" (UniqueName: \"kubernetes.io/projected/4783c203-7112-41d2-9f0e-50656d484cf3-kube-api-access-cmqhb\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.795477 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-bcc9b4b57-btgmc"] Nov 29 08:05:35 crc kubenswrapper[4795]: E1129 08:05:35.799478 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4783c203-7112-41d2-9f0e-50656d484cf3" containerName="init" Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.799525 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="4783c203-7112-41d2-9f0e-50656d484cf3" containerName="init" Nov 29 08:05:35 crc kubenswrapper[4795]: E1129 08:05:35.799608 4795 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="2b46442f-1cdc-4645-99e0-eb658f59600b" containerName="init" Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.799620 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b46442f-1cdc-4645-99e0-eb658f59600b" containerName="init" Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.799904 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="4783c203-7112-41d2-9f0e-50656d484cf3" containerName="init" Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.799953 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b46442f-1cdc-4645-99e0-eb658f59600b" containerName="init" Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.802009 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bcc9b4b57-btgmc" Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.809749 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.810056 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.827276 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bcc9b4b57-btgmc"] Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.846405 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwm44\" (UniqueName: \"kubernetes.io/projected/25dbe68e-5c4f-4d79-afb2-a0ac640aa889-kube-api-access-lwm44\") pod \"neutron-bcc9b4b57-btgmc\" (UID: \"25dbe68e-5c4f-4d79-afb2-a0ac640aa889\") " pod="openstack/neutron-bcc9b4b57-btgmc" Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.846476 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/25dbe68e-5c4f-4d79-afb2-a0ac640aa889-httpd-config\") pod \"neutron-bcc9b4b57-btgmc\" (UID: \"25dbe68e-5c4f-4d79-afb2-a0ac640aa889\") " pod="openstack/neutron-bcc9b4b57-btgmc" Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.846561 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25dbe68e-5c4f-4d79-afb2-a0ac640aa889-public-tls-certs\") pod \"neutron-bcc9b4b57-btgmc\" (UID: \"25dbe68e-5c4f-4d79-afb2-a0ac640aa889\") " pod="openstack/neutron-bcc9b4b57-btgmc" Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.846640 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/25dbe68e-5c4f-4d79-afb2-a0ac640aa889-config\") pod \"neutron-bcc9b4b57-btgmc\" (UID: \"25dbe68e-5c4f-4d79-afb2-a0ac640aa889\") " pod="openstack/neutron-bcc9b4b57-btgmc" Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.846664 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/25dbe68e-5c4f-4d79-afb2-a0ac640aa889-ovndb-tls-certs\") pod \"neutron-bcc9b4b57-btgmc\" (UID: \"25dbe68e-5c4f-4d79-afb2-a0ac640aa889\") " pod="openstack/neutron-bcc9b4b57-btgmc" Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.846691 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25dbe68e-5c4f-4d79-afb2-a0ac640aa889-combined-ca-bundle\") pod \"neutron-bcc9b4b57-btgmc\" (UID: \"25dbe68e-5c4f-4d79-afb2-a0ac640aa889\") " pod="openstack/neutron-bcc9b4b57-btgmc" Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.846759 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/25dbe68e-5c4f-4d79-afb2-a0ac640aa889-internal-tls-certs\") pod \"neutron-bcc9b4b57-btgmc\" (UID: \"25dbe68e-5c4f-4d79-afb2-a0ac640aa889\") " pod="openstack/neutron-bcc9b4b57-btgmc" Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.948816 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25dbe68e-5c4f-4d79-afb2-a0ac640aa889-internal-tls-certs\") pod \"neutron-bcc9b4b57-btgmc\" (UID: \"25dbe68e-5c4f-4d79-afb2-a0ac640aa889\") " pod="openstack/neutron-bcc9b4b57-btgmc" Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.949326 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwm44\" (UniqueName: \"kubernetes.io/projected/25dbe68e-5c4f-4d79-afb2-a0ac640aa889-kube-api-access-lwm44\") pod \"neutron-bcc9b4b57-btgmc\" (UID: \"25dbe68e-5c4f-4d79-afb2-a0ac640aa889\") " pod="openstack/neutron-bcc9b4b57-btgmc" Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.949361 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/25dbe68e-5c4f-4d79-afb2-a0ac640aa889-httpd-config\") pod \"neutron-bcc9b4b57-btgmc\" (UID: \"25dbe68e-5c4f-4d79-afb2-a0ac640aa889\") " pod="openstack/neutron-bcc9b4b57-btgmc" Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.949425 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25dbe68e-5c4f-4d79-afb2-a0ac640aa889-public-tls-certs\") pod \"neutron-bcc9b4b57-btgmc\" (UID: \"25dbe68e-5c4f-4d79-afb2-a0ac640aa889\") " pod="openstack/neutron-bcc9b4b57-btgmc" Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.949477 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/25dbe68e-5c4f-4d79-afb2-a0ac640aa889-config\") pod 
\"neutron-bcc9b4b57-btgmc\" (UID: \"25dbe68e-5c4f-4d79-afb2-a0ac640aa889\") " pod="openstack/neutron-bcc9b4b57-btgmc" Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.949502 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/25dbe68e-5c4f-4d79-afb2-a0ac640aa889-ovndb-tls-certs\") pod \"neutron-bcc9b4b57-btgmc\" (UID: \"25dbe68e-5c4f-4d79-afb2-a0ac640aa889\") " pod="openstack/neutron-bcc9b4b57-btgmc" Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.949526 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25dbe68e-5c4f-4d79-afb2-a0ac640aa889-combined-ca-bundle\") pod \"neutron-bcc9b4b57-btgmc\" (UID: \"25dbe68e-5c4f-4d79-afb2-a0ac640aa889\") " pod="openstack/neutron-bcc9b4b57-btgmc" Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.955543 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/25dbe68e-5c4f-4d79-afb2-a0ac640aa889-config\") pod \"neutron-bcc9b4b57-btgmc\" (UID: \"25dbe68e-5c4f-4d79-afb2-a0ac640aa889\") " pod="openstack/neutron-bcc9b4b57-btgmc" Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.960829 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25dbe68e-5c4f-4d79-afb2-a0ac640aa889-public-tls-certs\") pod \"neutron-bcc9b4b57-btgmc\" (UID: \"25dbe68e-5c4f-4d79-afb2-a0ac640aa889\") " pod="openstack/neutron-bcc9b4b57-btgmc" Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.960892 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25dbe68e-5c4f-4d79-afb2-a0ac640aa889-internal-tls-certs\") pod \"neutron-bcc9b4b57-btgmc\" (UID: \"25dbe68e-5c4f-4d79-afb2-a0ac640aa889\") " pod="openstack/neutron-bcc9b4b57-btgmc" Nov 29 08:05:35 crc 
kubenswrapper[4795]: I1129 08:05:35.961427 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25dbe68e-5c4f-4d79-afb2-a0ac640aa889-combined-ca-bundle\") pod \"neutron-bcc9b4b57-btgmc\" (UID: \"25dbe68e-5c4f-4d79-afb2-a0ac640aa889\") " pod="openstack/neutron-bcc9b4b57-btgmc" Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.963670 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/25dbe68e-5c4f-4d79-afb2-a0ac640aa889-httpd-config\") pod \"neutron-bcc9b4b57-btgmc\" (UID: \"25dbe68e-5c4f-4d79-afb2-a0ac640aa889\") " pod="openstack/neutron-bcc9b4b57-btgmc" Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.964822 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/25dbe68e-5c4f-4d79-afb2-a0ac640aa889-ovndb-tls-certs\") pod \"neutron-bcc9b4b57-btgmc\" (UID: \"25dbe68e-5c4f-4d79-afb2-a0ac640aa889\") " pod="openstack/neutron-bcc9b4b57-btgmc" Nov 29 08:05:35 crc kubenswrapper[4795]: I1129 08:05:35.976764 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwm44\" (UniqueName: \"kubernetes.io/projected/25dbe68e-5c4f-4d79-afb2-a0ac640aa889-kube-api-access-lwm44\") pod \"neutron-bcc9b4b57-btgmc\" (UID: \"25dbe68e-5c4f-4d79-afb2-a0ac640aa889\") " pod="openstack/neutron-bcc9b4b57-btgmc" Nov 29 08:05:36 crc kubenswrapper[4795]: I1129 08:05:36.139042 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-bcc9b4b57-btgmc" Nov 29 08:05:36 crc kubenswrapper[4795]: I1129 08:05:36.312178 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e9787cd2-4259-4322-a95d-e3fa30e1a4b6" containerName="glance-log" containerID="cri-o://23359389ee83da425d3fc6016d537d8ee0204a7b06a019a52fccb355afb9e0ee" gracePeriod=30 Nov 29 08:05:36 crc kubenswrapper[4795]: I1129 08:05:36.315303 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e9787cd2-4259-4322-a95d-e3fa30e1a4b6" containerName="glance-httpd" containerID="cri-o://b2c3f3c1eecfbf601635b24ea6e3d760f791333d98c107abf75dcdb7c59eeccc" gracePeriod=30 Nov 29 08:05:36 crc kubenswrapper[4795]: I1129 08:05:36.356554 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e9787cd2-4259-4322-a95d-e3fa30e1a4b6","Type":"ContainerStarted","Data":"b2c3f3c1eecfbf601635b24ea6e3d760f791333d98c107abf75dcdb7c59eeccc"} Nov 29 08:05:36 crc kubenswrapper[4795]: I1129 08:05:36.371336 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"52a6f349-88f4-429e-8c73-143752c8e249","Type":"ContainerStarted","Data":"3d62d62e35607881dedb8398c5f06dcf45da9a0349367bec3205ef89e02fef3c"} Nov 29 08:05:36 crc kubenswrapper[4795]: I1129 08:05:36.371567 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="52a6f349-88f4-429e-8c73-143752c8e249" containerName="glance-log" containerID="cri-o://8651532643bb14f08367baeeb3b68fb7c315c6a2e267c866a35387fc10610728" gracePeriod=30 Nov 29 08:05:36 crc kubenswrapper[4795]: I1129 08:05:36.372202 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" 
podUID="52a6f349-88f4-429e-8c73-143752c8e249" containerName="glance-httpd" containerID="cri-o://3d62d62e35607881dedb8398c5f06dcf45da9a0349367bec3205ef89e02fef3c" gracePeriod=30 Nov 29 08:05:36 crc kubenswrapper[4795]: I1129 08:05:36.447246 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-m6xrq" event={"ID":"fe89b8b3-85af-4550-a246-b092a5f2f233","Type":"ContainerStarted","Data":"802c86dfd3edba72a7085b4fe4da96b0ee8206da13562600448cafcc2a964b84"} Nov 29 08:05:36 crc kubenswrapper[4795]: I1129 08:05:36.447737 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b6c948c7-k8hff" Nov 29 08:05:36 crc kubenswrapper[4795]: I1129 08:05:36.461641 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc5c4795-m6xrq" Nov 29 08:05:36 crc kubenswrapper[4795]: I1129 08:05:36.495721 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=10.495696651 podStartE2EDuration="10.495696651s" podCreationTimestamp="2025-11-29 08:05:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:05:36.347069115 +0000 UTC m=+1582.322644925" watchObservedRunningTime="2025-11-29 08:05:36.495696651 +0000 UTC m=+1582.471272441" Nov 29 08:05:36 crc kubenswrapper[4795]: I1129 08:05:36.523378 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.523335197 podStartE2EDuration="9.523335197s" podCreationTimestamp="2025-11-29 08:05:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:05:36.447204163 +0000 UTC m=+1582.422779953" watchObservedRunningTime="2025-11-29 08:05:36.523335197 +0000 UTC 
m=+1582.498910987" Nov 29 08:05:36 crc kubenswrapper[4795]: I1129 08:05:36.536626 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc5c4795-m6xrq" podStartSLOduration=8.536601895 podStartE2EDuration="8.536601895s" podCreationTimestamp="2025-11-29 08:05:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:05:36.489581258 +0000 UTC m=+1582.465157038" watchObservedRunningTime="2025-11-29 08:05:36.536601895 +0000 UTC m=+1582.512177685" Nov 29 08:05:36 crc kubenswrapper[4795]: I1129 08:05:36.608429 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b6c948c7-k8hff"] Nov 29 08:05:36 crc kubenswrapper[4795]: I1129 08:05:36.660159 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b6c948c7-k8hff"] Nov 29 08:05:37 crc kubenswrapper[4795]: I1129 08:05:37.093014 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bcc9b4b57-btgmc"] Nov 29 08:05:37 crc kubenswrapper[4795]: I1129 08:05:37.490261 4795 generic.go:334] "Generic (PLEG): container finished" podID="20682aa9-7d99-41f5-9214-2c08cb1533ec" containerID="8d764fe84bb5442819e04f7a14c354fa2a8f9df310b70eb300cff428ab13603f" exitCode=0 Nov 29 08:05:37 crc kubenswrapper[4795]: I1129 08:05:37.490697 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jf2ds" event={"ID":"20682aa9-7d99-41f5-9214-2c08cb1533ec","Type":"ContainerDied","Data":"8d764fe84bb5442819e04f7a14c354fa2a8f9df310b70eb300cff428ab13603f"} Nov 29 08:05:37 crc kubenswrapper[4795]: I1129 08:05:37.497262 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bcc9b4b57-btgmc" event={"ID":"25dbe68e-5c4f-4d79-afb2-a0ac640aa889","Type":"ContainerStarted","Data":"ca9620aff4211be4a0cc649df65b9b8c295bd3ea632912872bea0ec6b9f08f0e"} Nov 29 08:05:37 crc kubenswrapper[4795]: I1129 
08:05:37.515153 4795 generic.go:334] "Generic (PLEG): container finished" podID="e9787cd2-4259-4322-a95d-e3fa30e1a4b6" containerID="b2c3f3c1eecfbf601635b24ea6e3d760f791333d98c107abf75dcdb7c59eeccc" exitCode=0 Nov 29 08:05:37 crc kubenswrapper[4795]: I1129 08:05:37.515191 4795 generic.go:334] "Generic (PLEG): container finished" podID="e9787cd2-4259-4322-a95d-e3fa30e1a4b6" containerID="23359389ee83da425d3fc6016d537d8ee0204a7b06a019a52fccb355afb9e0ee" exitCode=143 Nov 29 08:05:37 crc kubenswrapper[4795]: I1129 08:05:37.515226 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e9787cd2-4259-4322-a95d-e3fa30e1a4b6","Type":"ContainerDied","Data":"b2c3f3c1eecfbf601635b24ea6e3d760f791333d98c107abf75dcdb7c59eeccc"} Nov 29 08:05:37 crc kubenswrapper[4795]: I1129 08:05:37.515285 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e9787cd2-4259-4322-a95d-e3fa30e1a4b6","Type":"ContainerDied","Data":"23359389ee83da425d3fc6016d537d8ee0204a7b06a019a52fccb355afb9e0ee"} Nov 29 08:05:37 crc kubenswrapper[4795]: I1129 08:05:37.529016 4795 generic.go:334] "Generic (PLEG): container finished" podID="52a6f349-88f4-429e-8c73-143752c8e249" containerID="3d62d62e35607881dedb8398c5f06dcf45da9a0349367bec3205ef89e02fef3c" exitCode=0 Nov 29 08:05:37 crc kubenswrapper[4795]: I1129 08:05:37.529047 4795 generic.go:334] "Generic (PLEG): container finished" podID="52a6f349-88f4-429e-8c73-143752c8e249" containerID="8651532643bb14f08367baeeb3b68fb7c315c6a2e267c866a35387fc10610728" exitCode=143 Nov 29 08:05:37 crc kubenswrapper[4795]: I1129 08:05:37.529694 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"52a6f349-88f4-429e-8c73-143752c8e249","Type":"ContainerDied","Data":"3d62d62e35607881dedb8398c5f06dcf45da9a0349367bec3205ef89e02fef3c"} Nov 29 08:05:37 crc kubenswrapper[4795]: I1129 08:05:37.529758 4795 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"52a6f349-88f4-429e-8c73-143752c8e249","Type":"ContainerDied","Data":"8651532643bb14f08367baeeb3b68fb7c315c6a2e267c866a35387fc10610728"} Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.044701 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-f2mxf" Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.045346 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f2mxf" Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.303569 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4783c203-7112-41d2-9f0e-50656d484cf3" path="/var/lib/kubelet/pods/4783c203-7112-41d2-9f0e-50656d484cf3/volumes" Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.352240 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.397826 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.457661 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/52a6f349-88f4-429e-8c73-143752c8e249-httpd-run\") pod \"52a6f349-88f4-429e-8c73-143752c8e249\" (UID: \"52a6f349-88f4-429e-8c73-143752c8e249\") " Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.457716 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52a6f349-88f4-429e-8c73-143752c8e249-logs\") pod \"52a6f349-88f4-429e-8c73-143752c8e249\" (UID: \"52a6f349-88f4-429e-8c73-143752c8e249\") " Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.457764 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e9787cd2-4259-4322-a95d-e3fa30e1a4b6-httpd-run\") pod \"e9787cd2-4259-4322-a95d-e3fa30e1a4b6\" (UID: \"e9787cd2-4259-4322-a95d-e3fa30e1a4b6\") " Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.457795 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"52a6f349-88f4-429e-8c73-143752c8e249\" (UID: \"52a6f349-88f4-429e-8c73-143752c8e249\") " Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.457873 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52a6f349-88f4-429e-8c73-143752c8e249-config-data\") pod \"52a6f349-88f4-429e-8c73-143752c8e249\" (UID: \"52a6f349-88f4-429e-8c73-143752c8e249\") " Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.457902 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod 
\"e9787cd2-4259-4322-a95d-e3fa30e1a4b6\" (UID: \"e9787cd2-4259-4322-a95d-e3fa30e1a4b6\") " Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.457941 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9787cd2-4259-4322-a95d-e3fa30e1a4b6-logs\") pod \"e9787cd2-4259-4322-a95d-e3fa30e1a4b6\" (UID: \"e9787cd2-4259-4322-a95d-e3fa30e1a4b6\") " Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.457966 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52a6f349-88f4-429e-8c73-143752c8e249-combined-ca-bundle\") pod \"52a6f349-88f4-429e-8c73-143752c8e249\" (UID: \"52a6f349-88f4-429e-8c73-143752c8e249\") " Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.457992 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9787cd2-4259-4322-a95d-e3fa30e1a4b6-scripts\") pod \"e9787cd2-4259-4322-a95d-e3fa30e1a4b6\" (UID: \"e9787cd2-4259-4322-a95d-e3fa30e1a4b6\") " Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.458038 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqfpt\" (UniqueName: \"kubernetes.io/projected/e9787cd2-4259-4322-a95d-e3fa30e1a4b6-kube-api-access-bqfpt\") pod \"e9787cd2-4259-4322-a95d-e3fa30e1a4b6\" (UID: \"e9787cd2-4259-4322-a95d-e3fa30e1a4b6\") " Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.458059 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9787cd2-4259-4322-a95d-e3fa30e1a4b6-config-data\") pod \"e9787cd2-4259-4322-a95d-e3fa30e1a4b6\" (UID: \"e9787cd2-4259-4322-a95d-e3fa30e1a4b6\") " Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.458084 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/52a6f349-88f4-429e-8c73-143752c8e249-scripts\") pod \"52a6f349-88f4-429e-8c73-143752c8e249\" (UID: \"52a6f349-88f4-429e-8c73-143752c8e249\") " Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.458155 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mf9c\" (UniqueName: \"kubernetes.io/projected/52a6f349-88f4-429e-8c73-143752c8e249-kube-api-access-4mf9c\") pod \"52a6f349-88f4-429e-8c73-143752c8e249\" (UID: \"52a6f349-88f4-429e-8c73-143752c8e249\") " Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.458177 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9787cd2-4259-4322-a95d-e3fa30e1a4b6-combined-ca-bundle\") pod \"e9787cd2-4259-4322-a95d-e3fa30e1a4b6\" (UID: \"e9787cd2-4259-4322-a95d-e3fa30e1a4b6\") " Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.458454 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52a6f349-88f4-429e-8c73-143752c8e249-logs" (OuterVolumeSpecName: "logs") pod "52a6f349-88f4-429e-8c73-143752c8e249" (UID: "52a6f349-88f4-429e-8c73-143752c8e249"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.458683 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52a6f349-88f4-429e-8c73-143752c8e249-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "52a6f349-88f4-429e-8c73-143752c8e249" (UID: "52a6f349-88f4-429e-8c73-143752c8e249"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.458827 4795 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52a6f349-88f4-429e-8c73-143752c8e249-logs\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.459321 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9787cd2-4259-4322-a95d-e3fa30e1a4b6-logs" (OuterVolumeSpecName: "logs") pod "e9787cd2-4259-4322-a95d-e3fa30e1a4b6" (UID: "e9787cd2-4259-4322-a95d-e3fa30e1a4b6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.464742 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "52a6f349-88f4-429e-8c73-143752c8e249" (UID: "52a6f349-88f4-429e-8c73-143752c8e249"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.472982 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52a6f349-88f4-429e-8c73-143752c8e249-scripts" (OuterVolumeSpecName: "scripts") pod "52a6f349-88f4-429e-8c73-143752c8e249" (UID: "52a6f349-88f4-429e-8c73-143752c8e249"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.476351 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9787cd2-4259-4322-a95d-e3fa30e1a4b6-scripts" (OuterVolumeSpecName: "scripts") pod "e9787cd2-4259-4322-a95d-e3fa30e1a4b6" (UID: "e9787cd2-4259-4322-a95d-e3fa30e1a4b6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.477134 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "e9787cd2-4259-4322-a95d-e3fa30e1a4b6" (UID: "e9787cd2-4259-4322-a95d-e3fa30e1a4b6"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.479211 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9787cd2-4259-4322-a95d-e3fa30e1a4b6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e9787cd2-4259-4322-a95d-e3fa30e1a4b6" (UID: "e9787cd2-4259-4322-a95d-e3fa30e1a4b6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.479349 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9787cd2-4259-4322-a95d-e3fa30e1a4b6-kube-api-access-bqfpt" (OuterVolumeSpecName: "kube-api-access-bqfpt") pod "e9787cd2-4259-4322-a95d-e3fa30e1a4b6" (UID: "e9787cd2-4259-4322-a95d-e3fa30e1a4b6"). InnerVolumeSpecName "kube-api-access-bqfpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.481797 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52a6f349-88f4-429e-8c73-143752c8e249-kube-api-access-4mf9c" (OuterVolumeSpecName: "kube-api-access-4mf9c") pod "52a6f349-88f4-429e-8c73-143752c8e249" (UID: "52a6f349-88f4-429e-8c73-143752c8e249"). InnerVolumeSpecName "kube-api-access-4mf9c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.562528 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mf9c\" (UniqueName: \"kubernetes.io/projected/52a6f349-88f4-429e-8c73-143752c8e249-kube-api-access-4mf9c\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.562566 4795 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/52a6f349-88f4-429e-8c73-143752c8e249-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.562581 4795 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e9787cd2-4259-4322-a95d-e3fa30e1a4b6-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.562618 4795 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.562631 4795 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.562640 4795 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9787cd2-4259-4322-a95d-e3fa30e1a4b6-logs\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.562649 4795 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9787cd2-4259-4322-a95d-e3fa30e1a4b6-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.562658 4795 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-bqfpt\" (UniqueName: \"kubernetes.io/projected/e9787cd2-4259-4322-a95d-e3fa30e1a4b6-kube-api-access-bqfpt\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.562667 4795 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52a6f349-88f4-429e-8c73-143752c8e249-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.571853 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bcc9b4b57-btgmc" event={"ID":"25dbe68e-5c4f-4d79-afb2-a0ac640aa889","Type":"ContainerStarted","Data":"fd376c607281fbda0a7f37dd59bbecb3666da18217479a053050ecbf6f74cf81"} Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.590298 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-85xp6" event={"ID":"59666d8f-35e8-4c8a-887f-0c23881547ec","Type":"ContainerStarted","Data":"6626ac55052d6cec6be4cab26b79b381bbc946d286f0273e78ae2de2ebb94a04"} Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.613569 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e9787cd2-4259-4322-a95d-e3fa30e1a4b6","Type":"ContainerDied","Data":"f1fc52c035ae5884eff83585be7a370156ede3c3a83b7f852b2db8471b6095c1"} Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.613642 4795 scope.go:117] "RemoveContainer" containerID="b2c3f3c1eecfbf601635b24ea6e3d760f791333d98c107abf75dcdb7c59eeccc" Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.613855 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.641778 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.641837 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"52a6f349-88f4-429e-8c73-143752c8e249","Type":"ContainerDied","Data":"92034690c8d97bbee17ff54bcd99671019e5467a1a8d76648ae83254b3353e2e"} Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.665991 4795 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.666133 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9787cd2-4259-4322-a95d-e3fa30e1a4b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9787cd2-4259-4322-a95d-e3fa30e1a4b6" (UID: "e9787cd2-4259-4322-a95d-e3fa30e1a4b6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.669775 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-85xp6" podStartSLOduration=2.959400218 podStartE2EDuration="1m11.669757904s" podCreationTimestamp="2025-11-29 08:04:27 +0000 UTC" firstStartedPulling="2025-11-29 08:04:29.305272458 +0000 UTC m=+1515.280848238" lastFinishedPulling="2025-11-29 08:05:38.015630134 +0000 UTC m=+1583.991205924" observedRunningTime="2025-11-29 08:05:38.620454112 +0000 UTC m=+1584.596029912" watchObservedRunningTime="2025-11-29 08:05:38.669757904 +0000 UTC m=+1584.645333694" Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.672328 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9787cd2-4259-4322-a95d-e3fa30e1a4b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.672362 4795 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.675033 4795 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.675824 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52a6f349-88f4-429e-8c73-143752c8e249-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52a6f349-88f4-429e-8c73-143752c8e249" (UID: "52a6f349-88f4-429e-8c73-143752c8e249"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.725457 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52a6f349-88f4-429e-8c73-143752c8e249-config-data" (OuterVolumeSpecName: "config-data") pod "52a6f349-88f4-429e-8c73-143752c8e249" (UID: "52a6f349-88f4-429e-8c73-143752c8e249"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.748363 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9787cd2-4259-4322-a95d-e3fa30e1a4b6-config-data" (OuterVolumeSpecName: "config-data") pod "e9787cd2-4259-4322-a95d-e3fa30e1a4b6" (UID: "e9787cd2-4259-4322-a95d-e3fa30e1a4b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.777631 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52a6f349-88f4-429e-8c73-143752c8e249-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.777674 4795 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.777685 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52a6f349-88f4-429e-8c73-143752c8e249-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:38 crc kubenswrapper[4795]: I1129 08:05:38.777697 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9787cd2-4259-4322-a95d-e3fa30e1a4b6-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 
08:05:39.103668 4795 scope.go:117] "RemoveContainer" containerID="23359389ee83da425d3fc6016d537d8ee0204a7b06a019a52fccb355afb9e0ee" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.133307 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.153237 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.168497 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.171285 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-f2mxf" podUID="b5c53f2b-bb1e-40be-9ca4-40b2c63545e4" containerName="registry-server" probeResult="failure" output=< Nov 29 08:05:39 crc kubenswrapper[4795]: timeout: failed to connect service ":50051" within 1s Nov 29 08:05:39 crc kubenswrapper[4795]: > Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.187192 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.203402 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 08:05:39 crc kubenswrapper[4795]: E1129 08:05:39.203977 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9787cd2-4259-4322-a95d-e3fa30e1a4b6" containerName="glance-httpd" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.203992 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9787cd2-4259-4322-a95d-e3fa30e1a4b6" containerName="glance-httpd" Nov 29 08:05:39 crc kubenswrapper[4795]: E1129 08:05:39.204020 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52a6f349-88f4-429e-8c73-143752c8e249" containerName="glance-httpd" Nov 29 08:05:39 crc 
kubenswrapper[4795]: I1129 08:05:39.204027 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="52a6f349-88f4-429e-8c73-143752c8e249" containerName="glance-httpd" Nov 29 08:05:39 crc kubenswrapper[4795]: E1129 08:05:39.204067 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9787cd2-4259-4322-a95d-e3fa30e1a4b6" containerName="glance-log" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.204076 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9787cd2-4259-4322-a95d-e3fa30e1a4b6" containerName="glance-log" Nov 29 08:05:39 crc kubenswrapper[4795]: E1129 08:05:39.204092 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52a6f349-88f4-429e-8c73-143752c8e249" containerName="glance-log" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.204100 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="52a6f349-88f4-429e-8c73-143752c8e249" containerName="glance-log" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.204332 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9787cd2-4259-4322-a95d-e3fa30e1a4b6" containerName="glance-httpd" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.204347 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="52a6f349-88f4-429e-8c73-143752c8e249" containerName="glance-log" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.204365 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9787cd2-4259-4322-a95d-e3fa30e1a4b6" containerName="glance-log" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.204384 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="52a6f349-88f4-429e-8c73-143752c8e249" containerName="glance-httpd" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.205829 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.213692 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qkzd6" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.213987 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.214356 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.214912 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.219173 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.229070 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.231329 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.234338 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.240482 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.250793 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.278159 4795 scope.go:117] "RemoveContainer" containerID="3d62d62e35607881dedb8398c5f06dcf45da9a0349367bec3205ef89e02fef3c" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.309438 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.309557 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.309649 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314\") " pod="openstack/glance-default-internal-api-0" Nov 29 
08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.309730 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68tkt\" (UniqueName: \"kubernetes.io/projected/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-kube-api-access-68tkt\") pod \"glance-default-internal-api-0\" (UID: \"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.309773 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-logs\") pod \"glance-default-internal-api-0\" (UID: \"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.309843 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-scripts\") pod \"glance-default-internal-api-0\" (UID: \"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.309926 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-config-data\") pod \"glance-default-internal-api-0\" (UID: \"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.310026 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314\") " 
pod="openstack/glance-default-internal-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.328265 4795 scope.go:117] "RemoveContainer" containerID="8651532643bb14f08367baeeb3b68fb7c315c6a2e267c866a35387fc10610728" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.333707 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-jf2ds" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.420102 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4cc813d1-4929-48c8-9268-49a4b96562ac-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4cc813d1-4929-48c8-9268-49a4b96562ac\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.420170 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9b7k\" (UniqueName: \"kubernetes.io/projected/4cc813d1-4929-48c8-9268-49a4b96562ac-kube-api-access-x9b7k\") pod \"glance-default-external-api-0\" (UID: \"4cc813d1-4929-48c8-9268-49a4b96562ac\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.420219 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.420290 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314\") " 
pod="openstack/glance-default-internal-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.420369 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.420431 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.420494 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cc813d1-4929-48c8-9268-49a4b96562ac-scripts\") pod \"glance-default-external-api-0\" (UID: \"4cc813d1-4929-48c8-9268-49a4b96562ac\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.420551 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68tkt\" (UniqueName: \"kubernetes.io/projected/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-kube-api-access-68tkt\") pod \"glance-default-internal-api-0\" (UID: \"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.420634 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cc813d1-4929-48c8-9268-49a4b96562ac-logs\") pod \"glance-default-external-api-0\" (UID: \"4cc813d1-4929-48c8-9268-49a4b96562ac\") " pod="openstack/glance-default-external-api-0" 
Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.420651 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cc813d1-4929-48c8-9268-49a4b96562ac-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4cc813d1-4929-48c8-9268-49a4b96562ac\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.420673 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-logs\") pod \"glance-default-internal-api-0\" (UID: \"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.420736 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-scripts\") pod \"glance-default-internal-api-0\" (UID: \"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.420791 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cc813d1-4929-48c8-9268-49a4b96562ac-config-data\") pod \"glance-default-external-api-0\" (UID: \"4cc813d1-4929-48c8-9268-49a4b96562ac\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.420823 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"4cc813d1-4929-48c8-9268-49a4b96562ac\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 
08:05:39.420867 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cc813d1-4929-48c8-9268-49a4b96562ac-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4cc813d1-4929-48c8-9268-49a4b96562ac\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.420899 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-config-data\") pod \"glance-default-internal-api-0\" (UID: \"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.421028 4795 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.421096 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.422232 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-logs\") pod \"glance-default-internal-api-0\" (UID: \"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.426044 4795 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-scripts\") pod \"glance-default-internal-api-0\" (UID: \"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.450463 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.450732 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-config-data\") pod \"glance-default-internal-api-0\" (UID: \"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.455058 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68tkt\" (UniqueName: \"kubernetes.io/projected/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-kube-api-access-68tkt\") pod \"glance-default-internal-api-0\" (UID: \"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.455214 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.524227 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/20682aa9-7d99-41f5-9214-2c08cb1533ec-combined-ca-bundle\") pod \"20682aa9-7d99-41f5-9214-2c08cb1533ec\" (UID: \"20682aa9-7d99-41f5-9214-2c08cb1533ec\") " Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.524393 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20682aa9-7d99-41f5-9214-2c08cb1533ec-scripts\") pod \"20682aa9-7d99-41f5-9214-2c08cb1533ec\" (UID: \"20682aa9-7d99-41f5-9214-2c08cb1533ec\") " Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.524501 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20682aa9-7d99-41f5-9214-2c08cb1533ec-config-data\") pod \"20682aa9-7d99-41f5-9214-2c08cb1533ec\" (UID: \"20682aa9-7d99-41f5-9214-2c08cb1533ec\") " Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.524528 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20682aa9-7d99-41f5-9214-2c08cb1533ec-logs\") pod \"20682aa9-7d99-41f5-9214-2c08cb1533ec\" (UID: \"20682aa9-7d99-41f5-9214-2c08cb1533ec\") " Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.524571 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbx69\" (UniqueName: \"kubernetes.io/projected/20682aa9-7d99-41f5-9214-2c08cb1533ec-kube-api-access-xbx69\") pod \"20682aa9-7d99-41f5-9214-2c08cb1533ec\" (UID: \"20682aa9-7d99-41f5-9214-2c08cb1533ec\") " Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.524959 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4cc813d1-4929-48c8-9268-49a4b96562ac-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4cc813d1-4929-48c8-9268-49a4b96562ac\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 
08:05:39.525035 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9b7k\" (UniqueName: \"kubernetes.io/projected/4cc813d1-4929-48c8-9268-49a4b96562ac-kube-api-access-x9b7k\") pod \"glance-default-external-api-0\" (UID: \"4cc813d1-4929-48c8-9268-49a4b96562ac\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.525216 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cc813d1-4929-48c8-9268-49a4b96562ac-scripts\") pod \"glance-default-external-api-0\" (UID: \"4cc813d1-4929-48c8-9268-49a4b96562ac\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.525286 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cc813d1-4929-48c8-9268-49a4b96562ac-logs\") pod \"glance-default-external-api-0\" (UID: \"4cc813d1-4929-48c8-9268-49a4b96562ac\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.526103 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cc813d1-4929-48c8-9268-49a4b96562ac-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4cc813d1-4929-48c8-9268-49a4b96562ac\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.526194 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cc813d1-4929-48c8-9268-49a4b96562ac-config-data\") pod \"glance-default-external-api-0\" (UID: \"4cc813d1-4929-48c8-9268-49a4b96562ac\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.526222 4795 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"4cc813d1-4929-48c8-9268-49a4b96562ac\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.526270 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cc813d1-4929-48c8-9268-49a4b96562ac-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4cc813d1-4929-48c8-9268-49a4b96562ac\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.527173 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4cc813d1-4929-48c8-9268-49a4b96562ac-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4cc813d1-4929-48c8-9268-49a4b96562ac\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.527403 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20682aa9-7d99-41f5-9214-2c08cb1533ec-logs" (OuterVolumeSpecName: "logs") pod "20682aa9-7d99-41f5-9214-2c08cb1533ec" (UID: "20682aa9-7d99-41f5-9214-2c08cb1533ec"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.531016 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cc813d1-4929-48c8-9268-49a4b96562ac-logs\") pod \"glance-default-external-api-0\" (UID: \"4cc813d1-4929-48c8-9268-49a4b96562ac\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.534515 4795 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"4cc813d1-4929-48c8-9268-49a4b96562ac\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.537864 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.553447 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20682aa9-7d99-41f5-9214-2c08cb1533ec-kube-api-access-xbx69" (OuterVolumeSpecName: "kube-api-access-xbx69") pod "20682aa9-7d99-41f5-9214-2c08cb1533ec" (UID: "20682aa9-7d99-41f5-9214-2c08cb1533ec"). InnerVolumeSpecName "kube-api-access-xbx69". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.557770 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.557857 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cc813d1-4929-48c8-9268-49a4b96562ac-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4cc813d1-4929-48c8-9268-49a4b96562ac\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.563212 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cc813d1-4929-48c8-9268-49a4b96562ac-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4cc813d1-4929-48c8-9268-49a4b96562ac\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.563286 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cc813d1-4929-48c8-9268-49a4b96562ac-scripts\") pod \"glance-default-external-api-0\" (UID: \"4cc813d1-4929-48c8-9268-49a4b96562ac\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.566644 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cc813d1-4929-48c8-9268-49a4b96562ac-config-data\") pod \"glance-default-external-api-0\" (UID: \"4cc813d1-4929-48c8-9268-49a4b96562ac\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.580569 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20682aa9-7d99-41f5-9214-2c08cb1533ec-scripts" (OuterVolumeSpecName: "scripts") pod "20682aa9-7d99-41f5-9214-2c08cb1533ec" (UID: "20682aa9-7d99-41f5-9214-2c08cb1533ec"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.584781 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9b7k\" (UniqueName: \"kubernetes.io/projected/4cc813d1-4929-48c8-9268-49a4b96562ac-kube-api-access-x9b7k\") pod \"glance-default-external-api-0\" (UID: \"4cc813d1-4929-48c8-9268-49a4b96562ac\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.614554 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20682aa9-7d99-41f5-9214-2c08cb1533ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20682aa9-7d99-41f5-9214-2c08cb1533ec" (UID: "20682aa9-7d99-41f5-9214-2c08cb1533ec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.631695 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20682aa9-7d99-41f5-9214-2c08cb1533ec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.631744 4795 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20682aa9-7d99-41f5-9214-2c08cb1533ec-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.631756 4795 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20682aa9-7d99-41f5-9214-2c08cb1533ec-logs\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.631766 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbx69\" (UniqueName: \"kubernetes.io/projected/20682aa9-7d99-41f5-9214-2c08cb1533ec-kube-api-access-xbx69\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 
08:05:39.662227 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"4cc813d1-4929-48c8-9268-49a4b96562ac\") " pod="openstack/glance-default-external-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.662766 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-8686d8994d-2mhmq"] Nov 29 08:05:39 crc kubenswrapper[4795]: E1129 08:05:39.663273 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20682aa9-7d99-41f5-9214-2c08cb1533ec" containerName="placement-db-sync" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.663293 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="20682aa9-7d99-41f5-9214-2c08cb1533ec" containerName="placement-db-sync" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.663553 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="20682aa9-7d99-41f5-9214-2c08cb1533ec" containerName="placement-db-sync" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.677514 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8686d8994d-2mhmq" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.684387 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.684635 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.694818 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20682aa9-7d99-41f5-9214-2c08cb1533ec-config-data" (OuterVolumeSpecName: "config-data") pod "20682aa9-7d99-41f5-9214-2c08cb1533ec" (UID: "20682aa9-7d99-41f5-9214-2c08cb1533ec"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.695438 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8686d8994d-2mhmq"] Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.734007 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20682aa9-7d99-41f5-9214-2c08cb1533ec-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.767875 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jf2ds" event={"ID":"20682aa9-7d99-41f5-9214-2c08cb1533ec","Type":"ContainerDied","Data":"39b710d96a413cc0f88b94f614e724fb655de78f329b1b0b9cb1b51efc6acca4"} Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.767922 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39b710d96a413cc0f88b94f614e724fb655de78f329b1b0b9cb1b51efc6acca4" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.767976 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-jf2ds" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.787990 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bcc9b4b57-btgmc" event={"ID":"25dbe68e-5c4f-4d79-afb2-a0ac640aa889","Type":"ContainerStarted","Data":"8708874e673f5539b2c3b850a7f9d7f782908d0a2d8788d0e45d84fd180e58a7"} Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.788342 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-bcc9b4b57-btgmc" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.817435 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-bcc9b4b57-btgmc" podStartSLOduration=4.817413598 podStartE2EDuration="4.817413598s" podCreationTimestamp="2025-11-29 08:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:05:39.815976727 +0000 UTC m=+1585.791552517" watchObservedRunningTime="2025-11-29 08:05:39.817413598 +0000 UTC m=+1585.792989388" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.842996 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3c503b5-e625-4ba5-af4d-9ff304b3f371-logs\") pod \"placement-8686d8994d-2mhmq\" (UID: \"e3c503b5-e625-4ba5-af4d-9ff304b3f371\") " pod="openstack/placement-8686d8994d-2mhmq" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.843060 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3c503b5-e625-4ba5-af4d-9ff304b3f371-config-data\") pod \"placement-8686d8994d-2mhmq\" (UID: \"e3c503b5-e625-4ba5-af4d-9ff304b3f371\") " pod="openstack/placement-8686d8994d-2mhmq" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.843083 4795 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3c503b5-e625-4ba5-af4d-9ff304b3f371-internal-tls-certs\") pod \"placement-8686d8994d-2mhmq\" (UID: \"e3c503b5-e625-4ba5-af4d-9ff304b3f371\") " pod="openstack/placement-8686d8994d-2mhmq" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.843140 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3c503b5-e625-4ba5-af4d-9ff304b3f371-public-tls-certs\") pod \"placement-8686d8994d-2mhmq\" (UID: \"e3c503b5-e625-4ba5-af4d-9ff304b3f371\") " pod="openstack/placement-8686d8994d-2mhmq" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.843165 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6k6b\" (UniqueName: \"kubernetes.io/projected/e3c503b5-e625-4ba5-af4d-9ff304b3f371-kube-api-access-q6k6b\") pod \"placement-8686d8994d-2mhmq\" (UID: \"e3c503b5-e625-4ba5-af4d-9ff304b3f371\") " pod="openstack/placement-8686d8994d-2mhmq" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.843241 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3c503b5-e625-4ba5-af4d-9ff304b3f371-scripts\") pod \"placement-8686d8994d-2mhmq\" (UID: \"e3c503b5-e625-4ba5-af4d-9ff304b3f371\") " pod="openstack/placement-8686d8994d-2mhmq" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.843267 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3c503b5-e625-4ba5-af4d-9ff304b3f371-combined-ca-bundle\") pod \"placement-8686d8994d-2mhmq\" (UID: \"e3c503b5-e625-4ba5-af4d-9ff304b3f371\") " pod="openstack/placement-8686d8994d-2mhmq" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.877426 
4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.945269 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3c503b5-e625-4ba5-af4d-9ff304b3f371-logs\") pod \"placement-8686d8994d-2mhmq\" (UID: \"e3c503b5-e625-4ba5-af4d-9ff304b3f371\") " pod="openstack/placement-8686d8994d-2mhmq" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.945358 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3c503b5-e625-4ba5-af4d-9ff304b3f371-config-data\") pod \"placement-8686d8994d-2mhmq\" (UID: \"e3c503b5-e625-4ba5-af4d-9ff304b3f371\") " pod="openstack/placement-8686d8994d-2mhmq" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.945413 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3c503b5-e625-4ba5-af4d-9ff304b3f371-internal-tls-certs\") pod \"placement-8686d8994d-2mhmq\" (UID: \"e3c503b5-e625-4ba5-af4d-9ff304b3f371\") " pod="openstack/placement-8686d8994d-2mhmq" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.945524 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3c503b5-e625-4ba5-af4d-9ff304b3f371-public-tls-certs\") pod \"placement-8686d8994d-2mhmq\" (UID: \"e3c503b5-e625-4ba5-af4d-9ff304b3f371\") " pod="openstack/placement-8686d8994d-2mhmq" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.945564 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6k6b\" (UniqueName: \"kubernetes.io/projected/e3c503b5-e625-4ba5-af4d-9ff304b3f371-kube-api-access-q6k6b\") pod \"placement-8686d8994d-2mhmq\" (UID: \"e3c503b5-e625-4ba5-af4d-9ff304b3f371\") " 
pod="openstack/placement-8686d8994d-2mhmq" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.946224 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3c503b5-e625-4ba5-af4d-9ff304b3f371-scripts\") pod \"placement-8686d8994d-2mhmq\" (UID: \"e3c503b5-e625-4ba5-af4d-9ff304b3f371\") " pod="openstack/placement-8686d8994d-2mhmq" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.946281 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3c503b5-e625-4ba5-af4d-9ff304b3f371-combined-ca-bundle\") pod \"placement-8686d8994d-2mhmq\" (UID: \"e3c503b5-e625-4ba5-af4d-9ff304b3f371\") " pod="openstack/placement-8686d8994d-2mhmq" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.948402 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3c503b5-e625-4ba5-af4d-9ff304b3f371-logs\") pod \"placement-8686d8994d-2mhmq\" (UID: \"e3c503b5-e625-4ba5-af4d-9ff304b3f371\") " pod="openstack/placement-8686d8994d-2mhmq" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.957145 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3c503b5-e625-4ba5-af4d-9ff304b3f371-internal-tls-certs\") pod \"placement-8686d8994d-2mhmq\" (UID: \"e3c503b5-e625-4ba5-af4d-9ff304b3f371\") " pod="openstack/placement-8686d8994d-2mhmq" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.972203 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3c503b5-e625-4ba5-af4d-9ff304b3f371-scripts\") pod \"placement-8686d8994d-2mhmq\" (UID: \"e3c503b5-e625-4ba5-af4d-9ff304b3f371\") " pod="openstack/placement-8686d8994d-2mhmq" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.972289 4795 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3c503b5-e625-4ba5-af4d-9ff304b3f371-combined-ca-bundle\") pod \"placement-8686d8994d-2mhmq\" (UID: \"e3c503b5-e625-4ba5-af4d-9ff304b3f371\") " pod="openstack/placement-8686d8994d-2mhmq" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.972931 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3c503b5-e625-4ba5-af4d-9ff304b3f371-config-data\") pod \"placement-8686d8994d-2mhmq\" (UID: \"e3c503b5-e625-4ba5-af4d-9ff304b3f371\") " pod="openstack/placement-8686d8994d-2mhmq" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.973345 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3c503b5-e625-4ba5-af4d-9ff304b3f371-public-tls-certs\") pod \"placement-8686d8994d-2mhmq\" (UID: \"e3c503b5-e625-4ba5-af4d-9ff304b3f371\") " pod="openstack/placement-8686d8994d-2mhmq" Nov 29 08:05:39 crc kubenswrapper[4795]: I1129 08:05:39.978977 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6k6b\" (UniqueName: \"kubernetes.io/projected/e3c503b5-e625-4ba5-af4d-9ff304b3f371-kube-api-access-q6k6b\") pod \"placement-8686d8994d-2mhmq\" (UID: \"e3c503b5-e625-4ba5-af4d-9ff304b3f371\") " pod="openstack/placement-8686d8994d-2mhmq" Nov 29 08:05:40 crc kubenswrapper[4795]: I1129 08:05:40.129196 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8686d8994d-2mhmq" Nov 29 08:05:40 crc kubenswrapper[4795]: I1129 08:05:40.314399 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52a6f349-88f4-429e-8c73-143752c8e249" path="/var/lib/kubelet/pods/52a6f349-88f4-429e-8c73-143752c8e249/volumes" Nov 29 08:05:40 crc kubenswrapper[4795]: I1129 08:05:40.315392 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9787cd2-4259-4322-a95d-e3fa30e1a4b6" path="/var/lib/kubelet/pods/e9787cd2-4259-4322-a95d-e3fa30e1a4b6/volumes" Nov 29 08:05:40 crc kubenswrapper[4795]: I1129 08:05:40.531955 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 08:05:40 crc kubenswrapper[4795]: I1129 08:05:40.785096 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 08:05:40 crc kubenswrapper[4795]: I1129 08:05:40.823638 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4cc813d1-4929-48c8-9268-49a4b96562ac","Type":"ContainerStarted","Data":"abcde8702966f4b71b0dab16893cd2483fabce33ecffd8277e2992621cef7a9e"} Nov 29 08:05:40 crc kubenswrapper[4795]: I1129 08:05:40.842868 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fdl5k" event={"ID":"f6dff0cb-a174-4227-ad82-21a12aee68f5","Type":"ContainerStarted","Data":"d6b9b5cfd126d42217de35750ef89b86614c5ee2c94ea412aacbe1440b81ae8d"} Nov 29 08:05:40 crc kubenswrapper[4795]: I1129 08:05:40.846968 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tvhw5" event={"ID":"1082de8f-47bf-41ac-875f-8d7db0baab7b","Type":"ContainerStarted","Data":"3b2c7e91906b4f2c34ee265e94e43b0d265ed328aa7d59bcd1d6ea5a6694e59d"} Nov 29 08:05:40 crc kubenswrapper[4795]: I1129 08:05:40.875719 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8686d8994d-2mhmq"] Nov 29 
08:05:40 crc kubenswrapper[4795]: I1129 08:05:40.877966 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314","Type":"ContainerStarted","Data":"ce244b42c43bea236b423e7f317fa01d1994de63659dd8db3208992542473a29"} Nov 29 08:05:40 crc kubenswrapper[4795]: I1129 08:05:40.898028 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-fdl5k" podStartSLOduration=2.839161685 podStartE2EDuration="1m12.898005866s" podCreationTimestamp="2025-11-29 08:04:28 +0000 UTC" firstStartedPulling="2025-11-29 08:04:30.033483105 +0000 UTC m=+1516.009058895" lastFinishedPulling="2025-11-29 08:05:40.092327296 +0000 UTC m=+1586.067903076" observedRunningTime="2025-11-29 08:05:40.859062219 +0000 UTC m=+1586.834638009" watchObservedRunningTime="2025-11-29 08:05:40.898005866 +0000 UTC m=+1586.873581656" Nov 29 08:05:40 crc kubenswrapper[4795]: I1129 08:05:40.918731 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-tvhw5" podStartSLOduration=4.011218565 podStartE2EDuration="1m12.918710465s" podCreationTimestamp="2025-11-29 08:04:28 +0000 UTC" firstStartedPulling="2025-11-29 08:04:29.920381959 +0000 UTC m=+1515.895957749" lastFinishedPulling="2025-11-29 08:05:38.827873859 +0000 UTC m=+1584.803449649" observedRunningTime="2025-11-29 08:05:40.891289185 +0000 UTC m=+1586.866864975" watchObservedRunningTime="2025-11-29 08:05:40.918710465 +0000 UTC m=+1586.894286255" Nov 29 08:05:41 crc kubenswrapper[4795]: I1129 08:05:41.103052 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8b2xv" Nov 29 08:05:41 crc kubenswrapper[4795]: I1129 08:05:41.104166 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8b2xv" Nov 29 08:05:41 crc kubenswrapper[4795]: I1129 08:05:41.950221 4795 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314","Type":"ContainerStarted","Data":"3801e5cabb1fda13b76b19bc936d3b2b47d646d50dfb30a2b45e6f75f0068456"} Nov 29 08:05:41 crc kubenswrapper[4795]: I1129 08:05:41.969866 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8686d8994d-2mhmq" event={"ID":"e3c503b5-e625-4ba5-af4d-9ff304b3f371","Type":"ContainerStarted","Data":"1be481397dd49d5f782dfd650713f66ccf79f11b9ee6279051e03786a32f2152"} Nov 29 08:05:41 crc kubenswrapper[4795]: I1129 08:05:41.969921 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8686d8994d-2mhmq" event={"ID":"e3c503b5-e625-4ba5-af4d-9ff304b3f371","Type":"ContainerStarted","Data":"967c1469d53dd57adbc84fddbc08791d7eb2de7b1614c45f40cd73bcd0fdf9ac"} Nov 29 08:05:41 crc kubenswrapper[4795]: I1129 08:05:41.973899 4795 generic.go:334] "Generic (PLEG): container finished" podID="f63e39cf-970a-40df-a823-d1e60521e702" containerID="bfb19c9ce44284ee4afdf4be4136a2041c2065b1c9beb5337b6bb10989b8dd0c" exitCode=0 Nov 29 08:05:41 crc kubenswrapper[4795]: I1129 08:05:41.975975 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-48c6t" event={"ID":"f63e39cf-970a-40df-a823-d1e60521e702","Type":"ContainerDied","Data":"bfb19c9ce44284ee4afdf4be4136a2041c2065b1c9beb5337b6bb10989b8dd0c"} Nov 29 08:05:42 crc kubenswrapper[4795]: I1129 08:05:42.233508 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8b2xv" podUID="19557359-2cdd-493b-a9ad-0770bf37206e" containerName="registry-server" probeResult="failure" output=< Nov 29 08:05:42 crc kubenswrapper[4795]: timeout: failed to connect service ":50051" within 1s Nov 29 08:05:42 crc kubenswrapper[4795]: > Nov 29 08:05:42 crc kubenswrapper[4795]: I1129 08:05:42.994782 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314","Type":"ContainerStarted","Data":"cd7ac6c1c1e2a29845feefcb848633a58a7d9e2ccf00e46fad7dc38e0e2bd944"} Nov 29 08:05:43 crc kubenswrapper[4795]: I1129 08:05:43.013086 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8686d8994d-2mhmq" event={"ID":"e3c503b5-e625-4ba5-af4d-9ff304b3f371","Type":"ContainerStarted","Data":"b2460ff8bcd9b6e5b3fa7cf0e2314509eec7e552f0efa689c57d4ef263292b6d"} Nov 29 08:05:43 crc kubenswrapper[4795]: I1129 08:05:43.013827 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-8686d8994d-2mhmq" Nov 29 08:05:43 crc kubenswrapper[4795]: I1129 08:05:43.015097 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4cc813d1-4929-48c8-9268-49a4b96562ac","Type":"ContainerStarted","Data":"63e8511f39602027652081d731fc79d74ebb125214e08f7fb390d8a7d03be9a6"} Nov 29 08:05:43 crc kubenswrapper[4795]: I1129 08:05:43.022104 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.022085556 podStartE2EDuration="4.022085556s" podCreationTimestamp="2025-11-29 08:05:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:05:43.020775239 +0000 UTC m=+1588.996351019" watchObservedRunningTime="2025-11-29 08:05:43.022085556 +0000 UTC m=+1588.997661346" Nov 29 08:05:43 crc kubenswrapper[4795]: I1129 08:05:43.069030 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-8686d8994d-2mhmq" podStartSLOduration=4.068941319 podStartE2EDuration="4.068941319s" podCreationTimestamp="2025-11-29 08:05:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-29 08:05:43.044915636 +0000 UTC m=+1589.020491426" watchObservedRunningTime="2025-11-29 08:05:43.068941319 +0000 UTC m=+1589.044517119" Nov 29 08:05:44 crc kubenswrapper[4795]: I1129 08:05:44.030713 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-8686d8994d-2mhmq" Nov 29 08:05:44 crc kubenswrapper[4795]: I1129 08:05:44.197750 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5ccc5c4795-m6xrq" Nov 29 08:05:44 crc kubenswrapper[4795]: I1129 08:05:44.269569 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-pwdw9"] Nov 29 08:05:44 crc kubenswrapper[4795]: I1129 08:05:44.269949 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6ffb94d8ff-pwdw9" podUID="81ba1e26-17fa-4357-86d9-0e51b2aa3814" containerName="dnsmasq-dns" containerID="cri-o://6344c4883c0f70725fbb83383f4df27f6c389ed311ae125d81350835a91e8c93" gracePeriod=10 Nov 29 08:05:45 crc kubenswrapper[4795]: I1129 08:05:45.055267 4795 generic.go:334] "Generic (PLEG): container finished" podID="81ba1e26-17fa-4357-86d9-0e51b2aa3814" containerID="6344c4883c0f70725fbb83383f4df27f6c389ed311ae125d81350835a91e8c93" exitCode=0 Nov 29 08:05:45 crc kubenswrapper[4795]: I1129 08:05:45.057349 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-pwdw9" event={"ID":"81ba1e26-17fa-4357-86d9-0e51b2aa3814","Type":"ContainerDied","Data":"6344c4883c0f70725fbb83383f4df27f6c389ed311ae125d81350835a91e8c93"} Nov 29 08:05:47 crc kubenswrapper[4795]: I1129 08:05:47.118570 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-48c6t" event={"ID":"f63e39cf-970a-40df-a823-d1e60521e702","Type":"ContainerDied","Data":"a9ee5173c048d905ae9a3615e56ff08a24b3e21afa91353ea63ac1c93120003e"} Nov 29 08:05:47 crc kubenswrapper[4795]: I1129 08:05:47.119435 4795 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9ee5173c048d905ae9a3615e56ff08a24b3e21afa91353ea63ac1c93120003e" Nov 29 08:05:47 crc kubenswrapper[4795]: I1129 08:05:47.152409 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-48c6t" Nov 29 08:05:47 crc kubenswrapper[4795]: I1129 08:05:47.222939 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f63e39cf-970a-40df-a823-d1e60521e702-scripts\") pod \"f63e39cf-970a-40df-a823-d1e60521e702\" (UID: \"f63e39cf-970a-40df-a823-d1e60521e702\") " Nov 29 08:05:47 crc kubenswrapper[4795]: I1129 08:05:47.223079 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f63e39cf-970a-40df-a823-d1e60521e702-credential-keys\") pod \"f63e39cf-970a-40df-a823-d1e60521e702\" (UID: \"f63e39cf-970a-40df-a823-d1e60521e702\") " Nov 29 08:05:47 crc kubenswrapper[4795]: I1129 08:05:47.223118 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f63e39cf-970a-40df-a823-d1e60521e702-fernet-keys\") pod \"f63e39cf-970a-40df-a823-d1e60521e702\" (UID: \"f63e39cf-970a-40df-a823-d1e60521e702\") " Nov 29 08:05:47 crc kubenswrapper[4795]: I1129 08:05:47.223205 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f63e39cf-970a-40df-a823-d1e60521e702-combined-ca-bundle\") pod \"f63e39cf-970a-40df-a823-d1e60521e702\" (UID: \"f63e39cf-970a-40df-a823-d1e60521e702\") " Nov 29 08:05:47 crc kubenswrapper[4795]: I1129 08:05:47.223285 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f63e39cf-970a-40df-a823-d1e60521e702-config-data\") pod 
\"f63e39cf-970a-40df-a823-d1e60521e702\" (UID: \"f63e39cf-970a-40df-a823-d1e60521e702\") " Nov 29 08:05:47 crc kubenswrapper[4795]: I1129 08:05:47.223355 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jclnx\" (UniqueName: \"kubernetes.io/projected/f63e39cf-970a-40df-a823-d1e60521e702-kube-api-access-jclnx\") pod \"f63e39cf-970a-40df-a823-d1e60521e702\" (UID: \"f63e39cf-970a-40df-a823-d1e60521e702\") " Nov 29 08:05:47 crc kubenswrapper[4795]: I1129 08:05:47.229688 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f63e39cf-970a-40df-a823-d1e60521e702-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f63e39cf-970a-40df-a823-d1e60521e702" (UID: "f63e39cf-970a-40df-a823-d1e60521e702"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:05:47 crc kubenswrapper[4795]: I1129 08:05:47.230039 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f63e39cf-970a-40df-a823-d1e60521e702-kube-api-access-jclnx" (OuterVolumeSpecName: "kube-api-access-jclnx") pod "f63e39cf-970a-40df-a823-d1e60521e702" (UID: "f63e39cf-970a-40df-a823-d1e60521e702"). InnerVolumeSpecName "kube-api-access-jclnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:05:47 crc kubenswrapper[4795]: I1129 08:05:47.235292 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f63e39cf-970a-40df-a823-d1e60521e702-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "f63e39cf-970a-40df-a823-d1e60521e702" (UID: "f63e39cf-970a-40df-a823-d1e60521e702"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:05:47 crc kubenswrapper[4795]: I1129 08:05:47.236509 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f63e39cf-970a-40df-a823-d1e60521e702-scripts" (OuterVolumeSpecName: "scripts") pod "f63e39cf-970a-40df-a823-d1e60521e702" (UID: "f63e39cf-970a-40df-a823-d1e60521e702"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:05:47 crc kubenswrapper[4795]: I1129 08:05:47.307887 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f63e39cf-970a-40df-a823-d1e60521e702-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f63e39cf-970a-40df-a823-d1e60521e702" (UID: "f63e39cf-970a-40df-a823-d1e60521e702"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:05:47 crc kubenswrapper[4795]: I1129 08:05:47.313824 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f63e39cf-970a-40df-a823-d1e60521e702-config-data" (OuterVolumeSpecName: "config-data") pod "f63e39cf-970a-40df-a823-d1e60521e702" (UID: "f63e39cf-970a-40df-a823-d1e60521e702"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:05:47 crc kubenswrapper[4795]: I1129 08:05:47.327937 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jclnx\" (UniqueName: \"kubernetes.io/projected/f63e39cf-970a-40df-a823-d1e60521e702-kube-api-access-jclnx\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:47 crc kubenswrapper[4795]: I1129 08:05:47.327979 4795 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f63e39cf-970a-40df-a823-d1e60521e702-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:47 crc kubenswrapper[4795]: I1129 08:05:47.327992 4795 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f63e39cf-970a-40df-a823-d1e60521e702-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:47 crc kubenswrapper[4795]: I1129 08:05:47.328003 4795 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f63e39cf-970a-40df-a823-d1e60521e702-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:47 crc kubenswrapper[4795]: I1129 08:05:47.328013 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f63e39cf-970a-40df-a823-d1e60521e702-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:47 crc kubenswrapper[4795]: I1129 08:05:47.328023 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f63e39cf-970a-40df-a823-d1e60521e702-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:47 crc kubenswrapper[4795]: I1129 08:05:47.449366 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-pwdw9" Nov 29 08:05:47 crc kubenswrapper[4795]: I1129 08:05:47.638882 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81ba1e26-17fa-4357-86d9-0e51b2aa3814-ovsdbserver-nb\") pod \"81ba1e26-17fa-4357-86d9-0e51b2aa3814\" (UID: \"81ba1e26-17fa-4357-86d9-0e51b2aa3814\") " Nov 29 08:05:47 crc kubenswrapper[4795]: I1129 08:05:47.639013 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81ba1e26-17fa-4357-86d9-0e51b2aa3814-config\") pod \"81ba1e26-17fa-4357-86d9-0e51b2aa3814\" (UID: \"81ba1e26-17fa-4357-86d9-0e51b2aa3814\") " Nov 29 08:05:47 crc kubenswrapper[4795]: I1129 08:05:47.639045 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81ba1e26-17fa-4357-86d9-0e51b2aa3814-ovsdbserver-sb\") pod \"81ba1e26-17fa-4357-86d9-0e51b2aa3814\" (UID: \"81ba1e26-17fa-4357-86d9-0e51b2aa3814\") " Nov 29 08:05:47 crc kubenswrapper[4795]: I1129 08:05:47.639142 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6xh8\" (UniqueName: \"kubernetes.io/projected/81ba1e26-17fa-4357-86d9-0e51b2aa3814-kube-api-access-p6xh8\") pod \"81ba1e26-17fa-4357-86d9-0e51b2aa3814\" (UID: \"81ba1e26-17fa-4357-86d9-0e51b2aa3814\") " Nov 29 08:05:47 crc kubenswrapper[4795]: I1129 08:05:47.639198 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81ba1e26-17fa-4357-86d9-0e51b2aa3814-dns-svc\") pod \"81ba1e26-17fa-4357-86d9-0e51b2aa3814\" (UID: \"81ba1e26-17fa-4357-86d9-0e51b2aa3814\") " Nov 29 08:05:47 crc kubenswrapper[4795]: I1129 08:05:47.664424 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/81ba1e26-17fa-4357-86d9-0e51b2aa3814-kube-api-access-p6xh8" (OuterVolumeSpecName: "kube-api-access-p6xh8") pod "81ba1e26-17fa-4357-86d9-0e51b2aa3814" (UID: "81ba1e26-17fa-4357-86d9-0e51b2aa3814"). InnerVolumeSpecName "kube-api-access-p6xh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:05:47 crc kubenswrapper[4795]: I1129 08:05:47.708144 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81ba1e26-17fa-4357-86d9-0e51b2aa3814-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "81ba1e26-17fa-4357-86d9-0e51b2aa3814" (UID: "81ba1e26-17fa-4357-86d9-0e51b2aa3814"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:05:47 crc kubenswrapper[4795]: I1129 08:05:47.708704 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81ba1e26-17fa-4357-86d9-0e51b2aa3814-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "81ba1e26-17fa-4357-86d9-0e51b2aa3814" (UID: "81ba1e26-17fa-4357-86d9-0e51b2aa3814"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:05:47 crc kubenswrapper[4795]: I1129 08:05:47.712263 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81ba1e26-17fa-4357-86d9-0e51b2aa3814-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "81ba1e26-17fa-4357-86d9-0e51b2aa3814" (UID: "81ba1e26-17fa-4357-86d9-0e51b2aa3814"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:05:47 crc kubenswrapper[4795]: I1129 08:05:47.732448 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81ba1e26-17fa-4357-86d9-0e51b2aa3814-config" (OuterVolumeSpecName: "config") pod "81ba1e26-17fa-4357-86d9-0e51b2aa3814" (UID: "81ba1e26-17fa-4357-86d9-0e51b2aa3814"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:05:47 crc kubenswrapper[4795]: I1129 08:05:47.744977 4795 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81ba1e26-17fa-4357-86d9-0e51b2aa3814-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:47 crc kubenswrapper[4795]: I1129 08:05:47.745014 4795 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81ba1e26-17fa-4357-86d9-0e51b2aa3814-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:47 crc kubenswrapper[4795]: I1129 08:05:47.745026 4795 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81ba1e26-17fa-4357-86d9-0e51b2aa3814-config\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:47 crc kubenswrapper[4795]: I1129 08:05:47.745037 4795 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81ba1e26-17fa-4357-86d9-0e51b2aa3814-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:47 crc kubenswrapper[4795]: I1129 08:05:47.745050 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6xh8\" (UniqueName: \"kubernetes.io/projected/81ba1e26-17fa-4357-86d9-0e51b2aa3814-kube-api-access-p6xh8\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.139902 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-f2mxf" Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.162899 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-pwdw9" event={"ID":"81ba1e26-17fa-4357-86d9-0e51b2aa3814","Type":"ContainerDied","Data":"59611a1b99832e9d43b386c424c6bf8f80835da462d97aa2f658090ba0d6b07b"} Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.163285 4795 scope.go:117] "RemoveContainer" 
containerID="6344c4883c0f70725fbb83383f4df27f6c389ed311ae125d81350835a91e8c93" Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.162983 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-pwdw9" Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.178334 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-48c6t" Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.180724 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6dbced2-2ca2-4189-aad3-7a872ab6209c","Type":"ContainerStarted","Data":"5e54a4fa95674f1f2796053bdb12eb11ba4e4e51801a366a7d261847a2d43c38"} Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.286820 4795 scope.go:117] "RemoveContainer" containerID="2bc3e959bf62522b84702513595a712ff9d3697feac1fe8438fc2c1fc8f9ea4c" Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.297469 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f2mxf" Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.380874 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-pwdw9"] Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.418245 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-pwdw9"] Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.496695 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7db455fcf4-bs6l9"] Nov 29 08:05:48 crc kubenswrapper[4795]: E1129 08:05:48.497280 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f63e39cf-970a-40df-a823-d1e60521e702" containerName="keystone-bootstrap" Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.497304 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="f63e39cf-970a-40df-a823-d1e60521e702" 
containerName="keystone-bootstrap" Nov 29 08:05:48 crc kubenswrapper[4795]: E1129 08:05:48.497321 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ba1e26-17fa-4357-86d9-0e51b2aa3814" containerName="dnsmasq-dns" Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.497330 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ba1e26-17fa-4357-86d9-0e51b2aa3814" containerName="dnsmasq-dns" Nov 29 08:05:48 crc kubenswrapper[4795]: E1129 08:05:48.497342 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ba1e26-17fa-4357-86d9-0e51b2aa3814" containerName="init" Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.497350 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ba1e26-17fa-4357-86d9-0e51b2aa3814" containerName="init" Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.497758 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="f63e39cf-970a-40df-a823-d1e60521e702" containerName="keystone-bootstrap" Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.497786 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="81ba1e26-17fa-4357-86d9-0e51b2aa3814" containerName="dnsmasq-dns" Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.499051 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7db455fcf4-bs6l9" Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.502547 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.502629 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-j4r47" Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.502726 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.502865 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.503045 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.503189 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.530143 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f2mxf"] Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.549128 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7db455fcf4-bs6l9"] Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.579160 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96vpn\" (UniqueName: \"kubernetes.io/projected/f0a7d947-7e48-449a-a691-63de87afc9c4-kube-api-access-96vpn\") pod \"keystone-7db455fcf4-bs6l9\" (UID: \"f0a7d947-7e48-449a-a691-63de87afc9c4\") " pod="openstack/keystone-7db455fcf4-bs6l9" Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.579252 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/f0a7d947-7e48-449a-a691-63de87afc9c4-combined-ca-bundle\") pod \"keystone-7db455fcf4-bs6l9\" (UID: \"f0a7d947-7e48-449a-a691-63de87afc9c4\") " pod="openstack/keystone-7db455fcf4-bs6l9"
Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.579344 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0a7d947-7e48-449a-a691-63de87afc9c4-scripts\") pod \"keystone-7db455fcf4-bs6l9\" (UID: \"f0a7d947-7e48-449a-a691-63de87afc9c4\") " pod="openstack/keystone-7db455fcf4-bs6l9"
Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.579476 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0a7d947-7e48-449a-a691-63de87afc9c4-config-data\") pod \"keystone-7db455fcf4-bs6l9\" (UID: \"f0a7d947-7e48-449a-a691-63de87afc9c4\") " pod="openstack/keystone-7db455fcf4-bs6l9"
Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.579528 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f0a7d947-7e48-449a-a691-63de87afc9c4-credential-keys\") pod \"keystone-7db455fcf4-bs6l9\" (UID: \"f0a7d947-7e48-449a-a691-63de87afc9c4\") " pod="openstack/keystone-7db455fcf4-bs6l9"
Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.579639 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f0a7d947-7e48-449a-a691-63de87afc9c4-fernet-keys\") pod \"keystone-7db455fcf4-bs6l9\" (UID: \"f0a7d947-7e48-449a-a691-63de87afc9c4\") " pod="openstack/keystone-7db455fcf4-bs6l9"
Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.579691 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0a7d947-7e48-449a-a691-63de87afc9c4-internal-tls-certs\") pod \"keystone-7db455fcf4-bs6l9\" (UID: \"f0a7d947-7e48-449a-a691-63de87afc9c4\") " pod="openstack/keystone-7db455fcf4-bs6l9"
Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.579729 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0a7d947-7e48-449a-a691-63de87afc9c4-public-tls-certs\") pod \"keystone-7db455fcf4-bs6l9\" (UID: \"f0a7d947-7e48-449a-a691-63de87afc9c4\") " pod="openstack/keystone-7db455fcf4-bs6l9"
Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.682068 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f0a7d947-7e48-449a-a691-63de87afc9c4-fernet-keys\") pod \"keystone-7db455fcf4-bs6l9\" (UID: \"f0a7d947-7e48-449a-a691-63de87afc9c4\") " pod="openstack/keystone-7db455fcf4-bs6l9"
Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.682170 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0a7d947-7e48-449a-a691-63de87afc9c4-internal-tls-certs\") pod \"keystone-7db455fcf4-bs6l9\" (UID: \"f0a7d947-7e48-449a-a691-63de87afc9c4\") " pod="openstack/keystone-7db455fcf4-bs6l9"
Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.682209 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0a7d947-7e48-449a-a691-63de87afc9c4-public-tls-certs\") pod \"keystone-7db455fcf4-bs6l9\" (UID: \"f0a7d947-7e48-449a-a691-63de87afc9c4\") " pod="openstack/keystone-7db455fcf4-bs6l9"
Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.682326 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96vpn\" (UniqueName: \"kubernetes.io/projected/f0a7d947-7e48-449a-a691-63de87afc9c4-kube-api-access-96vpn\") pod \"keystone-7db455fcf4-bs6l9\" (UID: \"f0a7d947-7e48-449a-a691-63de87afc9c4\") " pod="openstack/keystone-7db455fcf4-bs6l9"
Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.682367 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0a7d947-7e48-449a-a691-63de87afc9c4-combined-ca-bundle\") pod \"keystone-7db455fcf4-bs6l9\" (UID: \"f0a7d947-7e48-449a-a691-63de87afc9c4\") " pod="openstack/keystone-7db455fcf4-bs6l9"
Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.682446 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0a7d947-7e48-449a-a691-63de87afc9c4-scripts\") pod \"keystone-7db455fcf4-bs6l9\" (UID: \"f0a7d947-7e48-449a-a691-63de87afc9c4\") " pod="openstack/keystone-7db455fcf4-bs6l9"
Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.682536 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0a7d947-7e48-449a-a691-63de87afc9c4-config-data\") pod \"keystone-7db455fcf4-bs6l9\" (UID: \"f0a7d947-7e48-449a-a691-63de87afc9c4\") " pod="openstack/keystone-7db455fcf4-bs6l9"
Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.682639 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f0a7d947-7e48-449a-a691-63de87afc9c4-credential-keys\") pod \"keystone-7db455fcf4-bs6l9\" (UID: \"f0a7d947-7e48-449a-a691-63de87afc9c4\") " pod="openstack/keystone-7db455fcf4-bs6l9"
Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.689460 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0a7d947-7e48-449a-a691-63de87afc9c4-scripts\") pod \"keystone-7db455fcf4-bs6l9\" (UID: \"f0a7d947-7e48-449a-a691-63de87afc9c4\") " pod="openstack/keystone-7db455fcf4-bs6l9"
Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.689620 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f0a7d947-7e48-449a-a691-63de87afc9c4-credential-keys\") pod \"keystone-7db455fcf4-bs6l9\" (UID: \"f0a7d947-7e48-449a-a691-63de87afc9c4\") " pod="openstack/keystone-7db455fcf4-bs6l9"
Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.690224 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0a7d947-7e48-449a-a691-63de87afc9c4-internal-tls-certs\") pod \"keystone-7db455fcf4-bs6l9\" (UID: \"f0a7d947-7e48-449a-a691-63de87afc9c4\") " pod="openstack/keystone-7db455fcf4-bs6l9"
Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.691568 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0a7d947-7e48-449a-a691-63de87afc9c4-combined-ca-bundle\") pod \"keystone-7db455fcf4-bs6l9\" (UID: \"f0a7d947-7e48-449a-a691-63de87afc9c4\") " pod="openstack/keystone-7db455fcf4-bs6l9"
Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.691614 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0a7d947-7e48-449a-a691-63de87afc9c4-public-tls-certs\") pod \"keystone-7db455fcf4-bs6l9\" (UID: \"f0a7d947-7e48-449a-a691-63de87afc9c4\") " pod="openstack/keystone-7db455fcf4-bs6l9"
Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.692075 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0a7d947-7e48-449a-a691-63de87afc9c4-config-data\") pod \"keystone-7db455fcf4-bs6l9\" (UID: \"f0a7d947-7e48-449a-a691-63de87afc9c4\") " pod="openstack/keystone-7db455fcf4-bs6l9"
Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.692201 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f0a7d947-7e48-449a-a691-63de87afc9c4-fernet-keys\") pod \"keystone-7db455fcf4-bs6l9\" (UID: \"f0a7d947-7e48-449a-a691-63de87afc9c4\") " pod="openstack/keystone-7db455fcf4-bs6l9"
Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.706335 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96vpn\" (UniqueName: \"kubernetes.io/projected/f0a7d947-7e48-449a-a691-63de87afc9c4-kube-api-access-96vpn\") pod \"keystone-7db455fcf4-bs6l9\" (UID: \"f0a7d947-7e48-449a-a691-63de87afc9c4\") " pod="openstack/keystone-7db455fcf4-bs6l9"
Nov 29 08:05:48 crc kubenswrapper[4795]: I1129 08:05:48.835352 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7db455fcf4-bs6l9"
Nov 29 08:05:49 crc kubenswrapper[4795]: I1129 08:05:49.254125 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4cc813d1-4929-48c8-9268-49a4b96562ac","Type":"ContainerStarted","Data":"6baeb754dc6c6709913872a997c4f4e688976a0e25ef7759f36ad0f0f3284123"}
Nov 29 08:05:49 crc kubenswrapper[4795]: I1129 08:05:49.352603 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=10.352558671 podStartE2EDuration="10.352558671s" podCreationTimestamp="2025-11-29 08:05:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:05:49.29132104 +0000 UTC m=+1595.266896830" watchObservedRunningTime="2025-11-29 08:05:49.352558671 +0000 UTC m=+1595.328134461"
Nov 29 08:05:49 crc kubenswrapper[4795]: I1129 08:05:49.560073 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Nov 29 08:05:49 crc kubenswrapper[4795]: I1129 08:05:49.560522 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Nov 29 08:05:49 crc kubenswrapper[4795]: I1129 08:05:49.623255 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Nov 29 08:05:49 crc kubenswrapper[4795]: I1129 08:05:49.670457 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Nov 29 08:05:49 crc kubenswrapper[4795]: I1129 08:05:49.694390 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7db455fcf4-bs6l9"]
Nov 29 08:05:49 crc kubenswrapper[4795]: W1129 08:05:49.779265 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0a7d947_7e48_449a_a691_63de87afc9c4.slice/crio-ddc09f49c1e8fc2537cea3709d8133ac0935ad514567ec42ee2244752ecfaf1b WatchSource:0}: Error finding container ddc09f49c1e8fc2537cea3709d8133ac0935ad514567ec42ee2244752ecfaf1b: Status 404 returned error can't find the container with id ddc09f49c1e8fc2537cea3709d8133ac0935ad514567ec42ee2244752ecfaf1b
Nov 29 08:05:49 crc kubenswrapper[4795]: I1129 08:05:49.880477 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Nov 29 08:05:49 crc kubenswrapper[4795]: I1129 08:05:49.883727 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Nov 29 08:05:49 crc kubenswrapper[4795]: I1129 08:05:49.967019 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Nov 29 08:05:49 crc kubenswrapper[4795]: I1129 08:05:49.967569 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Nov 29 08:05:50 crc kubenswrapper[4795]: I1129 08:05:50.276833 4795 generic.go:334] "Generic (PLEG): container finished" podID="f6dff0cb-a174-4227-ad82-21a12aee68f5" containerID="d6b9b5cfd126d42217de35750ef89b86614c5ee2c94ea412aacbe1440b81ae8d" exitCode=0
Nov 29 08:05:50 crc kubenswrapper[4795]: I1129 08:05:50.281264 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-f2mxf" podUID="b5c53f2b-bb1e-40be-9ca4-40b2c63545e4" containerName="registry-server" containerID="cri-o://a6fc80f3e705e6564150f04e1b69fd985697cbad439c8e4e18fd23e169f65450" gracePeriod=2
Nov 29 08:05:50 crc kubenswrapper[4795]: I1129 08:05:50.316045 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7db455fcf4-bs6l9" podStartSLOduration=2.316023958 podStartE2EDuration="2.316023958s" podCreationTimestamp="2025-11-29 08:05:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:05:50.309099491 +0000 UTC m=+1596.284675281" watchObservedRunningTime="2025-11-29 08:05:50.316023958 +0000 UTC m=+1596.291599748"
Nov 29 08:05:50 crc kubenswrapper[4795]: I1129 08:05:50.342794 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81ba1e26-17fa-4357-86d9-0e51b2aa3814" path="/var/lib/kubelet/pods/81ba1e26-17fa-4357-86d9-0e51b2aa3814/volumes"
Nov 29 08:05:50 crc kubenswrapper[4795]: I1129 08:05:50.344210 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fdl5k" event={"ID":"f6dff0cb-a174-4227-ad82-21a12aee68f5","Type":"ContainerDied","Data":"d6b9b5cfd126d42217de35750ef89b86614c5ee2c94ea412aacbe1440b81ae8d"}
Nov 29 08:05:50 crc kubenswrapper[4795]: I1129 08:05:50.344393 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7db455fcf4-bs6l9" event={"ID":"f0a7d947-7e48-449a-a691-63de87afc9c4","Type":"ContainerStarted","Data":"ee45c07a59bc531842085104dc1f8b0376befc1a59a2cdc7fbf55e88f2062caa"}
Nov 29 08:05:50 crc kubenswrapper[4795]: I1129 08:05:50.344525 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7db455fcf4-bs6l9" event={"ID":"f0a7d947-7e48-449a-a691-63de87afc9c4","Type":"ContainerStarted","Data":"ddc09f49c1e8fc2537cea3709d8133ac0935ad514567ec42ee2244752ecfaf1b"}
Nov 29 08:05:50 crc kubenswrapper[4795]: I1129 08:05:50.344726 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Nov 29 08:05:50 crc kubenswrapper[4795]: I1129 08:05:50.344782 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Nov 29 08:05:50 crc kubenswrapper[4795]: I1129 08:05:50.344798 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7db455fcf4-bs6l9"
Nov 29 08:05:50 crc kubenswrapper[4795]: I1129 08:05:50.344809 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Nov 29 08:05:50 crc kubenswrapper[4795]: I1129 08:05:50.344817 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Nov 29 08:05:51 crc kubenswrapper[4795]: I1129 08:05:51.122083 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f2mxf"
Nov 29 08:05:51 crc kubenswrapper[4795]: I1129 08:05:51.154174 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6wtd\" (UniqueName: \"kubernetes.io/projected/b5c53f2b-bb1e-40be-9ca4-40b2c63545e4-kube-api-access-b6wtd\") pod \"b5c53f2b-bb1e-40be-9ca4-40b2c63545e4\" (UID: \"b5c53f2b-bb1e-40be-9ca4-40b2c63545e4\") "
Nov 29 08:05:51 crc kubenswrapper[4795]: I1129 08:05:51.154376 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5c53f2b-bb1e-40be-9ca4-40b2c63545e4-catalog-content\") pod \"b5c53f2b-bb1e-40be-9ca4-40b2c63545e4\" (UID: \"b5c53f2b-bb1e-40be-9ca4-40b2c63545e4\") "
Nov 29 08:05:51 crc kubenswrapper[4795]: I1129 08:05:51.154754 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5c53f2b-bb1e-40be-9ca4-40b2c63545e4-utilities\") pod \"b5c53f2b-bb1e-40be-9ca4-40b2c63545e4\" (UID: \"b5c53f2b-bb1e-40be-9ca4-40b2c63545e4\") "
Nov 29 08:05:51 crc kubenswrapper[4795]: I1129 08:05:51.156044 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5c53f2b-bb1e-40be-9ca4-40b2c63545e4-utilities" (OuterVolumeSpecName: "utilities") pod "b5c53f2b-bb1e-40be-9ca4-40b2c63545e4" (UID: "b5c53f2b-bb1e-40be-9ca4-40b2c63545e4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 08:05:51 crc kubenswrapper[4795]: I1129 08:05:51.177936 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5c53f2b-bb1e-40be-9ca4-40b2c63545e4-kube-api-access-b6wtd" (OuterVolumeSpecName: "kube-api-access-b6wtd") pod "b5c53f2b-bb1e-40be-9ca4-40b2c63545e4" (UID: "b5c53f2b-bb1e-40be-9ca4-40b2c63545e4"). InnerVolumeSpecName "kube-api-access-b6wtd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 08:05:51 crc kubenswrapper[4795]: I1129 08:05:51.205012 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5c53f2b-bb1e-40be-9ca4-40b2c63545e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5c53f2b-bb1e-40be-9ca4-40b2c63545e4" (UID: "b5c53f2b-bb1e-40be-9ca4-40b2c63545e4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 08:05:51 crc kubenswrapper[4795]: I1129 08:05:51.216352 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8b2xv"
Nov 29 08:05:51 crc kubenswrapper[4795]: I1129 08:05:51.270042 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5c53f2b-bb1e-40be-9ca4-40b2c63545e4-utilities\") on node \"crc\" DevicePath \"\""
Nov 29 08:05:51 crc kubenswrapper[4795]: I1129 08:05:51.270088 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6wtd\" (UniqueName: \"kubernetes.io/projected/b5c53f2b-bb1e-40be-9ca4-40b2c63545e4-kube-api-access-b6wtd\") on node \"crc\" DevicePath \"\""
Nov 29 08:05:51 crc kubenswrapper[4795]: I1129 08:05:51.270101 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5c53f2b-bb1e-40be-9ca4-40b2c63545e4-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 29 08:05:51 crc kubenswrapper[4795]: I1129 08:05:51.296446 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8b2xv"
Nov 29 08:05:51 crc kubenswrapper[4795]: I1129 08:05:51.328204 4795 generic.go:334] "Generic (PLEG): container finished" podID="b5c53f2b-bb1e-40be-9ca4-40b2c63545e4" containerID="a6fc80f3e705e6564150f04e1b69fd985697cbad439c8e4e18fd23e169f65450" exitCode=0
Nov 29 08:05:51 crc kubenswrapper[4795]: I1129 08:05:51.328522 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f2mxf"
Nov 29 08:05:51 crc kubenswrapper[4795]: I1129 08:05:51.332075 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f2mxf" event={"ID":"b5c53f2b-bb1e-40be-9ca4-40b2c63545e4","Type":"ContainerDied","Data":"a6fc80f3e705e6564150f04e1b69fd985697cbad439c8e4e18fd23e169f65450"}
Nov 29 08:05:51 crc kubenswrapper[4795]: I1129 08:05:51.332143 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f2mxf" event={"ID":"b5c53f2b-bb1e-40be-9ca4-40b2c63545e4","Type":"ContainerDied","Data":"8283ce3d7995553c4ac3f7b58850e186affdb1f02b5ac9f8022266532e0b625d"}
Nov 29 08:05:51 crc kubenswrapper[4795]: I1129 08:05:51.332167 4795 scope.go:117] "RemoveContainer" containerID="a6fc80f3e705e6564150f04e1b69fd985697cbad439c8e4e18fd23e169f65450"
Nov 29 08:05:51 crc kubenswrapper[4795]: I1129 08:05:51.382358 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f2mxf"]
Nov 29 08:05:51 crc kubenswrapper[4795]: I1129 08:05:51.400444 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-f2mxf"]
Nov 29 08:05:51 crc kubenswrapper[4795]: I1129 08:05:51.417241 4795 scope.go:117] "RemoveContainer" containerID="c2e0a8a3dd7cd0f05fa15fe7691c2a8ce25a36c2cd81b4ebbc47a85ce8f18aff"
Nov 29 08:05:51 crc kubenswrapper[4795]: I1129 08:05:51.449894 4795 scope.go:117] "RemoveContainer" containerID="6fbb270eea863dd4902d649d7b81983afb166cd4e4daa0c5fbe9a60c56cf963c"
Nov 29 08:05:51 crc kubenswrapper[4795]: I1129 08:05:51.549767 4795 scope.go:117] "RemoveContainer" containerID="a6fc80f3e705e6564150f04e1b69fd985697cbad439c8e4e18fd23e169f65450"
Nov 29 08:05:51 crc kubenswrapper[4795]: E1129 08:05:51.553009 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6fc80f3e705e6564150f04e1b69fd985697cbad439c8e4e18fd23e169f65450\": container with ID starting with a6fc80f3e705e6564150f04e1b69fd985697cbad439c8e4e18fd23e169f65450 not found: ID does not exist" containerID="a6fc80f3e705e6564150f04e1b69fd985697cbad439c8e4e18fd23e169f65450"
Nov 29 08:05:51 crc kubenswrapper[4795]: I1129 08:05:51.553048 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6fc80f3e705e6564150f04e1b69fd985697cbad439c8e4e18fd23e169f65450"} err="failed to get container status \"a6fc80f3e705e6564150f04e1b69fd985697cbad439c8e4e18fd23e169f65450\": rpc error: code = NotFound desc = could not find container \"a6fc80f3e705e6564150f04e1b69fd985697cbad439c8e4e18fd23e169f65450\": container with ID starting with a6fc80f3e705e6564150f04e1b69fd985697cbad439c8e4e18fd23e169f65450 not found: ID does not exist"
Nov 29 08:05:51 crc kubenswrapper[4795]: I1129 08:05:51.553071 4795 scope.go:117] "RemoveContainer" containerID="c2e0a8a3dd7cd0f05fa15fe7691c2a8ce25a36c2cd81b4ebbc47a85ce8f18aff"
Nov 29 08:05:51 crc kubenswrapper[4795]: E1129 08:05:51.553566 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2e0a8a3dd7cd0f05fa15fe7691c2a8ce25a36c2cd81b4ebbc47a85ce8f18aff\": container with ID starting with c2e0a8a3dd7cd0f05fa15fe7691c2a8ce25a36c2cd81b4ebbc47a85ce8f18aff not found: ID does not exist" containerID="c2e0a8a3dd7cd0f05fa15fe7691c2a8ce25a36c2cd81b4ebbc47a85ce8f18aff"
Nov 29 08:05:51 crc kubenswrapper[4795]: I1129 08:05:51.553659 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2e0a8a3dd7cd0f05fa15fe7691c2a8ce25a36c2cd81b4ebbc47a85ce8f18aff"} err="failed to get container status \"c2e0a8a3dd7cd0f05fa15fe7691c2a8ce25a36c2cd81b4ebbc47a85ce8f18aff\": rpc error: code = NotFound desc = could not find container \"c2e0a8a3dd7cd0f05fa15fe7691c2a8ce25a36c2cd81b4ebbc47a85ce8f18aff\": container with ID starting with c2e0a8a3dd7cd0f05fa15fe7691c2a8ce25a36c2cd81b4ebbc47a85ce8f18aff not found: ID does not exist"
Nov 29 08:05:51 crc kubenswrapper[4795]: I1129 08:05:51.553715 4795 scope.go:117] "RemoveContainer" containerID="6fbb270eea863dd4902d649d7b81983afb166cd4e4daa0c5fbe9a60c56cf963c"
Nov 29 08:05:51 crc kubenswrapper[4795]: E1129 08:05:51.554132 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fbb270eea863dd4902d649d7b81983afb166cd4e4daa0c5fbe9a60c56cf963c\": container with ID starting with 6fbb270eea863dd4902d649d7b81983afb166cd4e4daa0c5fbe9a60c56cf963c not found: ID does not exist" containerID="6fbb270eea863dd4902d649d7b81983afb166cd4e4daa0c5fbe9a60c56cf963c"
Nov 29 08:05:51 crc kubenswrapper[4795]: I1129 08:05:51.554209 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fbb270eea863dd4902d649d7b81983afb166cd4e4daa0c5fbe9a60c56cf963c"} err="failed to get container status \"6fbb270eea863dd4902d649d7b81983afb166cd4e4daa0c5fbe9a60c56cf963c\": rpc error: code = NotFound desc = could not find container \"6fbb270eea863dd4902d649d7b81983afb166cd4e4daa0c5fbe9a60c56cf963c\": container with ID starting with 6fbb270eea863dd4902d649d7b81983afb166cd4e4daa0c5fbe9a60c56cf963c not found: ID does not exist"
Nov 29 08:05:51 crc kubenswrapper[4795]: I1129 08:05:51.736302 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-fdl5k"
Nov 29 08:05:51 crc kubenswrapper[4795]: I1129 08:05:51.817641 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f6dff0cb-a174-4227-ad82-21a12aee68f5-db-sync-config-data\") pod \"f6dff0cb-a174-4227-ad82-21a12aee68f5\" (UID: \"f6dff0cb-a174-4227-ad82-21a12aee68f5\") "
Nov 29 08:05:51 crc kubenswrapper[4795]: I1129 08:05:51.817729 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpfp8\" (UniqueName: \"kubernetes.io/projected/f6dff0cb-a174-4227-ad82-21a12aee68f5-kube-api-access-xpfp8\") pod \"f6dff0cb-a174-4227-ad82-21a12aee68f5\" (UID: \"f6dff0cb-a174-4227-ad82-21a12aee68f5\") "
Nov 29 08:05:51 crc kubenswrapper[4795]: I1129 08:05:51.817909 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6dff0cb-a174-4227-ad82-21a12aee68f5-combined-ca-bundle\") pod \"f6dff0cb-a174-4227-ad82-21a12aee68f5\" (UID: \"f6dff0cb-a174-4227-ad82-21a12aee68f5\") "
Nov 29 08:05:51 crc kubenswrapper[4795]: I1129 08:05:51.827860 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6dff0cb-a174-4227-ad82-21a12aee68f5-kube-api-access-xpfp8" (OuterVolumeSpecName: "kube-api-access-xpfp8") pod "f6dff0cb-a174-4227-ad82-21a12aee68f5" (UID: "f6dff0cb-a174-4227-ad82-21a12aee68f5"). InnerVolumeSpecName "kube-api-access-xpfp8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 08:05:51 crc kubenswrapper[4795]: I1129 08:05:51.829866 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6dff0cb-a174-4227-ad82-21a12aee68f5-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f6dff0cb-a174-4227-ad82-21a12aee68f5" (UID: "f6dff0cb-a174-4227-ad82-21a12aee68f5"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 08:05:51 crc kubenswrapper[4795]: I1129 08:05:51.860725 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6dff0cb-a174-4227-ad82-21a12aee68f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f6dff0cb-a174-4227-ad82-21a12aee68f5" (UID: "f6dff0cb-a174-4227-ad82-21a12aee68f5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 08:05:51 crc kubenswrapper[4795]: I1129 08:05:51.922096 4795 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f6dff0cb-a174-4227-ad82-21a12aee68f5-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Nov 29 08:05:51 crc kubenswrapper[4795]: I1129 08:05:51.922160 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpfp8\" (UniqueName: \"kubernetes.io/projected/f6dff0cb-a174-4227-ad82-21a12aee68f5-kube-api-access-xpfp8\") on node \"crc\" DevicePath \"\""
Nov 29 08:05:51 crc kubenswrapper[4795]: I1129 08:05:51.922176 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6dff0cb-a174-4227-ad82-21a12aee68f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.214281 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8b2xv"]
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.294783 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5c53f2b-bb1e-40be-9ca4-40b2c63545e4" path="/var/lib/kubelet/pods/b5c53f2b-bb1e-40be-9ca4-40b2c63545e4/volumes"
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.346865 4795 generic.go:334] "Generic (PLEG): container finished" podID="59666d8f-35e8-4c8a-887f-0c23881547ec" containerID="6626ac55052d6cec6be4cab26b79b381bbc946d286f0273e78ae2de2ebb94a04" exitCode=0
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.346930 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-85xp6" event={"ID":"59666d8f-35e8-4c8a-887f-0c23881547ec","Type":"ContainerDied","Data":"6626ac55052d6cec6be4cab26b79b381bbc946d286f0273e78ae2de2ebb94a04"}
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.350955 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8b2xv" podUID="19557359-2cdd-493b-a9ad-0770bf37206e" containerName="registry-server" containerID="cri-o://5ebe0a03b9d7ffc369722758e67d8d71695fb3dea054e5cf4126807196f14a47" gracePeriod=2
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.351097 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-fdl5k"
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.351141 4795 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.351155 4795 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.351182 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fdl5k" event={"ID":"f6dff0cb-a174-4227-ad82-21a12aee68f5","Type":"ContainerDied","Data":"de90d2d3e8ffb89d827ea31e045c69e829a1d31b2de20afc7ea74625c7525347"}
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.351200 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de90d2d3e8ffb89d827ea31e045c69e829a1d31b2de20afc7ea74625c7525347"
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.351927 4795 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.555344 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6875d4f667-x5hjc"]
Nov 29 08:05:52 crc kubenswrapper[4795]: E1129 08:05:52.556244 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6dff0cb-a174-4227-ad82-21a12aee68f5" containerName="barbican-db-sync"
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.556331 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6dff0cb-a174-4227-ad82-21a12aee68f5" containerName="barbican-db-sync"
Nov 29 08:05:52 crc kubenswrapper[4795]: E1129 08:05:52.557817 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5c53f2b-bb1e-40be-9ca4-40b2c63545e4" containerName="extract-content"
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.573719 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5c53f2b-bb1e-40be-9ca4-40b2c63545e4" containerName="extract-content"
Nov 29 08:05:52 crc kubenswrapper[4795]: E1129 08:05:52.574067 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5c53f2b-bb1e-40be-9ca4-40b2c63545e4" containerName="extract-utilities"
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.574380 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5c53f2b-bb1e-40be-9ca4-40b2c63545e4" containerName="extract-utilities"
Nov 29 08:05:52 crc kubenswrapper[4795]: E1129 08:05:52.574519 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5c53f2b-bb1e-40be-9ca4-40b2c63545e4" containerName="registry-server"
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.574612 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5c53f2b-bb1e-40be-9ca4-40b2c63545e4" containerName="registry-server"
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.575303 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5c53f2b-bb1e-40be-9ca4-40b2c63545e4" containerName="registry-server"
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.575401 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6dff0cb-a174-4227-ad82-21a12aee68f5" containerName="barbican-db-sync"
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.577817 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6875d4f667-x5hjc"]
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.580355 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6875d4f667-x5hjc"
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.586505 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.586777 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.590914 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-zw9w5"
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.618661 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-65899db58b-mz594"]
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.620623 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-65899db58b-mz594"
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.625061 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.682770 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-65899db58b-mz594"]
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.730676 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-d8lng"]
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.732782 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-d8lng"
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.746914 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-d8lng"]
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.758514 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1bca86ff-c24c-4d08-b7ed-be2433fe9735-config-data-custom\") pod \"barbican-worker-6875d4f667-x5hjc\" (UID: \"1bca86ff-c24c-4d08-b7ed-be2433fe9735\") " pod="openstack/barbican-worker-6875d4f667-x5hjc"
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.758574 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c64caf8-e57d-495f-985c-844edea0d146-config-data\") pod \"barbican-keystone-listener-65899db58b-mz594\" (UID: \"0c64caf8-e57d-495f-985c-844edea0d146\") " pod="openstack/barbican-keystone-listener-65899db58b-mz594"
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.758673 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nl6w\" (UniqueName: \"kubernetes.io/projected/0c64caf8-e57d-495f-985c-844edea0d146-kube-api-access-5nl6w\") pod \"barbican-keystone-listener-65899db58b-mz594\" (UID: \"0c64caf8-e57d-495f-985c-844edea0d146\") " pod="openstack/barbican-keystone-listener-65899db58b-mz594"
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.758716 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkkd7\" (UniqueName: \"kubernetes.io/projected/1bca86ff-c24c-4d08-b7ed-be2433fe9735-kube-api-access-kkkd7\") pod \"barbican-worker-6875d4f667-x5hjc\" (UID: \"1bca86ff-c24c-4d08-b7ed-be2433fe9735\") " pod="openstack/barbican-worker-6875d4f667-x5hjc"
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.758779 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c64caf8-e57d-495f-985c-844edea0d146-combined-ca-bundle\") pod \"barbican-keystone-listener-65899db58b-mz594\" (UID: \"0c64caf8-e57d-495f-985c-844edea0d146\") " pod="openstack/barbican-keystone-listener-65899db58b-mz594"
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.758920 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c64caf8-e57d-495f-985c-844edea0d146-config-data-custom\") pod \"barbican-keystone-listener-65899db58b-mz594\" (UID: \"0c64caf8-e57d-495f-985c-844edea0d146\") " pod="openstack/barbican-keystone-listener-65899db58b-mz594"
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.758957 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bca86ff-c24c-4d08-b7ed-be2433fe9735-logs\") pod \"barbican-worker-6875d4f667-x5hjc\" (UID: \"1bca86ff-c24c-4d08-b7ed-be2433fe9735\") " pod="openstack/barbican-worker-6875d4f667-x5hjc"
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.758986 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bca86ff-c24c-4d08-b7ed-be2433fe9735-combined-ca-bundle\") pod \"barbican-worker-6875d4f667-x5hjc\" (UID: \"1bca86ff-c24c-4d08-b7ed-be2433fe9735\") " pod="openstack/barbican-worker-6875d4f667-x5hjc"
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.759088 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bca86ff-c24c-4d08-b7ed-be2433fe9735-config-data\") pod \"barbican-worker-6875d4f667-x5hjc\" (UID: \"1bca86ff-c24c-4d08-b7ed-be2433fe9735\") " pod="openstack/barbican-worker-6875d4f667-x5hjc"
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.759111 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c64caf8-e57d-495f-985c-844edea0d146-logs\") pod \"barbican-keystone-listener-65899db58b-mz594\" (UID: \"0c64caf8-e57d-495f-985c-844edea0d146\") " pod="openstack/barbican-keystone-listener-65899db58b-mz594"
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.815528 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5c8566458-jwt4m"]
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.817386 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5c8566458-jwt4m"
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.820249 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.823915 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5c8566458-jwt4m"]
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.860934 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c64caf8-e57d-495f-985c-844edea0d146-combined-ca-bundle\") pod \"barbican-keystone-listener-65899db58b-mz594\" (UID: \"0c64caf8-e57d-495f-985c-844edea0d146\") " pod="openstack/barbican-keystone-listener-65899db58b-mz594"
Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.861021 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2f892ddb-2efd-437e-9fae-9f7119f17847-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-d8lng\" (UID: \"2f892ddb-2efd-437e-9fae-9f7119f17847\") " 
pod="openstack/dnsmasq-dns-688c87cc99-d8lng" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.861053 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f892ddb-2efd-437e-9fae-9f7119f17847-config\") pod \"dnsmasq-dns-688c87cc99-d8lng\" (UID: \"2f892ddb-2efd-437e-9fae-9f7119f17847\") " pod="openstack/dnsmasq-dns-688c87cc99-d8lng" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.861079 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f892ddb-2efd-437e-9fae-9f7119f17847-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-d8lng\" (UID: \"2f892ddb-2efd-437e-9fae-9f7119f17847\") " pod="openstack/dnsmasq-dns-688c87cc99-d8lng" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.861127 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f892ddb-2efd-437e-9fae-9f7119f17847-dns-svc\") pod \"dnsmasq-dns-688c87cc99-d8lng\" (UID: \"2f892ddb-2efd-437e-9fae-9f7119f17847\") " pod="openstack/dnsmasq-dns-688c87cc99-d8lng" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.861148 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfm7b\" (UniqueName: \"kubernetes.io/projected/2f892ddb-2efd-437e-9fae-9f7119f17847-kube-api-access-sfm7b\") pod \"dnsmasq-dns-688c87cc99-d8lng\" (UID: \"2f892ddb-2efd-437e-9fae-9f7119f17847\") " pod="openstack/dnsmasq-dns-688c87cc99-d8lng" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.861170 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c64caf8-e57d-495f-985c-844edea0d146-config-data-custom\") pod \"barbican-keystone-listener-65899db58b-mz594\" (UID: 
\"0c64caf8-e57d-495f-985c-844edea0d146\") " pod="openstack/barbican-keystone-listener-65899db58b-mz594" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.861203 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bca86ff-c24c-4d08-b7ed-be2433fe9735-logs\") pod \"barbican-worker-6875d4f667-x5hjc\" (UID: \"1bca86ff-c24c-4d08-b7ed-be2433fe9735\") " pod="openstack/barbican-worker-6875d4f667-x5hjc" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.861233 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bca86ff-c24c-4d08-b7ed-be2433fe9735-combined-ca-bundle\") pod \"barbican-worker-6875d4f667-x5hjc\" (UID: \"1bca86ff-c24c-4d08-b7ed-be2433fe9735\") " pod="openstack/barbican-worker-6875d4f667-x5hjc" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.861279 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bca86ff-c24c-4d08-b7ed-be2433fe9735-config-data\") pod \"barbican-worker-6875d4f667-x5hjc\" (UID: \"1bca86ff-c24c-4d08-b7ed-be2433fe9735\") " pod="openstack/barbican-worker-6875d4f667-x5hjc" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.861300 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c64caf8-e57d-495f-985c-844edea0d146-logs\") pod \"barbican-keystone-listener-65899db58b-mz594\" (UID: \"0c64caf8-e57d-495f-985c-844edea0d146\") " pod="openstack/barbican-keystone-listener-65899db58b-mz594" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.861317 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1bca86ff-c24c-4d08-b7ed-be2433fe9735-config-data-custom\") pod \"barbican-worker-6875d4f667-x5hjc\" (UID: 
\"1bca86ff-c24c-4d08-b7ed-be2433fe9735\") " pod="openstack/barbican-worker-6875d4f667-x5hjc" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.861348 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c64caf8-e57d-495f-985c-844edea0d146-config-data\") pod \"barbican-keystone-listener-65899db58b-mz594\" (UID: \"0c64caf8-e57d-495f-985c-844edea0d146\") " pod="openstack/barbican-keystone-listener-65899db58b-mz594" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.861383 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f892ddb-2efd-437e-9fae-9f7119f17847-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-d8lng\" (UID: \"2f892ddb-2efd-437e-9fae-9f7119f17847\") " pod="openstack/dnsmasq-dns-688c87cc99-d8lng" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.861413 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nl6w\" (UniqueName: \"kubernetes.io/projected/0c64caf8-e57d-495f-985c-844edea0d146-kube-api-access-5nl6w\") pod \"barbican-keystone-listener-65899db58b-mz594\" (UID: \"0c64caf8-e57d-495f-985c-844edea0d146\") " pod="openstack/barbican-keystone-listener-65899db58b-mz594" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.861443 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkkd7\" (UniqueName: \"kubernetes.io/projected/1bca86ff-c24c-4d08-b7ed-be2433fe9735-kube-api-access-kkkd7\") pod \"barbican-worker-6875d4f667-x5hjc\" (UID: \"1bca86ff-c24c-4d08-b7ed-be2433fe9735\") " pod="openstack/barbican-worker-6875d4f667-x5hjc" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.862909 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bca86ff-c24c-4d08-b7ed-be2433fe9735-logs\") pod 
\"barbican-worker-6875d4f667-x5hjc\" (UID: \"1bca86ff-c24c-4d08-b7ed-be2433fe9735\") " pod="openstack/barbican-worker-6875d4f667-x5hjc" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.864668 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c64caf8-e57d-495f-985c-844edea0d146-logs\") pod \"barbican-keystone-listener-65899db58b-mz594\" (UID: \"0c64caf8-e57d-495f-985c-844edea0d146\") " pod="openstack/barbican-keystone-listener-65899db58b-mz594" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.867973 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c64caf8-e57d-495f-985c-844edea0d146-combined-ca-bundle\") pod \"barbican-keystone-listener-65899db58b-mz594\" (UID: \"0c64caf8-e57d-495f-985c-844edea0d146\") " pod="openstack/barbican-keystone-listener-65899db58b-mz594" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.868805 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1bca86ff-c24c-4d08-b7ed-be2433fe9735-config-data-custom\") pod \"barbican-worker-6875d4f667-x5hjc\" (UID: \"1bca86ff-c24c-4d08-b7ed-be2433fe9735\") " pod="openstack/barbican-worker-6875d4f667-x5hjc" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.872872 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bca86ff-c24c-4d08-b7ed-be2433fe9735-config-data\") pod \"barbican-worker-6875d4f667-x5hjc\" (UID: \"1bca86ff-c24c-4d08-b7ed-be2433fe9735\") " pod="openstack/barbican-worker-6875d4f667-x5hjc" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.872957 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bca86ff-c24c-4d08-b7ed-be2433fe9735-combined-ca-bundle\") pod 
\"barbican-worker-6875d4f667-x5hjc\" (UID: \"1bca86ff-c24c-4d08-b7ed-be2433fe9735\") " pod="openstack/barbican-worker-6875d4f667-x5hjc" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.873473 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c64caf8-e57d-495f-985c-844edea0d146-config-data-custom\") pod \"barbican-keystone-listener-65899db58b-mz594\" (UID: \"0c64caf8-e57d-495f-985c-844edea0d146\") " pod="openstack/barbican-keystone-listener-65899db58b-mz594" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.881243 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkkd7\" (UniqueName: \"kubernetes.io/projected/1bca86ff-c24c-4d08-b7ed-be2433fe9735-kube-api-access-kkkd7\") pod \"barbican-worker-6875d4f667-x5hjc\" (UID: \"1bca86ff-c24c-4d08-b7ed-be2433fe9735\") " pod="openstack/barbican-worker-6875d4f667-x5hjc" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.881682 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c64caf8-e57d-495f-985c-844edea0d146-config-data\") pod \"barbican-keystone-listener-65899db58b-mz594\" (UID: \"0c64caf8-e57d-495f-985c-844edea0d146\") " pod="openstack/barbican-keystone-listener-65899db58b-mz594" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.884874 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nl6w\" (UniqueName: \"kubernetes.io/projected/0c64caf8-e57d-495f-985c-844edea0d146-kube-api-access-5nl6w\") pod \"barbican-keystone-listener-65899db58b-mz594\" (UID: \"0c64caf8-e57d-495f-985c-844edea0d146\") " pod="openstack/barbican-keystone-listener-65899db58b-mz594" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.917521 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6875d4f667-x5hjc" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.948747 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-65899db58b-mz594" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.979066 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dt9l\" (UniqueName: \"kubernetes.io/projected/39144d64-93f5-4c75-8cd3-39d6b17f08d6-kube-api-access-8dt9l\") pod \"barbican-api-5c8566458-jwt4m\" (UID: \"39144d64-93f5-4c75-8cd3-39d6b17f08d6\") " pod="openstack/barbican-api-5c8566458-jwt4m" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.979131 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f892ddb-2efd-437e-9fae-9f7119f17847-dns-svc\") pod \"dnsmasq-dns-688c87cc99-d8lng\" (UID: \"2f892ddb-2efd-437e-9fae-9f7119f17847\") " pod="openstack/dnsmasq-dns-688c87cc99-d8lng" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.979152 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfm7b\" (UniqueName: \"kubernetes.io/projected/2f892ddb-2efd-437e-9fae-9f7119f17847-kube-api-access-sfm7b\") pod \"dnsmasq-dns-688c87cc99-d8lng\" (UID: \"2f892ddb-2efd-437e-9fae-9f7119f17847\") " pod="openstack/dnsmasq-dns-688c87cc99-d8lng" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.979195 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39144d64-93f5-4c75-8cd3-39d6b17f08d6-config-data\") pod \"barbican-api-5c8566458-jwt4m\" (UID: \"39144d64-93f5-4c75-8cd3-39d6b17f08d6\") " pod="openstack/barbican-api-5c8566458-jwt4m" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.979305 4795 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f892ddb-2efd-437e-9fae-9f7119f17847-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-d8lng\" (UID: \"2f892ddb-2efd-437e-9fae-9f7119f17847\") " pod="openstack/dnsmasq-dns-688c87cc99-d8lng" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.979329 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39144d64-93f5-4c75-8cd3-39d6b17f08d6-logs\") pod \"barbican-api-5c8566458-jwt4m\" (UID: \"39144d64-93f5-4c75-8cd3-39d6b17f08d6\") " pod="openstack/barbican-api-5c8566458-jwt4m" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.979352 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39144d64-93f5-4c75-8cd3-39d6b17f08d6-config-data-custom\") pod \"barbican-api-5c8566458-jwt4m\" (UID: \"39144d64-93f5-4c75-8cd3-39d6b17f08d6\") " pod="openstack/barbican-api-5c8566458-jwt4m" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.979381 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39144d64-93f5-4c75-8cd3-39d6b17f08d6-combined-ca-bundle\") pod \"barbican-api-5c8566458-jwt4m\" (UID: \"39144d64-93f5-4c75-8cd3-39d6b17f08d6\") " pod="openstack/barbican-api-5c8566458-jwt4m" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.979459 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2f892ddb-2efd-437e-9fae-9f7119f17847-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-d8lng\" (UID: \"2f892ddb-2efd-437e-9fae-9f7119f17847\") " pod="openstack/dnsmasq-dns-688c87cc99-d8lng" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.979586 4795 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f892ddb-2efd-437e-9fae-9f7119f17847-config\") pod \"dnsmasq-dns-688c87cc99-d8lng\" (UID: \"2f892ddb-2efd-437e-9fae-9f7119f17847\") " pod="openstack/dnsmasq-dns-688c87cc99-d8lng" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.979620 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f892ddb-2efd-437e-9fae-9f7119f17847-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-d8lng\" (UID: \"2f892ddb-2efd-437e-9fae-9f7119f17847\") " pod="openstack/dnsmasq-dns-688c87cc99-d8lng" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.980058 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f892ddb-2efd-437e-9fae-9f7119f17847-dns-svc\") pod \"dnsmasq-dns-688c87cc99-d8lng\" (UID: \"2f892ddb-2efd-437e-9fae-9f7119f17847\") " pod="openstack/dnsmasq-dns-688c87cc99-d8lng" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.980444 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2f892ddb-2efd-437e-9fae-9f7119f17847-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-d8lng\" (UID: \"2f892ddb-2efd-437e-9fae-9f7119f17847\") " pod="openstack/dnsmasq-dns-688c87cc99-d8lng" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.980518 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f892ddb-2efd-437e-9fae-9f7119f17847-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-d8lng\" (UID: \"2f892ddb-2efd-437e-9fae-9f7119f17847\") " pod="openstack/dnsmasq-dns-688c87cc99-d8lng" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.980925 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/2f892ddb-2efd-437e-9fae-9f7119f17847-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-d8lng\" (UID: \"2f892ddb-2efd-437e-9fae-9f7119f17847\") " pod="openstack/dnsmasq-dns-688c87cc99-d8lng" Nov 29 08:05:52 crc kubenswrapper[4795]: I1129 08:05:52.985006 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f892ddb-2efd-437e-9fae-9f7119f17847-config\") pod \"dnsmasq-dns-688c87cc99-d8lng\" (UID: \"2f892ddb-2efd-437e-9fae-9f7119f17847\") " pod="openstack/dnsmasq-dns-688c87cc99-d8lng" Nov 29 08:05:53 crc kubenswrapper[4795]: I1129 08:05:53.000214 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfm7b\" (UniqueName: \"kubernetes.io/projected/2f892ddb-2efd-437e-9fae-9f7119f17847-kube-api-access-sfm7b\") pod \"dnsmasq-dns-688c87cc99-d8lng\" (UID: \"2f892ddb-2efd-437e-9fae-9f7119f17847\") " pod="openstack/dnsmasq-dns-688c87cc99-d8lng" Nov 29 08:05:53 crc kubenswrapper[4795]: I1129 08:05:53.083079 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-d8lng" Nov 29 08:05:53 crc kubenswrapper[4795]: I1129 08:05:53.092415 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39144d64-93f5-4c75-8cd3-39d6b17f08d6-config-data\") pod \"barbican-api-5c8566458-jwt4m\" (UID: \"39144d64-93f5-4c75-8cd3-39d6b17f08d6\") " pod="openstack/barbican-api-5c8566458-jwt4m" Nov 29 08:05:53 crc kubenswrapper[4795]: I1129 08:05:53.093033 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39144d64-93f5-4c75-8cd3-39d6b17f08d6-logs\") pod \"barbican-api-5c8566458-jwt4m\" (UID: \"39144d64-93f5-4c75-8cd3-39d6b17f08d6\") " pod="openstack/barbican-api-5c8566458-jwt4m" Nov 29 08:05:53 crc kubenswrapper[4795]: I1129 08:05:53.093106 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39144d64-93f5-4c75-8cd3-39d6b17f08d6-config-data-custom\") pod \"barbican-api-5c8566458-jwt4m\" (UID: \"39144d64-93f5-4c75-8cd3-39d6b17f08d6\") " pod="openstack/barbican-api-5c8566458-jwt4m" Nov 29 08:05:53 crc kubenswrapper[4795]: I1129 08:05:53.093168 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39144d64-93f5-4c75-8cd3-39d6b17f08d6-combined-ca-bundle\") pod \"barbican-api-5c8566458-jwt4m\" (UID: \"39144d64-93f5-4c75-8cd3-39d6b17f08d6\") " pod="openstack/barbican-api-5c8566458-jwt4m" Nov 29 08:05:53 crc kubenswrapper[4795]: I1129 08:05:53.093763 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39144d64-93f5-4c75-8cd3-39d6b17f08d6-logs\") pod \"barbican-api-5c8566458-jwt4m\" (UID: \"39144d64-93f5-4c75-8cd3-39d6b17f08d6\") " pod="openstack/barbican-api-5c8566458-jwt4m" Nov 29 08:05:53 crc 
kubenswrapper[4795]: I1129 08:05:53.094300 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dt9l\" (UniqueName: \"kubernetes.io/projected/39144d64-93f5-4c75-8cd3-39d6b17f08d6-kube-api-access-8dt9l\") pod \"barbican-api-5c8566458-jwt4m\" (UID: \"39144d64-93f5-4c75-8cd3-39d6b17f08d6\") " pod="openstack/barbican-api-5c8566458-jwt4m" Nov 29 08:05:53 crc kubenswrapper[4795]: I1129 08:05:53.098741 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39144d64-93f5-4c75-8cd3-39d6b17f08d6-config-data\") pod \"barbican-api-5c8566458-jwt4m\" (UID: \"39144d64-93f5-4c75-8cd3-39d6b17f08d6\") " pod="openstack/barbican-api-5c8566458-jwt4m" Nov 29 08:05:53 crc kubenswrapper[4795]: I1129 08:05:53.102034 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39144d64-93f5-4c75-8cd3-39d6b17f08d6-config-data-custom\") pod \"barbican-api-5c8566458-jwt4m\" (UID: \"39144d64-93f5-4c75-8cd3-39d6b17f08d6\") " pod="openstack/barbican-api-5c8566458-jwt4m" Nov 29 08:05:53 crc kubenswrapper[4795]: I1129 08:05:53.102489 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39144d64-93f5-4c75-8cd3-39d6b17f08d6-combined-ca-bundle\") pod \"barbican-api-5c8566458-jwt4m\" (UID: \"39144d64-93f5-4c75-8cd3-39d6b17f08d6\") " pod="openstack/barbican-api-5c8566458-jwt4m" Nov 29 08:05:53 crc kubenswrapper[4795]: I1129 08:05:53.123209 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dt9l\" (UniqueName: \"kubernetes.io/projected/39144d64-93f5-4c75-8cd3-39d6b17f08d6-kube-api-access-8dt9l\") pod \"barbican-api-5c8566458-jwt4m\" (UID: \"39144d64-93f5-4c75-8cd3-39d6b17f08d6\") " pod="openstack/barbican-api-5c8566458-jwt4m" Nov 29 08:05:53 crc kubenswrapper[4795]: I1129 08:05:53.138899 4795 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5c8566458-jwt4m" Nov 29 08:05:53 crc kubenswrapper[4795]: I1129 08:05:53.424374 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8b2xv" Nov 29 08:05:53 crc kubenswrapper[4795]: I1129 08:05:53.452555 4795 generic.go:334] "Generic (PLEG): container finished" podID="1082de8f-47bf-41ac-875f-8d7db0baab7b" containerID="3b2c7e91906b4f2c34ee265e94e43b0d265ed328aa7d59bcd1d6ea5a6694e59d" exitCode=0 Nov 29 08:05:53 crc kubenswrapper[4795]: I1129 08:05:53.452684 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tvhw5" event={"ID":"1082de8f-47bf-41ac-875f-8d7db0baab7b","Type":"ContainerDied","Data":"3b2c7e91906b4f2c34ee265e94e43b0d265ed328aa7d59bcd1d6ea5a6694e59d"} Nov 29 08:05:53 crc kubenswrapper[4795]: I1129 08:05:53.511101 4795 generic.go:334] "Generic (PLEG): container finished" podID="19557359-2cdd-493b-a9ad-0770bf37206e" containerID="5ebe0a03b9d7ffc369722758e67d8d71695fb3dea054e5cf4126807196f14a47" exitCode=0 Nov 29 08:05:53 crc kubenswrapper[4795]: I1129 08:05:53.511739 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8b2xv" Nov 29 08:05:53 crc kubenswrapper[4795]: I1129 08:05:53.511840 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8b2xv" event={"ID":"19557359-2cdd-493b-a9ad-0770bf37206e","Type":"ContainerDied","Data":"5ebe0a03b9d7ffc369722758e67d8d71695fb3dea054e5cf4126807196f14a47"} Nov 29 08:05:53 crc kubenswrapper[4795]: I1129 08:05:53.529876 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8b2xv" event={"ID":"19557359-2cdd-493b-a9ad-0770bf37206e","Type":"ContainerDied","Data":"60bcc8a2c85ae650a7e114488481593e8d37afcbbaf7becceabfd519e711e933"} Nov 29 08:05:53 crc kubenswrapper[4795]: I1129 08:05:53.529969 4795 scope.go:117] "RemoveContainer" containerID="5ebe0a03b9d7ffc369722758e67d8d71695fb3dea054e5cf4126807196f14a47" Nov 29 08:05:53 crc kubenswrapper[4795]: I1129 08:05:53.547376 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7t4hf\" (UniqueName: \"kubernetes.io/projected/19557359-2cdd-493b-a9ad-0770bf37206e-kube-api-access-7t4hf\") pod \"19557359-2cdd-493b-a9ad-0770bf37206e\" (UID: \"19557359-2cdd-493b-a9ad-0770bf37206e\") " Nov 29 08:05:53 crc kubenswrapper[4795]: I1129 08:05:53.547500 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19557359-2cdd-493b-a9ad-0770bf37206e-utilities\") pod \"19557359-2cdd-493b-a9ad-0770bf37206e\" (UID: \"19557359-2cdd-493b-a9ad-0770bf37206e\") " Nov 29 08:05:53 crc kubenswrapper[4795]: I1129 08:05:53.557665 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19557359-2cdd-493b-a9ad-0770bf37206e-utilities" (OuterVolumeSpecName: "utilities") pod "19557359-2cdd-493b-a9ad-0770bf37206e" (UID: "19557359-2cdd-493b-a9ad-0770bf37206e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:05:53 crc kubenswrapper[4795]: I1129 08:05:53.579687 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19557359-2cdd-493b-a9ad-0770bf37206e-catalog-content\") pod \"19557359-2cdd-493b-a9ad-0770bf37206e\" (UID: \"19557359-2cdd-493b-a9ad-0770bf37206e\") " Nov 29 08:05:53 crc kubenswrapper[4795]: I1129 08:05:53.581336 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19557359-2cdd-493b-a9ad-0770bf37206e-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:53 crc kubenswrapper[4795]: I1129 08:05:53.679070 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19557359-2cdd-493b-a9ad-0770bf37206e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19557359-2cdd-493b-a9ad-0770bf37206e" (UID: "19557359-2cdd-493b-a9ad-0770bf37206e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:05:53 crc kubenswrapper[4795]: I1129 08:05:53.683069 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19557359-2cdd-493b-a9ad-0770bf37206e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:53 crc kubenswrapper[4795]: I1129 08:05:53.758706 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19557359-2cdd-493b-a9ad-0770bf37206e-kube-api-access-7t4hf" (OuterVolumeSpecName: "kube-api-access-7t4hf") pod "19557359-2cdd-493b-a9ad-0770bf37206e" (UID: "19557359-2cdd-493b-a9ad-0770bf37206e"). InnerVolumeSpecName "kube-api-access-7t4hf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:05:53 crc kubenswrapper[4795]: I1129 08:05:53.784435 4795 scope.go:117] "RemoveContainer" containerID="e63ce49ee7e31529506ea8fcd72ab771eccd3d29ca7936226fbaeaea5c48eba5" Nov 29 08:05:53 crc kubenswrapper[4795]: I1129 08:05:53.787161 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7t4hf\" (UniqueName: \"kubernetes.io/projected/19557359-2cdd-493b-a9ad-0770bf37206e-kube-api-access-7t4hf\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:53 crc kubenswrapper[4795]: I1129 08:05:53.898378 4795 scope.go:117] "RemoveContainer" containerID="186c40ddf7e9415a10044c140f413f84d9509eefda35df4455eec069b836e0a7" Nov 29 08:05:53 crc kubenswrapper[4795]: I1129 08:05:53.904340 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8b2xv"] Nov 29 08:05:53 crc kubenswrapper[4795]: I1129 08:05:53.922236 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8b2xv"] Nov 29 08:05:53 crc kubenswrapper[4795]: I1129 08:05:53.941910 4795 scope.go:117] "RemoveContainer" containerID="5ebe0a03b9d7ffc369722758e67d8d71695fb3dea054e5cf4126807196f14a47" Nov 29 08:05:53 crc kubenswrapper[4795]: E1129 08:05:53.943283 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ebe0a03b9d7ffc369722758e67d8d71695fb3dea054e5cf4126807196f14a47\": container with ID starting with 5ebe0a03b9d7ffc369722758e67d8d71695fb3dea054e5cf4126807196f14a47 not found: ID does not exist" containerID="5ebe0a03b9d7ffc369722758e67d8d71695fb3dea054e5cf4126807196f14a47" Nov 29 08:05:53 crc kubenswrapper[4795]: I1129 08:05:53.943332 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ebe0a03b9d7ffc369722758e67d8d71695fb3dea054e5cf4126807196f14a47"} err="failed to get container status 
\"5ebe0a03b9d7ffc369722758e67d8d71695fb3dea054e5cf4126807196f14a47\": rpc error: code = NotFound desc = could not find container \"5ebe0a03b9d7ffc369722758e67d8d71695fb3dea054e5cf4126807196f14a47\": container with ID starting with 5ebe0a03b9d7ffc369722758e67d8d71695fb3dea054e5cf4126807196f14a47 not found: ID does not exist" Nov 29 08:05:53 crc kubenswrapper[4795]: I1129 08:05:53.943361 4795 scope.go:117] "RemoveContainer" containerID="e63ce49ee7e31529506ea8fcd72ab771eccd3d29ca7936226fbaeaea5c48eba5" Nov 29 08:05:53 crc kubenswrapper[4795]: E1129 08:05:53.948945 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e63ce49ee7e31529506ea8fcd72ab771eccd3d29ca7936226fbaeaea5c48eba5\": container with ID starting with e63ce49ee7e31529506ea8fcd72ab771eccd3d29ca7936226fbaeaea5c48eba5 not found: ID does not exist" containerID="e63ce49ee7e31529506ea8fcd72ab771eccd3d29ca7936226fbaeaea5c48eba5" Nov 29 08:05:53 crc kubenswrapper[4795]: I1129 08:05:53.949003 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e63ce49ee7e31529506ea8fcd72ab771eccd3d29ca7936226fbaeaea5c48eba5"} err="failed to get container status \"e63ce49ee7e31529506ea8fcd72ab771eccd3d29ca7936226fbaeaea5c48eba5\": rpc error: code = NotFound desc = could not find container \"e63ce49ee7e31529506ea8fcd72ab771eccd3d29ca7936226fbaeaea5c48eba5\": container with ID starting with e63ce49ee7e31529506ea8fcd72ab771eccd3d29ca7936226fbaeaea5c48eba5 not found: ID does not exist" Nov 29 08:05:53 crc kubenswrapper[4795]: I1129 08:05:53.949043 4795 scope.go:117] "RemoveContainer" containerID="186c40ddf7e9415a10044c140f413f84d9509eefda35df4455eec069b836e0a7" Nov 29 08:05:53 crc kubenswrapper[4795]: E1129 08:05:53.958092 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"186c40ddf7e9415a10044c140f413f84d9509eefda35df4455eec069b836e0a7\": container with ID starting with 186c40ddf7e9415a10044c140f413f84d9509eefda35df4455eec069b836e0a7 not found: ID does not exist" containerID="186c40ddf7e9415a10044c140f413f84d9509eefda35df4455eec069b836e0a7" Nov 29 08:05:53 crc kubenswrapper[4795]: I1129 08:05:53.958159 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"186c40ddf7e9415a10044c140f413f84d9509eefda35df4455eec069b836e0a7"} err="failed to get container status \"186c40ddf7e9415a10044c140f413f84d9509eefda35df4455eec069b836e0a7\": rpc error: code = NotFound desc = could not find container \"186c40ddf7e9415a10044c140f413f84d9509eefda35df4455eec069b836e0a7\": container with ID starting with 186c40ddf7e9415a10044c140f413f84d9509eefda35df4455eec069b836e0a7 not found: ID does not exist" Nov 29 08:05:54 crc kubenswrapper[4795]: I1129 08:05:54.314080 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19557359-2cdd-493b-a9ad-0770bf37206e" path="/var/lib/kubelet/pods/19557359-2cdd-493b-a9ad-0770bf37206e/volumes" Nov 29 08:05:54 crc kubenswrapper[4795]: I1129 08:05:54.558685 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-85xp6" event={"ID":"59666d8f-35e8-4c8a-887f-0c23881547ec","Type":"ContainerDied","Data":"16d0fc1fac5e9bd5157fecbc0601a219ff223bfbffb746da0a07b3b3c42a54a7"} Nov 29 08:05:54 crc kubenswrapper[4795]: I1129 08:05:54.558749 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16d0fc1fac5e9bd5157fecbc0601a219ff223bfbffb746da0a07b3b3c42a54a7" Nov 29 08:05:54 crc kubenswrapper[4795]: I1129 08:05:54.571127 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-85xp6" Nov 29 08:05:54 crc kubenswrapper[4795]: I1129 08:05:54.614424 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blpm6\" (UniqueName: \"kubernetes.io/projected/59666d8f-35e8-4c8a-887f-0c23881547ec-kube-api-access-blpm6\") pod \"59666d8f-35e8-4c8a-887f-0c23881547ec\" (UID: \"59666d8f-35e8-4c8a-887f-0c23881547ec\") " Nov 29 08:05:54 crc kubenswrapper[4795]: I1129 08:05:54.614832 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59666d8f-35e8-4c8a-887f-0c23881547ec-config-data\") pod \"59666d8f-35e8-4c8a-887f-0c23881547ec\" (UID: \"59666d8f-35e8-4c8a-887f-0c23881547ec\") " Nov 29 08:05:54 crc kubenswrapper[4795]: I1129 08:05:54.614944 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59666d8f-35e8-4c8a-887f-0c23881547ec-combined-ca-bundle\") pod \"59666d8f-35e8-4c8a-887f-0c23881547ec\" (UID: \"59666d8f-35e8-4c8a-887f-0c23881547ec\") " Nov 29 08:05:54 crc kubenswrapper[4795]: I1129 08:05:54.632992 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59666d8f-35e8-4c8a-887f-0c23881547ec-kube-api-access-blpm6" (OuterVolumeSpecName: "kube-api-access-blpm6") pod "59666d8f-35e8-4c8a-887f-0c23881547ec" (UID: "59666d8f-35e8-4c8a-887f-0c23881547ec"). InnerVolumeSpecName "kube-api-access-blpm6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:05:54 crc kubenswrapper[4795]: I1129 08:05:54.717650 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blpm6\" (UniqueName: \"kubernetes.io/projected/59666d8f-35e8-4c8a-887f-0c23881547ec-kube-api-access-blpm6\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:54 crc kubenswrapper[4795]: I1129 08:05:54.727741 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59666d8f-35e8-4c8a-887f-0c23881547ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59666d8f-35e8-4c8a-887f-0c23881547ec" (UID: "59666d8f-35e8-4c8a-887f-0c23881547ec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:05:54 crc kubenswrapper[4795]: W1129 08:05:54.729402 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c64caf8_e57d_495f_985c_844edea0d146.slice/crio-89f2f3d3915b7f847b0591829f50b64db44a3e491402ae5dbdfec22072987785 WatchSource:0}: Error finding container 89f2f3d3915b7f847b0591829f50b64db44a3e491402ae5dbdfec22072987785: Status 404 returned error can't find the container with id 89f2f3d3915b7f847b0591829f50b64db44a3e491402ae5dbdfec22072987785 Nov 29 08:05:54 crc kubenswrapper[4795]: I1129 08:05:54.732994 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-d8lng"] Nov 29 08:05:54 crc kubenswrapper[4795]: I1129 08:05:54.775582 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6875d4f667-x5hjc"] Nov 29 08:05:54 crc kubenswrapper[4795]: I1129 08:05:54.827104 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59666d8f-35e8-4c8a-887f-0c23881547ec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:54 crc kubenswrapper[4795]: I1129 08:05:54.829468 4795 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-65899db58b-mz594"] Nov 29 08:05:54 crc kubenswrapper[4795]: I1129 08:05:54.859940 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5c8566458-jwt4m"] Nov 29 08:05:54 crc kubenswrapper[4795]: I1129 08:05:54.892226 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59666d8f-35e8-4c8a-887f-0c23881547ec-config-data" (OuterVolumeSpecName: "config-data") pod "59666d8f-35e8-4c8a-887f-0c23881547ec" (UID: "59666d8f-35e8-4c8a-887f-0c23881547ec"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:05:54 crc kubenswrapper[4795]: I1129 08:05:54.932582 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59666d8f-35e8-4c8a-887f-0c23881547ec-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:55 crc kubenswrapper[4795]: I1129 08:05:55.530306 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 29 08:05:55 crc kubenswrapper[4795]: I1129 08:05:55.530720 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 29 08:05:55 crc kubenswrapper[4795]: I1129 08:05:55.535457 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 29 08:05:55 crc kubenswrapper[4795]: I1129 08:05:55.578103 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 29 08:05:55 crc kubenswrapper[4795]: I1129 08:05:55.579726 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65899db58b-mz594" 
event={"ID":"0c64caf8-e57d-495f-985c-844edea0d146","Type":"ContainerStarted","Data":"89f2f3d3915b7f847b0591829f50b64db44a3e491402ae5dbdfec22072987785"} Nov 29 08:05:55 crc kubenswrapper[4795]: I1129 08:05:55.594113 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5c8566458-jwt4m" event={"ID":"39144d64-93f5-4c75-8cd3-39d6b17f08d6","Type":"ContainerStarted","Data":"548f34c2652851ca8b5e064e016c8e5abc6aa699b8a7e546ffbe69e24e732616"} Nov 29 08:05:55 crc kubenswrapper[4795]: I1129 08:05:55.594163 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5c8566458-jwt4m" event={"ID":"39144d64-93f5-4c75-8cd3-39d6b17f08d6","Type":"ContainerStarted","Data":"fbfd3c7a09f05a3cefd5e35f9592eab140e7abdff6b73b079da97f0c47b02269"} Nov 29 08:05:55 crc kubenswrapper[4795]: I1129 08:05:55.594381 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-tvhw5" Nov 29 08:05:55 crc kubenswrapper[4795]: I1129 08:05:55.598290 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6875d4f667-x5hjc" event={"ID":"1bca86ff-c24c-4d08-b7ed-be2433fe9735","Type":"ContainerStarted","Data":"dcc8acec0e7f4cd2679650cd2d13c0293971b93f9a3fd39b933f99a543210a1e"} Nov 29 08:05:55 crc kubenswrapper[4795]: I1129 08:05:55.606330 4795 generic.go:334] "Generic (PLEG): container finished" podID="2f892ddb-2efd-437e-9fae-9f7119f17847" containerID="5480f4446ced188ad401bd7d03117d9b9cf863cff6ea65f7007aad22047c72c2" exitCode=0 Nov 29 08:05:55 crc kubenswrapper[4795]: I1129 08:05:55.606407 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-d8lng" event={"ID":"2f892ddb-2efd-437e-9fae-9f7119f17847","Type":"ContainerDied","Data":"5480f4446ced188ad401bd7d03117d9b9cf863cff6ea65f7007aad22047c72c2"} Nov 29 08:05:55 crc kubenswrapper[4795]: I1129 08:05:55.606437 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-688c87cc99-d8lng" event={"ID":"2f892ddb-2efd-437e-9fae-9f7119f17847","Type":"ContainerStarted","Data":"a1412485efbae22f0109adc8c773920c43ab13e4ccab70cccdcbfec19d90f324"} Nov 29 08:05:55 crc kubenswrapper[4795]: I1129 08:05:55.643920 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-85xp6" Nov 29 08:05:55 crc kubenswrapper[4795]: I1129 08:05:55.644746 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tvhw5" event={"ID":"1082de8f-47bf-41ac-875f-8d7db0baab7b","Type":"ContainerDied","Data":"f34dcfd68119562074854949abe1e1270f406458f3c63340e8cc226888b94fe6"} Nov 29 08:05:55 crc kubenswrapper[4795]: I1129 08:05:55.644792 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f34dcfd68119562074854949abe1e1270f406458f3c63340e8cc226888b94fe6" Nov 29 08:05:55 crc kubenswrapper[4795]: I1129 08:05:55.644807 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-tvhw5" Nov 29 08:05:55 crc kubenswrapper[4795]: I1129 08:05:55.657899 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1082de8f-47bf-41ac-875f-8d7db0baab7b-scripts\") pod \"1082de8f-47bf-41ac-875f-8d7db0baab7b\" (UID: \"1082de8f-47bf-41ac-875f-8d7db0baab7b\") " Nov 29 08:05:55 crc kubenswrapper[4795]: I1129 08:05:55.658007 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1082de8f-47bf-41ac-875f-8d7db0baab7b-config-data\") pod \"1082de8f-47bf-41ac-875f-8d7db0baab7b\" (UID: \"1082de8f-47bf-41ac-875f-8d7db0baab7b\") " Nov 29 08:05:55 crc kubenswrapper[4795]: I1129 08:05:55.658078 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1082de8f-47bf-41ac-875f-8d7db0baab7b-db-sync-config-data\") pod \"1082de8f-47bf-41ac-875f-8d7db0baab7b\" (UID: \"1082de8f-47bf-41ac-875f-8d7db0baab7b\") " Nov 29 08:05:55 crc kubenswrapper[4795]: I1129 08:05:55.658114 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1082de8f-47bf-41ac-875f-8d7db0baab7b-etc-machine-id\") pod \"1082de8f-47bf-41ac-875f-8d7db0baab7b\" (UID: \"1082de8f-47bf-41ac-875f-8d7db0baab7b\") " Nov 29 08:05:55 crc kubenswrapper[4795]: I1129 08:05:55.658186 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1082de8f-47bf-41ac-875f-8d7db0baab7b-combined-ca-bundle\") pod \"1082de8f-47bf-41ac-875f-8d7db0baab7b\" (UID: \"1082de8f-47bf-41ac-875f-8d7db0baab7b\") " Nov 29 08:05:55 crc kubenswrapper[4795]: I1129 08:05:55.658218 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9k97\" 
(UniqueName: \"kubernetes.io/projected/1082de8f-47bf-41ac-875f-8d7db0baab7b-kube-api-access-c9k97\") pod \"1082de8f-47bf-41ac-875f-8d7db0baab7b\" (UID: \"1082de8f-47bf-41ac-875f-8d7db0baab7b\") " Nov 29 08:05:55 crc kubenswrapper[4795]: I1129 08:05:55.667861 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1082de8f-47bf-41ac-875f-8d7db0baab7b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1082de8f-47bf-41ac-875f-8d7db0baab7b" (UID: "1082de8f-47bf-41ac-875f-8d7db0baab7b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 08:05:55 crc kubenswrapper[4795]: I1129 08:05:55.688360 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1082de8f-47bf-41ac-875f-8d7db0baab7b-scripts" (OuterVolumeSpecName: "scripts") pod "1082de8f-47bf-41ac-875f-8d7db0baab7b" (UID: "1082de8f-47bf-41ac-875f-8d7db0baab7b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:05:55 crc kubenswrapper[4795]: I1129 08:05:55.692323 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1082de8f-47bf-41ac-875f-8d7db0baab7b-kube-api-access-c9k97" (OuterVolumeSpecName: "kube-api-access-c9k97") pod "1082de8f-47bf-41ac-875f-8d7db0baab7b" (UID: "1082de8f-47bf-41ac-875f-8d7db0baab7b"). InnerVolumeSpecName "kube-api-access-c9k97". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:05:55 crc kubenswrapper[4795]: I1129 08:05:55.706912 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1082de8f-47bf-41ac-875f-8d7db0baab7b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1082de8f-47bf-41ac-875f-8d7db0baab7b" (UID: "1082de8f-47bf-41ac-875f-8d7db0baab7b"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:05:55 crc kubenswrapper[4795]: I1129 08:05:55.767087 4795 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1082de8f-47bf-41ac-875f-8d7db0baab7b-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:55 crc kubenswrapper[4795]: I1129 08:05:55.767117 4795 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1082de8f-47bf-41ac-875f-8d7db0baab7b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:55 crc kubenswrapper[4795]: I1129 08:05:55.767143 4795 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1082de8f-47bf-41ac-875f-8d7db0baab7b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:55 crc kubenswrapper[4795]: I1129 08:05:55.767165 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9k97\" (UniqueName: \"kubernetes.io/projected/1082de8f-47bf-41ac-875f-8d7db0baab7b-kube-api-access-c9k97\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:55 crc kubenswrapper[4795]: I1129 08:05:55.796459 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1082de8f-47bf-41ac-875f-8d7db0baab7b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1082de8f-47bf-41ac-875f-8d7db0baab7b" (UID: "1082de8f-47bf-41ac-875f-8d7db0baab7b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:05:55 crc kubenswrapper[4795]: I1129 08:05:55.893696 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1082de8f-47bf-41ac-875f-8d7db0baab7b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:56 crc kubenswrapper[4795]: I1129 08:05:56.006610 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1082de8f-47bf-41ac-875f-8d7db0baab7b-config-data" (OuterVolumeSpecName: "config-data") pod "1082de8f-47bf-41ac-875f-8d7db0baab7b" (UID: "1082de8f-47bf-41ac-875f-8d7db0baab7b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:05:56 crc kubenswrapper[4795]: I1129 08:05:56.102330 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1082de8f-47bf-41ac-875f-8d7db0baab7b-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:56 crc kubenswrapper[4795]: I1129 08:05:56.679659 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-d8lng" event={"ID":"2f892ddb-2efd-437e-9fae-9f7119f17847","Type":"ContainerStarted","Data":"81de92269787b0ce7f44cad613a4054544be55e20e4d62cdc0a4c4efbdf98902"} Nov 29 08:05:56 crc kubenswrapper[4795]: I1129 08:05:56.682473 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688c87cc99-d8lng" Nov 29 08:05:56 crc kubenswrapper[4795]: I1129 08:05:56.689086 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5c8566458-jwt4m" event={"ID":"39144d64-93f5-4c75-8cd3-39d6b17f08d6","Type":"ContainerStarted","Data":"f487adc56134b7e9888c25a49a74e448e87beaf406191e404aa4540ab96cf4d5"} Nov 29 08:05:56 crc kubenswrapper[4795]: I1129 08:05:56.689616 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5c8566458-jwt4m" Nov 
29 08:05:56 crc kubenswrapper[4795]: I1129 08:05:56.689763 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5c8566458-jwt4m" Nov 29 08:05:56 crc kubenswrapper[4795]: I1129 08:05:56.725719 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688c87cc99-d8lng" podStartSLOduration=4.725697503 podStartE2EDuration="4.725697503s" podCreationTimestamp="2025-11-29 08:05:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:05:56.713136996 +0000 UTC m=+1602.688712786" watchObservedRunningTime="2025-11-29 08:05:56.725697503 +0000 UTC m=+1602.701273293" Nov 29 08:05:56 crc kubenswrapper[4795]: I1129 08:05:56.742980 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5c8566458-jwt4m" podStartSLOduration=4.742955594 podStartE2EDuration="4.742955594s" podCreationTimestamp="2025-11-29 08:05:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:05:56.739888677 +0000 UTC m=+1602.715464467" watchObservedRunningTime="2025-11-29 08:05:56.742955594 +0000 UTC m=+1602.718531404" Nov 29 08:05:56 crc kubenswrapper[4795]: I1129 08:05:56.970540 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 08:05:56 crc kubenswrapper[4795]: E1129 08:05:56.971724 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59666d8f-35e8-4c8a-887f-0c23881547ec" containerName="heat-db-sync" Nov 29 08:05:56 crc kubenswrapper[4795]: I1129 08:05:56.971745 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="59666d8f-35e8-4c8a-887f-0c23881547ec" containerName="heat-db-sync" Nov 29 08:05:56 crc kubenswrapper[4795]: E1129 08:05:56.971815 4795 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="19557359-2cdd-493b-a9ad-0770bf37206e" containerName="registry-server" Nov 29 08:05:56 crc kubenswrapper[4795]: I1129 08:05:56.971822 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="19557359-2cdd-493b-a9ad-0770bf37206e" containerName="registry-server" Nov 29 08:05:56 crc kubenswrapper[4795]: E1129 08:05:56.971853 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19557359-2cdd-493b-a9ad-0770bf37206e" containerName="extract-content" Nov 29 08:05:56 crc kubenswrapper[4795]: I1129 08:05:56.971860 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="19557359-2cdd-493b-a9ad-0770bf37206e" containerName="extract-content" Nov 29 08:05:56 crc kubenswrapper[4795]: E1129 08:05:56.971870 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19557359-2cdd-493b-a9ad-0770bf37206e" containerName="extract-utilities" Nov 29 08:05:56 crc kubenswrapper[4795]: I1129 08:05:56.971875 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="19557359-2cdd-493b-a9ad-0770bf37206e" containerName="extract-utilities" Nov 29 08:05:56 crc kubenswrapper[4795]: E1129 08:05:56.971890 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1082de8f-47bf-41ac-875f-8d7db0baab7b" containerName="cinder-db-sync" Nov 29 08:05:56 crc kubenswrapper[4795]: I1129 08:05:56.971896 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="1082de8f-47bf-41ac-875f-8d7db0baab7b" containerName="cinder-db-sync" Nov 29 08:05:56 crc kubenswrapper[4795]: I1129 08:05:56.972166 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="59666d8f-35e8-4c8a-887f-0c23881547ec" containerName="heat-db-sync" Nov 29 08:05:56 crc kubenswrapper[4795]: I1129 08:05:56.972196 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="19557359-2cdd-493b-a9ad-0770bf37206e" containerName="registry-server" Nov 29 08:05:56 crc kubenswrapper[4795]: I1129 08:05:56.972209 4795 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1082de8f-47bf-41ac-875f-8d7db0baab7b" containerName="cinder-db-sync" Nov 29 08:05:56 crc kubenswrapper[4795]: I1129 08:05:56.980379 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 29 08:05:56 crc kubenswrapper[4795]: I1129 08:05:56.987343 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 29 08:05:56 crc kubenswrapper[4795]: I1129 08:05:56.987618 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 29 08:05:56 crc kubenswrapper[4795]: I1129 08:05:56.987825 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-6ql88" Nov 29 08:05:56 crc kubenswrapper[4795]: I1129 08:05:56.987961 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 29 08:05:56 crc kubenswrapper[4795]: I1129 08:05:56.989366 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.068887 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-d8lng"] Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.118580 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-hxtm7"] Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.120715 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-hxtm7" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.131205 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eaf38aac-d198-4ddf-8014-ad9c598601ae-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"eaf38aac-d198-4ddf-8014-ad9c598601ae\") " pod="openstack/cinder-scheduler-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.131291 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaf38aac-d198-4ddf-8014-ad9c598601ae-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"eaf38aac-d198-4ddf-8014-ad9c598601ae\") " pod="openstack/cinder-scheduler-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.131318 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eaf38aac-d198-4ddf-8014-ad9c598601ae-scripts\") pod \"cinder-scheduler-0\" (UID: \"eaf38aac-d198-4ddf-8014-ad9c598601ae\") " pod="openstack/cinder-scheduler-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.131338 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaf38aac-d198-4ddf-8014-ad9c598601ae-config-data\") pod \"cinder-scheduler-0\" (UID: \"eaf38aac-d198-4ddf-8014-ad9c598601ae\") " pod="openstack/cinder-scheduler-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.131412 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd2tc\" (UniqueName: \"kubernetes.io/projected/eaf38aac-d198-4ddf-8014-ad9c598601ae-kube-api-access-dd2tc\") pod \"cinder-scheduler-0\" (UID: \"eaf38aac-d198-4ddf-8014-ad9c598601ae\") " 
pod="openstack/cinder-scheduler-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.131515 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eaf38aac-d198-4ddf-8014-ad9c598601ae-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"eaf38aac-d198-4ddf-8014-ad9c598601ae\") " pod="openstack/cinder-scheduler-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.151198 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-hxtm7"] Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.233264 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eaf38aac-d198-4ddf-8014-ad9c598601ae-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"eaf38aac-d198-4ddf-8014-ad9c598601ae\") " pod="openstack/cinder-scheduler-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.233632 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aa67cd3-59ee-4a03-bc97-fc1fa5729e40-config\") pod \"dnsmasq-dns-6bb4fc677f-hxtm7\" (UID: \"1aa67cd3-59ee-4a03-bc97-fc1fa5729e40\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hxtm7" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.233687 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaf38aac-d198-4ddf-8014-ad9c598601ae-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"eaf38aac-d198-4ddf-8014-ad9c598601ae\") " pod="openstack/cinder-scheduler-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.233710 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eaf38aac-d198-4ddf-8014-ad9c598601ae-scripts\") pod \"cinder-scheduler-0\" (UID: 
\"eaf38aac-d198-4ddf-8014-ad9c598601ae\") " pod="openstack/cinder-scheduler-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.233732 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ghsx\" (UniqueName: \"kubernetes.io/projected/1aa67cd3-59ee-4a03-bc97-fc1fa5729e40-kube-api-access-8ghsx\") pod \"dnsmasq-dns-6bb4fc677f-hxtm7\" (UID: \"1aa67cd3-59ee-4a03-bc97-fc1fa5729e40\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hxtm7" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.233756 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaf38aac-d198-4ddf-8014-ad9c598601ae-config-data\") pod \"cinder-scheduler-0\" (UID: \"eaf38aac-d198-4ddf-8014-ad9c598601ae\") " pod="openstack/cinder-scheduler-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.233787 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1aa67cd3-59ee-4a03-bc97-fc1fa5729e40-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-hxtm7\" (UID: \"1aa67cd3-59ee-4a03-bc97-fc1fa5729e40\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hxtm7" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.233810 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1aa67cd3-59ee-4a03-bc97-fc1fa5729e40-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-hxtm7\" (UID: \"1aa67cd3-59ee-4a03-bc97-fc1fa5729e40\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hxtm7" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.233826 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1aa67cd3-59ee-4a03-bc97-fc1fa5729e40-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-hxtm7\" (UID: 
\"1aa67cd3-59ee-4a03-bc97-fc1fa5729e40\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hxtm7" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.233866 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd2tc\" (UniqueName: \"kubernetes.io/projected/eaf38aac-d198-4ddf-8014-ad9c598601ae-kube-api-access-dd2tc\") pod \"cinder-scheduler-0\" (UID: \"eaf38aac-d198-4ddf-8014-ad9c598601ae\") " pod="openstack/cinder-scheduler-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.233927 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1aa67cd3-59ee-4a03-bc97-fc1fa5729e40-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-hxtm7\" (UID: \"1aa67cd3-59ee-4a03-bc97-fc1fa5729e40\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hxtm7" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.234019 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eaf38aac-d198-4ddf-8014-ad9c598601ae-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"eaf38aac-d198-4ddf-8014-ad9c598601ae\") " pod="openstack/cinder-scheduler-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.234019 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eaf38aac-d198-4ddf-8014-ad9c598601ae-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"eaf38aac-d198-4ddf-8014-ad9c598601ae\") " pod="openstack/cinder-scheduler-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.238187 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.241509 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.245290 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.246166 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eaf38aac-d198-4ddf-8014-ad9c598601ae-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"eaf38aac-d198-4ddf-8014-ad9c598601ae\") " pod="openstack/cinder-scheduler-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.248758 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaf38aac-d198-4ddf-8014-ad9c598601ae-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"eaf38aac-d198-4ddf-8014-ad9c598601ae\") " pod="openstack/cinder-scheduler-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.253753 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eaf38aac-d198-4ddf-8014-ad9c598601ae-scripts\") pod \"cinder-scheduler-0\" (UID: \"eaf38aac-d198-4ddf-8014-ad9c598601ae\") " pod="openstack/cinder-scheduler-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.285030 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaf38aac-d198-4ddf-8014-ad9c598601ae-config-data\") pod \"cinder-scheduler-0\" (UID: \"eaf38aac-d198-4ddf-8014-ad9c598601ae\") " pod="openstack/cinder-scheduler-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.299253 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.305309 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd2tc\" (UniqueName: 
\"kubernetes.io/projected/eaf38aac-d198-4ddf-8014-ad9c598601ae-kube-api-access-dd2tc\") pod \"cinder-scheduler-0\" (UID: \"eaf38aac-d198-4ddf-8014-ad9c598601ae\") " pod="openstack/cinder-scheduler-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.314819 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.353559 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aa67cd3-59ee-4a03-bc97-fc1fa5729e40-config\") pod \"dnsmasq-dns-6bb4fc677f-hxtm7\" (UID: \"1aa67cd3-59ee-4a03-bc97-fc1fa5729e40\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hxtm7" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.354169 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/391cd433-a6c7-4004-8f82-6c18506aac37-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"391cd433-a6c7-4004-8f82-6c18506aac37\") " pod="openstack/cinder-api-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.354365 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/391cd433-a6c7-4004-8f82-6c18506aac37-etc-machine-id\") pod \"cinder-api-0\" (UID: \"391cd433-a6c7-4004-8f82-6c18506aac37\") " pod="openstack/cinder-api-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.354538 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ghsx\" (UniqueName: \"kubernetes.io/projected/1aa67cd3-59ee-4a03-bc97-fc1fa5729e40-kube-api-access-8ghsx\") pod \"dnsmasq-dns-6bb4fc677f-hxtm7\" (UID: \"1aa67cd3-59ee-4a03-bc97-fc1fa5729e40\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hxtm7" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.354757 4795 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1aa67cd3-59ee-4a03-bc97-fc1fa5729e40-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-hxtm7\" (UID: \"1aa67cd3-59ee-4a03-bc97-fc1fa5729e40\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hxtm7" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.354856 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1aa67cd3-59ee-4a03-bc97-fc1fa5729e40-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-hxtm7\" (UID: \"1aa67cd3-59ee-4a03-bc97-fc1fa5729e40\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hxtm7" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.354936 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1aa67cd3-59ee-4a03-bc97-fc1fa5729e40-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-hxtm7\" (UID: \"1aa67cd3-59ee-4a03-bc97-fc1fa5729e40\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hxtm7" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.355009 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aa67cd3-59ee-4a03-bc97-fc1fa5729e40-config\") pod \"dnsmasq-dns-6bb4fc677f-hxtm7\" (UID: \"1aa67cd3-59ee-4a03-bc97-fc1fa5729e40\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hxtm7" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.355275 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1aa67cd3-59ee-4a03-bc97-fc1fa5729e40-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-hxtm7\" (UID: \"1aa67cd3-59ee-4a03-bc97-fc1fa5729e40\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hxtm7" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.355388 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/391cd433-a6c7-4004-8f82-6c18506aac37-config-data-custom\") pod \"cinder-api-0\" (UID: \"391cd433-a6c7-4004-8f82-6c18506aac37\") " pod="openstack/cinder-api-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.355517 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/391cd433-a6c7-4004-8f82-6c18506aac37-logs\") pod \"cinder-api-0\" (UID: \"391cd433-a6c7-4004-8f82-6c18506aac37\") " pod="openstack/cinder-api-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.373764 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdm26\" (UniqueName: \"kubernetes.io/projected/391cd433-a6c7-4004-8f82-6c18506aac37-kube-api-access-rdm26\") pod \"cinder-api-0\" (UID: \"391cd433-a6c7-4004-8f82-6c18506aac37\") " pod="openstack/cinder-api-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.374102 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/391cd433-a6c7-4004-8f82-6c18506aac37-scripts\") pod \"cinder-api-0\" (UID: \"391cd433-a6c7-4004-8f82-6c18506aac37\") " pod="openstack/cinder-api-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.374262 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/391cd433-a6c7-4004-8f82-6c18506aac37-config-data\") pod \"cinder-api-0\" (UID: \"391cd433-a6c7-4004-8f82-6c18506aac37\") " pod="openstack/cinder-api-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.356024 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1aa67cd3-59ee-4a03-bc97-fc1fa5729e40-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-hxtm7\" (UID: 
\"1aa67cd3-59ee-4a03-bc97-fc1fa5729e40\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hxtm7" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.358186 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1aa67cd3-59ee-4a03-bc97-fc1fa5729e40-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-hxtm7\" (UID: \"1aa67cd3-59ee-4a03-bc97-fc1fa5729e40\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hxtm7" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.357164 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1aa67cd3-59ee-4a03-bc97-fc1fa5729e40-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-hxtm7\" (UID: \"1aa67cd3-59ee-4a03-bc97-fc1fa5729e40\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hxtm7" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.357711 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1aa67cd3-59ee-4a03-bc97-fc1fa5729e40-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-hxtm7\" (UID: \"1aa67cd3-59ee-4a03-bc97-fc1fa5729e40\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hxtm7" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.400507 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ghsx\" (UniqueName: \"kubernetes.io/projected/1aa67cd3-59ee-4a03-bc97-fc1fa5729e40-kube-api-access-8ghsx\") pod \"dnsmasq-dns-6bb4fc677f-hxtm7\" (UID: \"1aa67cd3-59ee-4a03-bc97-fc1fa5729e40\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hxtm7" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.462373 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-hxtm7" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.480584 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/391cd433-a6c7-4004-8f82-6c18506aac37-config-data-custom\") pod \"cinder-api-0\" (UID: \"391cd433-a6c7-4004-8f82-6c18506aac37\") " pod="openstack/cinder-api-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.480707 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/391cd433-a6c7-4004-8f82-6c18506aac37-logs\") pod \"cinder-api-0\" (UID: \"391cd433-a6c7-4004-8f82-6c18506aac37\") " pod="openstack/cinder-api-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.480739 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdm26\" (UniqueName: \"kubernetes.io/projected/391cd433-a6c7-4004-8f82-6c18506aac37-kube-api-access-rdm26\") pod \"cinder-api-0\" (UID: \"391cd433-a6c7-4004-8f82-6c18506aac37\") " pod="openstack/cinder-api-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.480780 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/391cd433-a6c7-4004-8f82-6c18506aac37-scripts\") pod \"cinder-api-0\" (UID: \"391cd433-a6c7-4004-8f82-6c18506aac37\") " pod="openstack/cinder-api-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.480833 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/391cd433-a6c7-4004-8f82-6c18506aac37-config-data\") pod \"cinder-api-0\" (UID: \"391cd433-a6c7-4004-8f82-6c18506aac37\") " pod="openstack/cinder-api-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.480969 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/391cd433-a6c7-4004-8f82-6c18506aac37-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"391cd433-a6c7-4004-8f82-6c18506aac37\") " pod="openstack/cinder-api-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.481013 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/391cd433-a6c7-4004-8f82-6c18506aac37-etc-machine-id\") pod \"cinder-api-0\" (UID: \"391cd433-a6c7-4004-8f82-6c18506aac37\") " pod="openstack/cinder-api-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.481252 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/391cd433-a6c7-4004-8f82-6c18506aac37-etc-machine-id\") pod \"cinder-api-0\" (UID: \"391cd433-a6c7-4004-8f82-6c18506aac37\") " pod="openstack/cinder-api-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.482286 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/391cd433-a6c7-4004-8f82-6c18506aac37-logs\") pod \"cinder-api-0\" (UID: \"391cd433-a6c7-4004-8f82-6c18506aac37\") " pod="openstack/cinder-api-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.486298 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/391cd433-a6c7-4004-8f82-6c18506aac37-scripts\") pod \"cinder-api-0\" (UID: \"391cd433-a6c7-4004-8f82-6c18506aac37\") " pod="openstack/cinder-api-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.488085 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/391cd433-a6c7-4004-8f82-6c18506aac37-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"391cd433-a6c7-4004-8f82-6c18506aac37\") " pod="openstack/cinder-api-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.488807 4795 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/391cd433-a6c7-4004-8f82-6c18506aac37-config-data-custom\") pod \"cinder-api-0\" (UID: \"391cd433-a6c7-4004-8f82-6c18506aac37\") " pod="openstack/cinder-api-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.493861 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/391cd433-a6c7-4004-8f82-6c18506aac37-config-data\") pod \"cinder-api-0\" (UID: \"391cd433-a6c7-4004-8f82-6c18506aac37\") " pod="openstack/cinder-api-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.514425 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdm26\" (UniqueName: \"kubernetes.io/projected/391cd433-a6c7-4004-8f82-6c18506aac37-kube-api-access-rdm26\") pod \"cinder-api-0\" (UID: \"391cd433-a6c7-4004-8f82-6c18506aac37\") " pod="openstack/cinder-api-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.629254 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.776523 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-dc94996-zmk9m"] Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.783685 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-dc94996-zmk9m" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.786468 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.788196 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.810531 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-dc94996-zmk9m"] Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.890667 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6613a7a2-0f90-4a83-80ea-18e316d6338d-public-tls-certs\") pod \"barbican-api-dc94996-zmk9m\" (UID: \"6613a7a2-0f90-4a83-80ea-18e316d6338d\") " pod="openstack/barbican-api-dc94996-zmk9m" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.890743 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95ptw\" (UniqueName: \"kubernetes.io/projected/6613a7a2-0f90-4a83-80ea-18e316d6338d-kube-api-access-95ptw\") pod \"barbican-api-dc94996-zmk9m\" (UID: \"6613a7a2-0f90-4a83-80ea-18e316d6338d\") " pod="openstack/barbican-api-dc94996-zmk9m" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.890793 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6613a7a2-0f90-4a83-80ea-18e316d6338d-config-data\") pod \"barbican-api-dc94996-zmk9m\" (UID: \"6613a7a2-0f90-4a83-80ea-18e316d6338d\") " pod="openstack/barbican-api-dc94996-zmk9m" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.890870 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/6613a7a2-0f90-4a83-80ea-18e316d6338d-logs\") pod \"barbican-api-dc94996-zmk9m\" (UID: \"6613a7a2-0f90-4a83-80ea-18e316d6338d\") " pod="openstack/barbican-api-dc94996-zmk9m" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.890910 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6613a7a2-0f90-4a83-80ea-18e316d6338d-internal-tls-certs\") pod \"barbican-api-dc94996-zmk9m\" (UID: \"6613a7a2-0f90-4a83-80ea-18e316d6338d\") " pod="openstack/barbican-api-dc94996-zmk9m" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.890974 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6613a7a2-0f90-4a83-80ea-18e316d6338d-config-data-custom\") pod \"barbican-api-dc94996-zmk9m\" (UID: \"6613a7a2-0f90-4a83-80ea-18e316d6338d\") " pod="openstack/barbican-api-dc94996-zmk9m" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.891029 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6613a7a2-0f90-4a83-80ea-18e316d6338d-combined-ca-bundle\") pod \"barbican-api-dc94996-zmk9m\" (UID: \"6613a7a2-0f90-4a83-80ea-18e316d6338d\") " pod="openstack/barbican-api-dc94996-zmk9m" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.991918 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6613a7a2-0f90-4a83-80ea-18e316d6338d-public-tls-certs\") pod \"barbican-api-dc94996-zmk9m\" (UID: \"6613a7a2-0f90-4a83-80ea-18e316d6338d\") " pod="openstack/barbican-api-dc94996-zmk9m" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.991973 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95ptw\" (UniqueName: 
\"kubernetes.io/projected/6613a7a2-0f90-4a83-80ea-18e316d6338d-kube-api-access-95ptw\") pod \"barbican-api-dc94996-zmk9m\" (UID: \"6613a7a2-0f90-4a83-80ea-18e316d6338d\") " pod="openstack/barbican-api-dc94996-zmk9m" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.992013 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6613a7a2-0f90-4a83-80ea-18e316d6338d-config-data\") pod \"barbican-api-dc94996-zmk9m\" (UID: \"6613a7a2-0f90-4a83-80ea-18e316d6338d\") " pod="openstack/barbican-api-dc94996-zmk9m" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.992076 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6613a7a2-0f90-4a83-80ea-18e316d6338d-logs\") pod \"barbican-api-dc94996-zmk9m\" (UID: \"6613a7a2-0f90-4a83-80ea-18e316d6338d\") " pod="openstack/barbican-api-dc94996-zmk9m" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.992114 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6613a7a2-0f90-4a83-80ea-18e316d6338d-internal-tls-certs\") pod \"barbican-api-dc94996-zmk9m\" (UID: \"6613a7a2-0f90-4a83-80ea-18e316d6338d\") " pod="openstack/barbican-api-dc94996-zmk9m" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.992152 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6613a7a2-0f90-4a83-80ea-18e316d6338d-config-data-custom\") pod \"barbican-api-dc94996-zmk9m\" (UID: \"6613a7a2-0f90-4a83-80ea-18e316d6338d\") " pod="openstack/barbican-api-dc94996-zmk9m" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.992205 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6613a7a2-0f90-4a83-80ea-18e316d6338d-combined-ca-bundle\") 
pod \"barbican-api-dc94996-zmk9m\" (UID: \"6613a7a2-0f90-4a83-80ea-18e316d6338d\") " pod="openstack/barbican-api-dc94996-zmk9m" Nov 29 08:05:57 crc kubenswrapper[4795]: I1129 08:05:57.995212 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6613a7a2-0f90-4a83-80ea-18e316d6338d-logs\") pod \"barbican-api-dc94996-zmk9m\" (UID: \"6613a7a2-0f90-4a83-80ea-18e316d6338d\") " pod="openstack/barbican-api-dc94996-zmk9m" Nov 29 08:05:58 crc kubenswrapper[4795]: I1129 08:05:58.001239 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6613a7a2-0f90-4a83-80ea-18e316d6338d-config-data-custom\") pod \"barbican-api-dc94996-zmk9m\" (UID: \"6613a7a2-0f90-4a83-80ea-18e316d6338d\") " pod="openstack/barbican-api-dc94996-zmk9m" Nov 29 08:05:58 crc kubenswrapper[4795]: I1129 08:05:58.001880 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6613a7a2-0f90-4a83-80ea-18e316d6338d-combined-ca-bundle\") pod \"barbican-api-dc94996-zmk9m\" (UID: \"6613a7a2-0f90-4a83-80ea-18e316d6338d\") " pod="openstack/barbican-api-dc94996-zmk9m" Nov 29 08:05:58 crc kubenswrapper[4795]: I1129 08:05:58.002155 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6613a7a2-0f90-4a83-80ea-18e316d6338d-public-tls-certs\") pod \"barbican-api-dc94996-zmk9m\" (UID: \"6613a7a2-0f90-4a83-80ea-18e316d6338d\") " pod="openstack/barbican-api-dc94996-zmk9m" Nov 29 08:05:58 crc kubenswrapper[4795]: I1129 08:05:58.005695 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6613a7a2-0f90-4a83-80ea-18e316d6338d-config-data\") pod \"barbican-api-dc94996-zmk9m\" (UID: \"6613a7a2-0f90-4a83-80ea-18e316d6338d\") " pod="openstack/barbican-api-dc94996-zmk9m" Nov 
29 08:05:58 crc kubenswrapper[4795]: I1129 08:05:58.006053 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6613a7a2-0f90-4a83-80ea-18e316d6338d-internal-tls-certs\") pod \"barbican-api-dc94996-zmk9m\" (UID: \"6613a7a2-0f90-4a83-80ea-18e316d6338d\") " pod="openstack/barbican-api-dc94996-zmk9m" Nov 29 08:05:58 crc kubenswrapper[4795]: I1129 08:05:58.026286 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95ptw\" (UniqueName: \"kubernetes.io/projected/6613a7a2-0f90-4a83-80ea-18e316d6338d-kube-api-access-95ptw\") pod \"barbican-api-dc94996-zmk9m\" (UID: \"6613a7a2-0f90-4a83-80ea-18e316d6338d\") " pod="openstack/barbican-api-dc94996-zmk9m" Nov 29 08:05:58 crc kubenswrapper[4795]: I1129 08:05:58.142970 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-dc94996-zmk9m" Nov 29 08:05:58 crc kubenswrapper[4795]: I1129 08:05:58.733049 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688c87cc99-d8lng" podUID="2f892ddb-2efd-437e-9fae-9f7119f17847" containerName="dnsmasq-dns" containerID="cri-o://81de92269787b0ce7f44cad613a4054544be55e20e4d62cdc0a4c4efbdf98902" gracePeriod=10 Nov 29 08:05:59 crc kubenswrapper[4795]: I1129 08:05:59.106713 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5bdf57676-nn72s" Nov 29 08:05:59 crc kubenswrapper[4795]: I1129 08:05:59.862367 4795 generic.go:334] "Generic (PLEG): container finished" podID="2f892ddb-2efd-437e-9fae-9f7119f17847" containerID="81de92269787b0ce7f44cad613a4054544be55e20e4d62cdc0a4c4efbdf98902" exitCode=0 Nov 29 08:05:59 crc kubenswrapper[4795]: I1129 08:05:59.863384 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-d8lng" 
event={"ID":"2f892ddb-2efd-437e-9fae-9f7119f17847","Type":"ContainerDied","Data":"81de92269787b0ce7f44cad613a4054544be55e20e4d62cdc0a4c4efbdf98902"} Nov 29 08:06:00 crc kubenswrapper[4795]: I1129 08:06:00.768805 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 29 08:06:04 crc kubenswrapper[4795]: I1129 08:06:04.915940 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5c8566458-jwt4m" Nov 29 08:06:05 crc kubenswrapper[4795]: I1129 08:06:05.144245 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5c8566458-jwt4m" Nov 29 08:06:05 crc kubenswrapper[4795]: I1129 08:06:05.755003 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-d8lng" Nov 29 08:06:05 crc kubenswrapper[4795]: I1129 08:06:05.932669 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f892ddb-2efd-437e-9fae-9f7119f17847-ovsdbserver-sb\") pod \"2f892ddb-2efd-437e-9fae-9f7119f17847\" (UID: \"2f892ddb-2efd-437e-9fae-9f7119f17847\") " Nov 29 08:06:05 crc kubenswrapper[4795]: I1129 08:06:05.933031 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfm7b\" (UniqueName: \"kubernetes.io/projected/2f892ddb-2efd-437e-9fae-9f7119f17847-kube-api-access-sfm7b\") pod \"2f892ddb-2efd-437e-9fae-9f7119f17847\" (UID: \"2f892ddb-2efd-437e-9fae-9f7119f17847\") " Nov 29 08:06:05 crc kubenswrapper[4795]: I1129 08:06:05.933094 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2f892ddb-2efd-437e-9fae-9f7119f17847-dns-swift-storage-0\") pod \"2f892ddb-2efd-437e-9fae-9f7119f17847\" (UID: \"2f892ddb-2efd-437e-9fae-9f7119f17847\") " Nov 29 08:06:05 crc kubenswrapper[4795]: I1129 08:06:05.933134 
4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f892ddb-2efd-437e-9fae-9f7119f17847-config\") pod \"2f892ddb-2efd-437e-9fae-9f7119f17847\" (UID: \"2f892ddb-2efd-437e-9fae-9f7119f17847\") " Nov 29 08:06:05 crc kubenswrapper[4795]: I1129 08:06:05.933230 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f892ddb-2efd-437e-9fae-9f7119f17847-ovsdbserver-nb\") pod \"2f892ddb-2efd-437e-9fae-9f7119f17847\" (UID: \"2f892ddb-2efd-437e-9fae-9f7119f17847\") " Nov 29 08:06:05 crc kubenswrapper[4795]: I1129 08:06:05.933306 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f892ddb-2efd-437e-9fae-9f7119f17847-dns-svc\") pod \"2f892ddb-2efd-437e-9fae-9f7119f17847\" (UID: \"2f892ddb-2efd-437e-9fae-9f7119f17847\") " Nov 29 08:06:05 crc kubenswrapper[4795]: I1129 08:06:05.946772 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f892ddb-2efd-437e-9fae-9f7119f17847-kube-api-access-sfm7b" (OuterVolumeSpecName: "kube-api-access-sfm7b") pod "2f892ddb-2efd-437e-9fae-9f7119f17847" (UID: "2f892ddb-2efd-437e-9fae-9f7119f17847"). InnerVolumeSpecName "kube-api-access-sfm7b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:06:06 crc kubenswrapper[4795]: I1129 08:06:06.036247 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfm7b\" (UniqueName: \"kubernetes.io/projected/2f892ddb-2efd-437e-9fae-9f7119f17847-kube-api-access-sfm7b\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:06 crc kubenswrapper[4795]: I1129 08:06:06.058346 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f892ddb-2efd-437e-9fae-9f7119f17847-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2f892ddb-2efd-437e-9fae-9f7119f17847" (UID: "2f892ddb-2efd-437e-9fae-9f7119f17847"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:06:06 crc kubenswrapper[4795]: I1129 08:06:06.072521 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f892ddb-2efd-437e-9fae-9f7119f17847-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2f892ddb-2efd-437e-9fae-9f7119f17847" (UID: "2f892ddb-2efd-437e-9fae-9f7119f17847"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:06:06 crc kubenswrapper[4795]: I1129 08:06:06.077708 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-d8lng" event={"ID":"2f892ddb-2efd-437e-9fae-9f7119f17847","Type":"ContainerDied","Data":"a1412485efbae22f0109adc8c773920c43ab13e4ccab70cccdcbfec19d90f324"} Nov 29 08:06:06 crc kubenswrapper[4795]: I1129 08:06:06.077737 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-d8lng" Nov 29 08:06:06 crc kubenswrapper[4795]: I1129 08:06:06.077766 4795 scope.go:117] "RemoveContainer" containerID="81de92269787b0ce7f44cad613a4054544be55e20e4d62cdc0a4c4efbdf98902" Nov 29 08:06:06 crc kubenswrapper[4795]: I1129 08:06:06.089231 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f892ddb-2efd-437e-9fae-9f7119f17847-config" (OuterVolumeSpecName: "config") pod "2f892ddb-2efd-437e-9fae-9f7119f17847" (UID: "2f892ddb-2efd-437e-9fae-9f7119f17847"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:06:06 crc kubenswrapper[4795]: I1129 08:06:06.102962 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f892ddb-2efd-437e-9fae-9f7119f17847-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2f892ddb-2efd-437e-9fae-9f7119f17847" (UID: "2f892ddb-2efd-437e-9fae-9f7119f17847"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:06:06 crc kubenswrapper[4795]: I1129 08:06:06.114096 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f892ddb-2efd-437e-9fae-9f7119f17847-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2f892ddb-2efd-437e-9fae-9f7119f17847" (UID: "2f892ddb-2efd-437e-9fae-9f7119f17847"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:06:06 crc kubenswrapper[4795]: I1129 08:06:06.141030 4795 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f892ddb-2efd-437e-9fae-9f7119f17847-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:06 crc kubenswrapper[4795]: I1129 08:06:06.141070 4795 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f892ddb-2efd-437e-9fae-9f7119f17847-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:06 crc kubenswrapper[4795]: I1129 08:06:06.141086 4795 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2f892ddb-2efd-437e-9fae-9f7119f17847-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:06 crc kubenswrapper[4795]: I1129 08:06:06.141094 4795 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f892ddb-2efd-437e-9fae-9f7119f17847-config\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:06 crc kubenswrapper[4795]: I1129 08:06:06.141104 4795 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f892ddb-2efd-437e-9fae-9f7119f17847-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:06 crc kubenswrapper[4795]: I1129 08:06:06.153802 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-bcc9b4b57-btgmc" Nov 29 08:06:06 crc kubenswrapper[4795]: I1129 08:06:06.318455 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5bdf57676-nn72s"] Nov 29 08:06:06 crc kubenswrapper[4795]: I1129 08:06:06.318758 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5bdf57676-nn72s" podUID="ac907bde-e98c-4e4f-aa14-e105a4fe885c" containerName="neutron-api" 
containerID="cri-o://45a4c4aa94ff4f81822bbb57e1e819c08101d4d990fb49cd356761d8201d876a" gracePeriod=30 Nov 29 08:06:06 crc kubenswrapper[4795]: I1129 08:06:06.318881 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5bdf57676-nn72s" podUID="ac907bde-e98c-4e4f-aa14-e105a4fe885c" containerName="neutron-httpd" containerID="cri-o://b502aeb549c6281a4c29dfce7ff4bb46e86bfaf86e8bd5850874871365e44283" gracePeriod=30 Nov 29 08:06:06 crc kubenswrapper[4795]: I1129 08:06:06.414793 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-d8lng"] Nov 29 08:06:06 crc kubenswrapper[4795]: I1129 08:06:06.438777 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-d8lng"] Nov 29 08:06:06 crc kubenswrapper[4795]: E1129 08:06:06.782817 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Nov 29 08:06:06 crc kubenswrapper[4795]: E1129 08:06:06.783037 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hmxvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(c6dbced2-2ca2-4189-aad3-7a872ab6209c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 29 08:06:06 crc kubenswrapper[4795]: E1129 08:06:06.784253 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="c6dbced2-2ca2-4189-aad3-7a872ab6209c" Nov 29 08:06:06 crc kubenswrapper[4795]: I1129 08:06:06.851436 4795 scope.go:117] "RemoveContainer" containerID="5480f4446ced188ad401bd7d03117d9b9cf863cff6ea65f7007aad22047c72c2" Nov 29 08:06:07 crc kubenswrapper[4795]: I1129 08:06:07.099370 4795 generic.go:334] "Generic (PLEG): container finished" podID="ac907bde-e98c-4e4f-aa14-e105a4fe885c" containerID="b502aeb549c6281a4c29dfce7ff4bb46e86bfaf86e8bd5850874871365e44283" exitCode=0 Nov 29 
08:06:07 crc kubenswrapper[4795]: I1129 08:06:07.099813 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bdf57676-nn72s" event={"ID":"ac907bde-e98c-4e4f-aa14-e105a4fe885c","Type":"ContainerDied","Data":"b502aeb549c6281a4c29dfce7ff4bb46e86bfaf86e8bd5850874871365e44283"} Nov 29 08:06:07 crc kubenswrapper[4795]: I1129 08:06:07.102861 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c6dbced2-2ca2-4189-aad3-7a872ab6209c" containerName="ceilometer-notification-agent" containerID="cri-o://bfa68787ef29f44c70901173a1f1a2dc9583802c7d38a555e5dfe2f938b751b9" gracePeriod=30 Nov 29 08:06:07 crc kubenswrapper[4795]: I1129 08:06:07.103802 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c6dbced2-2ca2-4189-aad3-7a872ab6209c" containerName="sg-core" containerID="cri-o://5e54a4fa95674f1f2796053bdb12eb11ba4e4e51801a366a7d261847a2d43c38" gracePeriod=30 Nov 29 08:06:07 crc kubenswrapper[4795]: W1129 08:06:07.621025 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1aa67cd3_59ee_4a03_bc97_fc1fa5729e40.slice/crio-a0f480cc8e03d96e3904ff9f7bf19f8b00a0c28fb2c842fe45fac61d7aa037d0 WatchSource:0}: Error finding container a0f480cc8e03d96e3904ff9f7bf19f8b00a0c28fb2c842fe45fac61d7aa037d0: Status 404 returned error can't find the container with id a0f480cc8e03d96e3904ff9f7bf19f8b00a0c28fb2c842fe45fac61d7aa037d0 Nov 29 08:06:07 crc kubenswrapper[4795]: I1129 08:06:07.631797 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 29 08:06:07 crc kubenswrapper[4795]: I1129 08:06:07.647605 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-hxtm7"] Nov 29 08:06:07 crc kubenswrapper[4795]: I1129 08:06:07.832199 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/cinder-scheduler-0"] Nov 29 08:06:07 crc kubenswrapper[4795]: W1129 08:06:07.843120 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeaf38aac_d198_4ddf_8014_ad9c598601ae.slice/crio-17c7c68183bf149590f2ed8a294357aa3b1c106c7cb78736bdfc02dcba1de2a8 WatchSource:0}: Error finding container 17c7c68183bf149590f2ed8a294357aa3b1c106c7cb78736bdfc02dcba1de2a8: Status 404 returned error can't find the container with id 17c7c68183bf149590f2ed8a294357aa3b1c106c7cb78736bdfc02dcba1de2a8 Nov 29 08:06:07 crc kubenswrapper[4795]: I1129 08:06:07.853420 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-dc94996-zmk9m"] Nov 29 08:06:08 crc kubenswrapper[4795]: I1129 08:06:08.106287 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-688c87cc99-d8lng" podUID="2f892ddb-2efd-437e-9fae-9f7119f17847" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.198:5353: i/o timeout" Nov 29 08:06:08 crc kubenswrapper[4795]: I1129 08:06:08.173097 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"eaf38aac-d198-4ddf-8014-ad9c598601ae","Type":"ContainerStarted","Data":"17c7c68183bf149590f2ed8a294357aa3b1c106c7cb78736bdfc02dcba1de2a8"} Nov 29 08:06:08 crc kubenswrapper[4795]: I1129 08:06:08.195521 4795 generic.go:334] "Generic (PLEG): container finished" podID="c6dbced2-2ca2-4189-aad3-7a872ab6209c" containerID="5e54a4fa95674f1f2796053bdb12eb11ba4e4e51801a366a7d261847a2d43c38" exitCode=2 Nov 29 08:06:08 crc kubenswrapper[4795]: I1129 08:06:08.195842 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6dbced2-2ca2-4189-aad3-7a872ab6209c","Type":"ContainerDied","Data":"5e54a4fa95674f1f2796053bdb12eb11ba4e4e51801a366a7d261847a2d43c38"} Nov 29 08:06:08 crc kubenswrapper[4795]: I1129 08:06:08.197798 4795 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65899db58b-mz594" event={"ID":"0c64caf8-e57d-495f-985c-844edea0d146","Type":"ContainerStarted","Data":"8914683e02c997d02085d5458d74cdd60fe7dd8fc05bc250d489c4c3afda3f94"} Nov 29 08:06:08 crc kubenswrapper[4795]: I1129 08:06:08.198656 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"391cd433-a6c7-4004-8f82-6c18506aac37","Type":"ContainerStarted","Data":"4ac339296073aa4dda5f6e3c1173d0e80644dc7b1090777a8c71950d2eea3e9b"} Nov 29 08:06:08 crc kubenswrapper[4795]: I1129 08:06:08.227006 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-dc94996-zmk9m" event={"ID":"6613a7a2-0f90-4a83-80ea-18e316d6338d","Type":"ContainerStarted","Data":"9d76fd6b32f6fa18ae2d1a712a9ea20672cdff9b4172efabdf8e360bfe985368"} Nov 29 08:06:08 crc kubenswrapper[4795]: I1129 08:06:08.263922 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-hxtm7" event={"ID":"1aa67cd3-59ee-4a03-bc97-fc1fa5729e40","Type":"ContainerStarted","Data":"a0f480cc8e03d96e3904ff9f7bf19f8b00a0c28fb2c842fe45fac61d7aa037d0"} Nov 29 08:06:08 crc kubenswrapper[4795]: I1129 08:06:08.320959 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f892ddb-2efd-437e-9fae-9f7119f17847" path="/var/lib/kubelet/pods/2f892ddb-2efd-437e-9fae-9f7119f17847/volumes" Nov 29 08:06:08 crc kubenswrapper[4795]: I1129 08:06:08.323879 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6875d4f667-x5hjc" event={"ID":"1bca86ff-c24c-4d08-b7ed-be2433fe9735","Type":"ContainerStarted","Data":"478563ae2b4475e903ede3411d61bee5d9ff2f4d9e933ae06b47c0630bf37c56"} Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.302603 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.318112 4795 generic.go:334] "Generic (PLEG): container finished" podID="1aa67cd3-59ee-4a03-bc97-fc1fa5729e40" containerID="a4af938d4794130d91d0e6907f3fea2a831bc12ff4ce6ef2136ba533df8b3365" exitCode=0 Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.318206 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-hxtm7" event={"ID":"1aa67cd3-59ee-4a03-bc97-fc1fa5729e40","Type":"ContainerDied","Data":"a4af938d4794130d91d0e6907f3fea2a831bc12ff4ce6ef2136ba533df8b3365"} Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.335232 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6875d4f667-x5hjc" event={"ID":"1bca86ff-c24c-4d08-b7ed-be2433fe9735","Type":"ContainerStarted","Data":"825f78a9156689591ee04dc590996873a12f37ab0210dc3f74943f0d32644ce2"} Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.351857 4795 generic.go:334] "Generic (PLEG): container finished" podID="c6dbced2-2ca2-4189-aad3-7a872ab6209c" containerID="bfa68787ef29f44c70901173a1f1a2dc9583802c7d38a555e5dfe2f938b751b9" exitCode=0 Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.351951 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6dbced2-2ca2-4189-aad3-7a872ab6209c","Type":"ContainerDied","Data":"bfa68787ef29f44c70901173a1f1a2dc9583802c7d38a555e5dfe2f938b751b9"} Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.351979 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6dbced2-2ca2-4189-aad3-7a872ab6209c","Type":"ContainerDied","Data":"5832325ba17fad5c07c62549a6e44f28a51be2c23eccb49326c41f4870fae845"} Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.351997 4795 scope.go:117] "RemoveContainer" containerID="5e54a4fa95674f1f2796053bdb12eb11ba4e4e51801a366a7d261847a2d43c38" Nov 29 08:06:09 crc 
kubenswrapper[4795]: I1129 08:06:09.352146 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.353084 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6dbced2-2ca2-4189-aad3-7a872ab6209c-scripts\") pod \"c6dbced2-2ca2-4189-aad3-7a872ab6209c\" (UID: \"c6dbced2-2ca2-4189-aad3-7a872ab6209c\") " Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.353136 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6dbced2-2ca2-4189-aad3-7a872ab6209c-config-data\") pod \"c6dbced2-2ca2-4189-aad3-7a872ab6209c\" (UID: \"c6dbced2-2ca2-4189-aad3-7a872ab6209c\") " Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.353204 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6dbced2-2ca2-4189-aad3-7a872ab6209c-sg-core-conf-yaml\") pod \"c6dbced2-2ca2-4189-aad3-7a872ab6209c\" (UID: \"c6dbced2-2ca2-4189-aad3-7a872ab6209c\") " Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.353238 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6dbced2-2ca2-4189-aad3-7a872ab6209c-combined-ca-bundle\") pod \"c6dbced2-2ca2-4189-aad3-7a872ab6209c\" (UID: \"c6dbced2-2ca2-4189-aad3-7a872ab6209c\") " Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.353273 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6dbced2-2ca2-4189-aad3-7a872ab6209c-run-httpd\") pod \"c6dbced2-2ca2-4189-aad3-7a872ab6209c\" (UID: \"c6dbced2-2ca2-4189-aad3-7a872ab6209c\") " Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.353403 4795 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-hmxvx\" (UniqueName: \"kubernetes.io/projected/c6dbced2-2ca2-4189-aad3-7a872ab6209c-kube-api-access-hmxvx\") pod \"c6dbced2-2ca2-4189-aad3-7a872ab6209c\" (UID: \"c6dbced2-2ca2-4189-aad3-7a872ab6209c\") " Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.353474 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6dbced2-2ca2-4189-aad3-7a872ab6209c-log-httpd\") pod \"c6dbced2-2ca2-4189-aad3-7a872ab6209c\" (UID: \"c6dbced2-2ca2-4189-aad3-7a872ab6209c\") " Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.359345 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6dbced2-2ca2-4189-aad3-7a872ab6209c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c6dbced2-2ca2-4189-aad3-7a872ab6209c" (UID: "c6dbced2-2ca2-4189-aad3-7a872ab6209c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.361849 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6dbced2-2ca2-4189-aad3-7a872ab6209c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c6dbced2-2ca2-4189-aad3-7a872ab6209c" (UID: "c6dbced2-2ca2-4189-aad3-7a872ab6209c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.365138 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6dbced2-2ca2-4189-aad3-7a872ab6209c-scripts" (OuterVolumeSpecName: "scripts") pod "c6dbced2-2ca2-4189-aad3-7a872ab6209c" (UID: "c6dbced2-2ca2-4189-aad3-7a872ab6209c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.379143 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6dbced2-2ca2-4189-aad3-7a872ab6209c-kube-api-access-hmxvx" (OuterVolumeSpecName: "kube-api-access-hmxvx") pod "c6dbced2-2ca2-4189-aad3-7a872ab6209c" (UID: "c6dbced2-2ca2-4189-aad3-7a872ab6209c"). InnerVolumeSpecName "kube-api-access-hmxvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.383080 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65899db58b-mz594" event={"ID":"0c64caf8-e57d-495f-985c-844edea0d146","Type":"ContainerStarted","Data":"ead2c4b449c3835d37cf36fc5d24beeb5395640b5a650922664770191a2a6584"} Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.399747 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"391cd433-a6c7-4004-8f82-6c18506aac37","Type":"ContainerStarted","Data":"7d455e86aff8b7415da4a8b3f2855454a536a8680d679e0571bf4f35d43b2a52"} Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.410489 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6dbced2-2ca2-4189-aad3-7a872ab6209c-config-data" (OuterVolumeSpecName: "config-data") pod "c6dbced2-2ca2-4189-aad3-7a872ab6209c" (UID: "c6dbced2-2ca2-4189-aad3-7a872ab6209c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.429469 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-dc94996-zmk9m" event={"ID":"6613a7a2-0f90-4a83-80ea-18e316d6338d","Type":"ContainerStarted","Data":"3ec0dd23b1271f982bee2f962d280963b580a5e76fb77c5f3ba3dbe495ef15da"} Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.429581 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-dc94996-zmk9m" event={"ID":"6613a7a2-0f90-4a83-80ea-18e316d6338d","Type":"ContainerStarted","Data":"de490ea0d79a8e86fa64f9433a37cdf465897b75cd57b68e8a5debb68bcbd9b7"} Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.429801 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-dc94996-zmk9m" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.429827 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-dc94996-zmk9m" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.429898 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6dbced2-2ca2-4189-aad3-7a872ab6209c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6dbced2-2ca2-4189-aad3-7a872ab6209c" (UID: "c6dbced2-2ca2-4189-aad3-7a872ab6209c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.455154 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6875d4f667-x5hjc" podStartSLOduration=5.340536878 podStartE2EDuration="17.45513035s" podCreationTimestamp="2025-11-29 08:05:52 +0000 UTC" firstStartedPulling="2025-11-29 08:05:54.736836699 +0000 UTC m=+1600.712412489" lastFinishedPulling="2025-11-29 08:06:06.851430171 +0000 UTC m=+1612.827005961" observedRunningTime="2025-11-29 08:06:09.356147005 +0000 UTC m=+1615.331722785" watchObservedRunningTime="2025-11-29 08:06:09.45513035 +0000 UTC m=+1615.430706140" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.455386 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6dbced2-2ca2-4189-aad3-7a872ab6209c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c6dbced2-2ca2-4189-aad3-7a872ab6209c" (UID: "c6dbced2-2ca2-4189-aad3-7a872ab6209c"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.455627 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6dbced2-2ca2-4189-aad3-7a872ab6209c-sg-core-conf-yaml\") pod \"c6dbced2-2ca2-4189-aad3-7a872ab6209c\" (UID: \"c6dbced2-2ca2-4189-aad3-7a872ab6209c\") " Nov 29 08:06:09 crc kubenswrapper[4795]: W1129 08:06:09.456308 4795 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/c6dbced2-2ca2-4189-aad3-7a872ab6209c/volumes/kubernetes.io~secret/sg-core-conf-yaml Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.456327 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6dbced2-2ca2-4189-aad3-7a872ab6209c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c6dbced2-2ca2-4189-aad3-7a872ab6209c" (UID: "c6dbced2-2ca2-4189-aad3-7a872ab6209c"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.461233 4795 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6dbced2-2ca2-4189-aad3-7a872ab6209c-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.462436 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6dbced2-2ca2-4189-aad3-7a872ab6209c-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.463710 4795 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6dbced2-2ca2-4189-aad3-7a872ab6209c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.463749 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6dbced2-2ca2-4189-aad3-7a872ab6209c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.463763 4795 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6dbced2-2ca2-4189-aad3-7a872ab6209c-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.463811 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmxvx\" (UniqueName: \"kubernetes.io/projected/c6dbced2-2ca2-4189-aad3-7a872ab6209c-kube-api-access-hmxvx\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.463825 4795 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6dbced2-2ca2-4189-aad3-7a872ab6209c-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.469399 4795 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-65899db58b-mz594" podStartSLOduration=5.339114618 podStartE2EDuration="17.469374915s" podCreationTimestamp="2025-11-29 08:05:52 +0000 UTC" firstStartedPulling="2025-11-29 08:05:54.737193889 +0000 UTC m=+1600.712769679" lastFinishedPulling="2025-11-29 08:06:06.867454186 +0000 UTC m=+1612.843029976" observedRunningTime="2025-11-29 08:06:09.426258619 +0000 UTC m=+1615.401834409" watchObservedRunningTime="2025-11-29 08:06:09.469374915 +0000 UTC m=+1615.444950705" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.501824 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-dc94996-zmk9m" podStartSLOduration=12.501798857 podStartE2EDuration="12.501798857s" podCreationTimestamp="2025-11-29 08:05:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:06:09.463785236 +0000 UTC m=+1615.439361026" watchObservedRunningTime="2025-11-29 08:06:09.501798857 +0000 UTC m=+1615.477374657" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.599391 4795 scope.go:117] "RemoveContainer" containerID="bfa68787ef29f44c70901173a1f1a2dc9583802c7d38a555e5dfe2f938b751b9" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.675494 4795 scope.go:117] "RemoveContainer" containerID="5e54a4fa95674f1f2796053bdb12eb11ba4e4e51801a366a7d261847a2d43c38" Nov 29 08:06:09 crc kubenswrapper[4795]: E1129 08:06:09.678606 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e54a4fa95674f1f2796053bdb12eb11ba4e4e51801a366a7d261847a2d43c38\": container with ID starting with 5e54a4fa95674f1f2796053bdb12eb11ba4e4e51801a366a7d261847a2d43c38 not found: ID does not exist" containerID="5e54a4fa95674f1f2796053bdb12eb11ba4e4e51801a366a7d261847a2d43c38" Nov 29 08:06:09 crc 
kubenswrapper[4795]: I1129 08:06:09.678667 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e54a4fa95674f1f2796053bdb12eb11ba4e4e51801a366a7d261847a2d43c38"} err="failed to get container status \"5e54a4fa95674f1f2796053bdb12eb11ba4e4e51801a366a7d261847a2d43c38\": rpc error: code = NotFound desc = could not find container \"5e54a4fa95674f1f2796053bdb12eb11ba4e4e51801a366a7d261847a2d43c38\": container with ID starting with 5e54a4fa95674f1f2796053bdb12eb11ba4e4e51801a366a7d261847a2d43c38 not found: ID does not exist" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.678699 4795 scope.go:117] "RemoveContainer" containerID="bfa68787ef29f44c70901173a1f1a2dc9583802c7d38a555e5dfe2f938b751b9" Nov 29 08:06:09 crc kubenswrapper[4795]: E1129 08:06:09.679409 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfa68787ef29f44c70901173a1f1a2dc9583802c7d38a555e5dfe2f938b751b9\": container with ID starting with bfa68787ef29f44c70901173a1f1a2dc9583802c7d38a555e5dfe2f938b751b9 not found: ID does not exist" containerID="bfa68787ef29f44c70901173a1f1a2dc9583802c7d38a555e5dfe2f938b751b9" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.679455 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfa68787ef29f44c70901173a1f1a2dc9583802c7d38a555e5dfe2f938b751b9"} err="failed to get container status \"bfa68787ef29f44c70901173a1f1a2dc9583802c7d38a555e5dfe2f938b751b9\": rpc error: code = NotFound desc = could not find container \"bfa68787ef29f44c70901173a1f1a2dc9583802c7d38a555e5dfe2f938b751b9\": container with ID starting with bfa68787ef29f44c70901173a1f1a2dc9583802c7d38a555e5dfe2f938b751b9 not found: ID does not exist" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.742673 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:06:09 crc kubenswrapper[4795]: 
I1129 08:06:09.751870 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.807359 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:06:09 crc kubenswrapper[4795]: E1129 08:06:09.808123 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6dbced2-2ca2-4189-aad3-7a872ab6209c" containerName="sg-core" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.808137 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6dbced2-2ca2-4189-aad3-7a872ab6209c" containerName="sg-core" Nov 29 08:06:09 crc kubenswrapper[4795]: E1129 08:06:09.808161 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6dbced2-2ca2-4189-aad3-7a872ab6209c" containerName="ceilometer-notification-agent" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.808169 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6dbced2-2ca2-4189-aad3-7a872ab6209c" containerName="ceilometer-notification-agent" Nov 29 08:06:09 crc kubenswrapper[4795]: E1129 08:06:09.808195 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f892ddb-2efd-437e-9fae-9f7119f17847" containerName="init" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.808201 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f892ddb-2efd-437e-9fae-9f7119f17847" containerName="init" Nov 29 08:06:09 crc kubenswrapper[4795]: E1129 08:06:09.808211 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f892ddb-2efd-437e-9fae-9f7119f17847" containerName="dnsmasq-dns" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.808217 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f892ddb-2efd-437e-9fae-9f7119f17847" containerName="dnsmasq-dns" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.808430 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6dbced2-2ca2-4189-aad3-7a872ab6209c" 
containerName="sg-core" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.808448 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f892ddb-2efd-437e-9fae-9f7119f17847" containerName="dnsmasq-dns" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.808463 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6dbced2-2ca2-4189-aad3-7a872ab6209c" containerName="ceilometer-notification-agent" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.810970 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.819225 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.819493 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.870949 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.982470 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b070ef9-63f2-4d88-b5e1-3de486b62c80-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6b070ef9-63f2-4d88-b5e1-3de486b62c80\") " pod="openstack/ceilometer-0" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.983098 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9g5c\" (UniqueName: \"kubernetes.io/projected/6b070ef9-63f2-4d88-b5e1-3de486b62c80-kube-api-access-z9g5c\") pod \"ceilometer-0\" (UID: \"6b070ef9-63f2-4d88-b5e1-3de486b62c80\") " pod="openstack/ceilometer-0" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.983409 4795 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b070ef9-63f2-4d88-b5e1-3de486b62c80-run-httpd\") pod \"ceilometer-0\" (UID: \"6b070ef9-63f2-4d88-b5e1-3de486b62c80\") " pod="openstack/ceilometer-0" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.983705 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b070ef9-63f2-4d88-b5e1-3de486b62c80-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6b070ef9-63f2-4d88-b5e1-3de486b62c80\") " pod="openstack/ceilometer-0" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.984053 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b070ef9-63f2-4d88-b5e1-3de486b62c80-config-data\") pod \"ceilometer-0\" (UID: \"6b070ef9-63f2-4d88-b5e1-3de486b62c80\") " pod="openstack/ceilometer-0" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.984095 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b070ef9-63f2-4d88-b5e1-3de486b62c80-log-httpd\") pod \"ceilometer-0\" (UID: \"6b070ef9-63f2-4d88-b5e1-3de486b62c80\") " pod="openstack/ceilometer-0" Nov 29 08:06:09 crc kubenswrapper[4795]: I1129 08:06:09.984226 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b070ef9-63f2-4d88-b5e1-3de486b62c80-scripts\") pod \"ceilometer-0\" (UID: \"6b070ef9-63f2-4d88-b5e1-3de486b62c80\") " pod="openstack/ceilometer-0" Nov 29 08:06:10 crc kubenswrapper[4795]: I1129 08:06:10.088173 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9g5c\" (UniqueName: 
\"kubernetes.io/projected/6b070ef9-63f2-4d88-b5e1-3de486b62c80-kube-api-access-z9g5c\") pod \"ceilometer-0\" (UID: \"6b070ef9-63f2-4d88-b5e1-3de486b62c80\") " pod="openstack/ceilometer-0" Nov 29 08:06:10 crc kubenswrapper[4795]: I1129 08:06:10.088305 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b070ef9-63f2-4d88-b5e1-3de486b62c80-run-httpd\") pod \"ceilometer-0\" (UID: \"6b070ef9-63f2-4d88-b5e1-3de486b62c80\") " pod="openstack/ceilometer-0" Nov 29 08:06:10 crc kubenswrapper[4795]: I1129 08:06:10.088944 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b070ef9-63f2-4d88-b5e1-3de486b62c80-run-httpd\") pod \"ceilometer-0\" (UID: \"6b070ef9-63f2-4d88-b5e1-3de486b62c80\") " pod="openstack/ceilometer-0" Nov 29 08:06:10 crc kubenswrapper[4795]: I1129 08:06:10.089041 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b070ef9-63f2-4d88-b5e1-3de486b62c80-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6b070ef9-63f2-4d88-b5e1-3de486b62c80\") " pod="openstack/ceilometer-0" Nov 29 08:06:10 crc kubenswrapper[4795]: I1129 08:06:10.089758 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b070ef9-63f2-4d88-b5e1-3de486b62c80-config-data\") pod \"ceilometer-0\" (UID: \"6b070ef9-63f2-4d88-b5e1-3de486b62c80\") " pod="openstack/ceilometer-0" Nov 29 08:06:10 crc kubenswrapper[4795]: I1129 08:06:10.089788 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b070ef9-63f2-4d88-b5e1-3de486b62c80-log-httpd\") pod \"ceilometer-0\" (UID: \"6b070ef9-63f2-4d88-b5e1-3de486b62c80\") " pod="openstack/ceilometer-0" Nov 29 08:06:10 crc kubenswrapper[4795]: I1129 08:06:10.090091 4795 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b070ef9-63f2-4d88-b5e1-3de486b62c80-log-httpd\") pod \"ceilometer-0\" (UID: \"6b070ef9-63f2-4d88-b5e1-3de486b62c80\") " pod="openstack/ceilometer-0" Nov 29 08:06:10 crc kubenswrapper[4795]: I1129 08:06:10.090577 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b070ef9-63f2-4d88-b5e1-3de486b62c80-scripts\") pod \"ceilometer-0\" (UID: \"6b070ef9-63f2-4d88-b5e1-3de486b62c80\") " pod="openstack/ceilometer-0" Nov 29 08:06:10 crc kubenswrapper[4795]: I1129 08:06:10.091062 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b070ef9-63f2-4d88-b5e1-3de486b62c80-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6b070ef9-63f2-4d88-b5e1-3de486b62c80\") " pod="openstack/ceilometer-0" Nov 29 08:06:10 crc kubenswrapper[4795]: I1129 08:06:10.096355 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b070ef9-63f2-4d88-b5e1-3de486b62c80-scripts\") pod \"ceilometer-0\" (UID: \"6b070ef9-63f2-4d88-b5e1-3de486b62c80\") " pod="openstack/ceilometer-0" Nov 29 08:06:10 crc kubenswrapper[4795]: I1129 08:06:10.096872 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b070ef9-63f2-4d88-b5e1-3de486b62c80-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6b070ef9-63f2-4d88-b5e1-3de486b62c80\") " pod="openstack/ceilometer-0" Nov 29 08:06:10 crc kubenswrapper[4795]: I1129 08:06:10.097537 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b070ef9-63f2-4d88-b5e1-3de486b62c80-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6b070ef9-63f2-4d88-b5e1-3de486b62c80\") " 
pod="openstack/ceilometer-0" Nov 29 08:06:10 crc kubenswrapper[4795]: I1129 08:06:10.107924 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b070ef9-63f2-4d88-b5e1-3de486b62c80-config-data\") pod \"ceilometer-0\" (UID: \"6b070ef9-63f2-4d88-b5e1-3de486b62c80\") " pod="openstack/ceilometer-0" Nov 29 08:06:10 crc kubenswrapper[4795]: I1129 08:06:10.108702 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9g5c\" (UniqueName: \"kubernetes.io/projected/6b070ef9-63f2-4d88-b5e1-3de486b62c80-kube-api-access-z9g5c\") pod \"ceilometer-0\" (UID: \"6b070ef9-63f2-4d88-b5e1-3de486b62c80\") " pod="openstack/ceilometer-0" Nov 29 08:06:10 crc kubenswrapper[4795]: I1129 08:06:10.315725 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:06:10 crc kubenswrapper[4795]: I1129 08:06:10.324353 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6dbced2-2ca2-4189-aad3-7a872ab6209c" path="/var/lib/kubelet/pods/c6dbced2-2ca2-4189-aad3-7a872ab6209c/volumes" Nov 29 08:06:10 crc kubenswrapper[4795]: I1129 08:06:10.822682 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"391cd433-a6c7-4004-8f82-6c18506aac37","Type":"ContainerStarted","Data":"45aff4e04f436c934e05f11902bc56abc46831df5b4f2050348a229d16d44142"} Nov 29 08:06:10 crc kubenswrapper[4795]: I1129 08:06:10.822984 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="391cd433-a6c7-4004-8f82-6c18506aac37" containerName="cinder-api-log" containerID="cri-o://7d455e86aff8b7415da4a8b3f2855454a536a8680d679e0571bf4f35d43b2a52" gracePeriod=30 Nov 29 08:06:10 crc kubenswrapper[4795]: I1129 08:06:10.823648 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 29 08:06:10 crc kubenswrapper[4795]: 
I1129 08:06:10.823740 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="391cd433-a6c7-4004-8f82-6c18506aac37" containerName="cinder-api" containerID="cri-o://45aff4e04f436c934e05f11902bc56abc46831df5b4f2050348a229d16d44142" gracePeriod=30 Nov 29 08:06:10 crc kubenswrapper[4795]: I1129 08:06:10.988946 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=13.988915674 podStartE2EDuration="13.988915674s" podCreationTimestamp="2025-11-29 08:05:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:06:10.945413478 +0000 UTC m=+1616.920989258" watchObservedRunningTime="2025-11-29 08:06:10.988915674 +0000 UTC m=+1616.964491484" Nov 29 08:06:11 crc kubenswrapper[4795]: I1129 08:06:11.035679 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-hxtm7" event={"ID":"1aa67cd3-59ee-4a03-bc97-fc1fa5729e40","Type":"ContainerStarted","Data":"6df4be89eb8944158fdfcea161de794cf380ad51baaf08cca57f9debff70642c"} Nov 29 08:06:11 crc kubenswrapper[4795]: I1129 08:06:11.075131 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bb4fc677f-hxtm7" podStartSLOduration=14.075111215 podStartE2EDuration="14.075111215s" podCreationTimestamp="2025-11-29 08:05:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:06:11.067174339 +0000 UTC m=+1617.042750129" watchObservedRunningTime="2025-11-29 08:06:11.075111215 +0000 UTC m=+1617.050687005" Nov 29 08:06:11 crc kubenswrapper[4795]: I1129 08:06:11.251958 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:06:12 crc kubenswrapper[4795]: I1129 08:06:12.090926 4795 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ceilometer-0" event={"ID":"6b070ef9-63f2-4d88-b5e1-3de486b62c80","Type":"ContainerStarted","Data":"14fd4eec865107536815b05b299d141824d8f4b2e49120dd4b502708ea9a75fa"} Nov 29 08:06:12 crc kubenswrapper[4795]: I1129 08:06:12.117170 4795 generic.go:334] "Generic (PLEG): container finished" podID="391cd433-a6c7-4004-8f82-6c18506aac37" containerID="45aff4e04f436c934e05f11902bc56abc46831df5b4f2050348a229d16d44142" exitCode=0 Nov 29 08:06:12 crc kubenswrapper[4795]: I1129 08:06:12.117212 4795 generic.go:334] "Generic (PLEG): container finished" podID="391cd433-a6c7-4004-8f82-6c18506aac37" containerID="7d455e86aff8b7415da4a8b3f2855454a536a8680d679e0571bf4f35d43b2a52" exitCode=143 Nov 29 08:06:12 crc kubenswrapper[4795]: I1129 08:06:12.117286 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"391cd433-a6c7-4004-8f82-6c18506aac37","Type":"ContainerDied","Data":"45aff4e04f436c934e05f11902bc56abc46831df5b4f2050348a229d16d44142"} Nov 29 08:06:12 crc kubenswrapper[4795]: I1129 08:06:12.117317 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"391cd433-a6c7-4004-8f82-6c18506aac37","Type":"ContainerDied","Data":"7d455e86aff8b7415da4a8b3f2855454a536a8680d679e0571bf4f35d43b2a52"} Nov 29 08:06:12 crc kubenswrapper[4795]: I1129 08:06:12.145650 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"eaf38aac-d198-4ddf-8014-ad9c598601ae","Type":"ContainerStarted","Data":"3ec6fd0db9e17849ba639d7a4a0906d5cf03cec6c036c99772f206ff37d032b9"} Nov 29 08:06:12 crc kubenswrapper[4795]: I1129 08:06:12.145776 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bb4fc677f-hxtm7" Nov 29 08:06:12 crc kubenswrapper[4795]: I1129 08:06:12.775979 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 29 08:06:12 crc kubenswrapper[4795]: I1129 08:06:12.892439 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/391cd433-a6c7-4004-8f82-6c18506aac37-config-data\") pod \"391cd433-a6c7-4004-8f82-6c18506aac37\" (UID: \"391cd433-a6c7-4004-8f82-6c18506aac37\") " Nov 29 08:06:12 crc kubenswrapper[4795]: I1129 08:06:12.892548 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/391cd433-a6c7-4004-8f82-6c18506aac37-scripts\") pod \"391cd433-a6c7-4004-8f82-6c18506aac37\" (UID: \"391cd433-a6c7-4004-8f82-6c18506aac37\") " Nov 29 08:06:12 crc kubenswrapper[4795]: I1129 08:06:12.892696 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/391cd433-a6c7-4004-8f82-6c18506aac37-config-data-custom\") pod \"391cd433-a6c7-4004-8f82-6c18506aac37\" (UID: \"391cd433-a6c7-4004-8f82-6c18506aac37\") " Nov 29 08:06:12 crc kubenswrapper[4795]: I1129 08:06:12.892766 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/391cd433-a6c7-4004-8f82-6c18506aac37-etc-machine-id\") pod \"391cd433-a6c7-4004-8f82-6c18506aac37\" (UID: \"391cd433-a6c7-4004-8f82-6c18506aac37\") " Nov 29 08:06:12 crc kubenswrapper[4795]: I1129 08:06:12.892792 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdm26\" (UniqueName: \"kubernetes.io/projected/391cd433-a6c7-4004-8f82-6c18506aac37-kube-api-access-rdm26\") pod \"391cd433-a6c7-4004-8f82-6c18506aac37\" (UID: \"391cd433-a6c7-4004-8f82-6c18506aac37\") " Nov 29 08:06:12 crc kubenswrapper[4795]: I1129 08:06:12.892857 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/391cd433-a6c7-4004-8f82-6c18506aac37-combined-ca-bundle\") pod \"391cd433-a6c7-4004-8f82-6c18506aac37\" (UID: \"391cd433-a6c7-4004-8f82-6c18506aac37\") " Nov 29 08:06:12 crc kubenswrapper[4795]: I1129 08:06:12.893044 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/391cd433-a6c7-4004-8f82-6c18506aac37-logs\") pod \"391cd433-a6c7-4004-8f82-6c18506aac37\" (UID: \"391cd433-a6c7-4004-8f82-6c18506aac37\") " Nov 29 08:06:12 crc kubenswrapper[4795]: I1129 08:06:12.893865 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/391cd433-a6c7-4004-8f82-6c18506aac37-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "391cd433-a6c7-4004-8f82-6c18506aac37" (UID: "391cd433-a6c7-4004-8f82-6c18506aac37"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 08:06:12 crc kubenswrapper[4795]: I1129 08:06:12.894230 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/391cd433-a6c7-4004-8f82-6c18506aac37-logs" (OuterVolumeSpecName: "logs") pod "391cd433-a6c7-4004-8f82-6c18506aac37" (UID: "391cd433-a6c7-4004-8f82-6c18506aac37"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:06:12 crc kubenswrapper[4795]: I1129 08:06:12.901330 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/391cd433-a6c7-4004-8f82-6c18506aac37-kube-api-access-rdm26" (OuterVolumeSpecName: "kube-api-access-rdm26") pod "391cd433-a6c7-4004-8f82-6c18506aac37" (UID: "391cd433-a6c7-4004-8f82-6c18506aac37"). InnerVolumeSpecName "kube-api-access-rdm26". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:06:12 crc kubenswrapper[4795]: I1129 08:06:12.901898 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/391cd433-a6c7-4004-8f82-6c18506aac37-scripts" (OuterVolumeSpecName: "scripts") pod "391cd433-a6c7-4004-8f82-6c18506aac37" (UID: "391cd433-a6c7-4004-8f82-6c18506aac37"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:06:12 crc kubenswrapper[4795]: I1129 08:06:12.905023 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/391cd433-a6c7-4004-8f82-6c18506aac37-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "391cd433-a6c7-4004-8f82-6c18506aac37" (UID: "391cd433-a6c7-4004-8f82-6c18506aac37"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:06:12 crc kubenswrapper[4795]: I1129 08:06:12.995999 4795 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/391cd433-a6c7-4004-8f82-6c18506aac37-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:12 crc kubenswrapper[4795]: I1129 08:06:12.996041 4795 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/391cd433-a6c7-4004-8f82-6c18506aac37-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:12 crc kubenswrapper[4795]: I1129 08:06:12.996054 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdm26\" (UniqueName: \"kubernetes.io/projected/391cd433-a6c7-4004-8f82-6c18506aac37-kube-api-access-rdm26\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:12 crc kubenswrapper[4795]: I1129 08:06:12.996069 4795 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/391cd433-a6c7-4004-8f82-6c18506aac37-logs\") on node \"crc\" DevicePath \"\"" Nov 29 
08:06:12 crc kubenswrapper[4795]: I1129 08:06:12.996084 4795 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/391cd433-a6c7-4004-8f82-6c18506aac37-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.000872 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/391cd433-a6c7-4004-8f82-6c18506aac37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "391cd433-a6c7-4004-8f82-6c18506aac37" (UID: "391cd433-a6c7-4004-8f82-6c18506aac37"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.001733 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/391cd433-a6c7-4004-8f82-6c18506aac37-config-data" (OuterVolumeSpecName: "config-data") pod "391cd433-a6c7-4004-8f82-6c18506aac37" (UID: "391cd433-a6c7-4004-8f82-6c18506aac37"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.098180 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/391cd433-a6c7-4004-8f82-6c18506aac37-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.098212 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/391cd433-a6c7-4004-8f82-6c18506aac37-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.151680 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-8686d8994d-2mhmq" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.221433 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"eaf38aac-d198-4ddf-8014-ad9c598601ae","Type":"ContainerStarted","Data":"9e1daf68c4a37900b3a214484ab6491c0a68c9e6467b68bb828f1fb110187eb7"} Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.246848 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b070ef9-63f2-4d88-b5e1-3de486b62c80","Type":"ContainerStarted","Data":"11345ce1abaf5eba72c8acbedd384cd7249a43800459b32530fea8476aa2dce8"} Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.256520 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=15.408680209 podStartE2EDuration="17.256498045s" podCreationTimestamp="2025-11-29 08:05:56 +0000 UTC" firstStartedPulling="2025-11-29 08:06:07.859461454 +0000 UTC m=+1613.835037244" lastFinishedPulling="2025-11-29 08:06:09.70727929 +0000 UTC m=+1615.682855080" observedRunningTime="2025-11-29 08:06:13.250566017 +0000 UTC m=+1619.226141817" watchObservedRunningTime="2025-11-29 08:06:13.256498045 +0000 UTC m=+1619.232073835" Nov 29 
08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.259494 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.260099 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"391cd433-a6c7-4004-8f82-6c18506aac37","Type":"ContainerDied","Data":"4ac339296073aa4dda5f6e3c1173d0e80644dc7b1090777a8c71950d2eea3e9b"} Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.260145 4795 scope.go:117] "RemoveContainer" containerID="45aff4e04f436c934e05f11902bc56abc46831df5b4f2050348a229d16d44142" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.293454 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-8686d8994d-2mhmq" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.347777 4795 scope.go:117] "RemoveContainer" containerID="7d455e86aff8b7415da4a8b3f2855454a536a8680d679e0571bf4f35d43b2a52" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.425948 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.438432 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.467661 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 29 08:06:13 crc kubenswrapper[4795]: E1129 08:06:13.468430 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="391cd433-a6c7-4004-8f82-6c18506aac37" containerName="cinder-api-log" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.468548 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="391cd433-a6c7-4004-8f82-6c18506aac37" containerName="cinder-api-log" Nov 29 08:06:13 crc kubenswrapper[4795]: E1129 08:06:13.468701 4795 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="391cd433-a6c7-4004-8f82-6c18506aac37" containerName="cinder-api" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.468777 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="391cd433-a6c7-4004-8f82-6c18506aac37" containerName="cinder-api" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.469153 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="391cd433-a6c7-4004-8f82-6c18506aac37" containerName="cinder-api-log" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.469250 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="391cd433-a6c7-4004-8f82-6c18506aac37" containerName="cinder-api" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.470636 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.486247 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.488555 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.488856 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.496883 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.641419 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/477216f2-bd7e-4768-9a1f-53915135fbc3-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"477216f2-bd7e-4768-9a1f-53915135fbc3\") " pod="openstack/cinder-api-0" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.641848 4795 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/477216f2-bd7e-4768-9a1f-53915135fbc3-config-data-custom\") pod \"cinder-api-0\" (UID: \"477216f2-bd7e-4768-9a1f-53915135fbc3\") " pod="openstack/cinder-api-0" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.641876 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/477216f2-bd7e-4768-9a1f-53915135fbc3-scripts\") pod \"cinder-api-0\" (UID: \"477216f2-bd7e-4768-9a1f-53915135fbc3\") " pod="openstack/cinder-api-0" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.641920 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/477216f2-bd7e-4768-9a1f-53915135fbc3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"477216f2-bd7e-4768-9a1f-53915135fbc3\") " pod="openstack/cinder-api-0" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.641960 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/477216f2-bd7e-4768-9a1f-53915135fbc3-logs\") pod \"cinder-api-0\" (UID: \"477216f2-bd7e-4768-9a1f-53915135fbc3\") " pod="openstack/cinder-api-0" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.641992 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/477216f2-bd7e-4768-9a1f-53915135fbc3-public-tls-certs\") pod \"cinder-api-0\" (UID: \"477216f2-bd7e-4768-9a1f-53915135fbc3\") " pod="openstack/cinder-api-0" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.642033 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/477216f2-bd7e-4768-9a1f-53915135fbc3-config-data\") pod \"cinder-api-0\" (UID: \"477216f2-bd7e-4768-9a1f-53915135fbc3\") " pod="openstack/cinder-api-0" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.642071 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrg7d\" (UniqueName: \"kubernetes.io/projected/477216f2-bd7e-4768-9a1f-53915135fbc3-kube-api-access-nrg7d\") pod \"cinder-api-0\" (UID: \"477216f2-bd7e-4768-9a1f-53915135fbc3\") " pod="openstack/cinder-api-0" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.642158 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/477216f2-bd7e-4768-9a1f-53915135fbc3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"477216f2-bd7e-4768-9a1f-53915135fbc3\") " pod="openstack/cinder-api-0" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.755795 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/477216f2-bd7e-4768-9a1f-53915135fbc3-logs\") pod \"cinder-api-0\" (UID: \"477216f2-bd7e-4768-9a1f-53915135fbc3\") " pod="openstack/cinder-api-0" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.756192 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/477216f2-bd7e-4768-9a1f-53915135fbc3-public-tls-certs\") pod \"cinder-api-0\" (UID: \"477216f2-bd7e-4768-9a1f-53915135fbc3\") " pod="openstack/cinder-api-0" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.756499 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/477216f2-bd7e-4768-9a1f-53915135fbc3-config-data\") pod \"cinder-api-0\" (UID: \"477216f2-bd7e-4768-9a1f-53915135fbc3\") " pod="openstack/cinder-api-0" Nov 29 08:06:13 
crc kubenswrapper[4795]: I1129 08:06:13.756617 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrg7d\" (UniqueName: \"kubernetes.io/projected/477216f2-bd7e-4768-9a1f-53915135fbc3-kube-api-access-nrg7d\") pod \"cinder-api-0\" (UID: \"477216f2-bd7e-4768-9a1f-53915135fbc3\") " pod="openstack/cinder-api-0" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.756924 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/477216f2-bd7e-4768-9a1f-53915135fbc3-logs\") pod \"cinder-api-0\" (UID: \"477216f2-bd7e-4768-9a1f-53915135fbc3\") " pod="openstack/cinder-api-0" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.756951 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/477216f2-bd7e-4768-9a1f-53915135fbc3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"477216f2-bd7e-4768-9a1f-53915135fbc3\") " pod="openstack/cinder-api-0" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.757296 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/477216f2-bd7e-4768-9a1f-53915135fbc3-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"477216f2-bd7e-4768-9a1f-53915135fbc3\") " pod="openstack/cinder-api-0" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.757355 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/477216f2-bd7e-4768-9a1f-53915135fbc3-config-data-custom\") pod \"cinder-api-0\" (UID: \"477216f2-bd7e-4768-9a1f-53915135fbc3\") " pod="openstack/cinder-api-0" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.757392 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/477216f2-bd7e-4768-9a1f-53915135fbc3-scripts\") pod \"cinder-api-0\" (UID: \"477216f2-bd7e-4768-9a1f-53915135fbc3\") " pod="openstack/cinder-api-0" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.757557 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/477216f2-bd7e-4768-9a1f-53915135fbc3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"477216f2-bd7e-4768-9a1f-53915135fbc3\") " pod="openstack/cinder-api-0" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.757006 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/477216f2-bd7e-4768-9a1f-53915135fbc3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"477216f2-bd7e-4768-9a1f-53915135fbc3\") " pod="openstack/cinder-api-0" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.771874 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/477216f2-bd7e-4768-9a1f-53915135fbc3-config-data\") pod \"cinder-api-0\" (UID: \"477216f2-bd7e-4768-9a1f-53915135fbc3\") " pod="openstack/cinder-api-0" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.784069 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/477216f2-bd7e-4768-9a1f-53915135fbc3-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"477216f2-bd7e-4768-9a1f-53915135fbc3\") " pod="openstack/cinder-api-0" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.797396 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrg7d\" (UniqueName: \"kubernetes.io/projected/477216f2-bd7e-4768-9a1f-53915135fbc3-kube-api-access-nrg7d\") pod \"cinder-api-0\" (UID: \"477216f2-bd7e-4768-9a1f-53915135fbc3\") " pod="openstack/cinder-api-0" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 
08:06:13.835087 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/477216f2-bd7e-4768-9a1f-53915135fbc3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"477216f2-bd7e-4768-9a1f-53915135fbc3\") " pod="openstack/cinder-api-0" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.835569 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/477216f2-bd7e-4768-9a1f-53915135fbc3-public-tls-certs\") pod \"cinder-api-0\" (UID: \"477216f2-bd7e-4768-9a1f-53915135fbc3\") " pod="openstack/cinder-api-0" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.840472 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/477216f2-bd7e-4768-9a1f-53915135fbc3-config-data-custom\") pod \"cinder-api-0\" (UID: \"477216f2-bd7e-4768-9a1f-53915135fbc3\") " pod="openstack/cinder-api-0" Nov 29 08:06:13 crc kubenswrapper[4795]: I1129 08:06:13.860020 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/477216f2-bd7e-4768-9a1f-53915135fbc3-scripts\") pod \"cinder-api-0\" (UID: \"477216f2-bd7e-4768-9a1f-53915135fbc3\") " pod="openstack/cinder-api-0" Nov 29 08:06:14 crc kubenswrapper[4795]: I1129 08:06:14.129525 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 29 08:06:14 crc kubenswrapper[4795]: I1129 08:06:14.305710 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="391cd433-a6c7-4004-8f82-6c18506aac37" path="/var/lib/kubelet/pods/391cd433-a6c7-4004-8f82-6c18506aac37/volumes" Nov 29 08:06:14 crc kubenswrapper[4795]: I1129 08:06:14.308435 4795 generic.go:334] "Generic (PLEG): container finished" podID="ac907bde-e98c-4e4f-aa14-e105a4fe885c" containerID="45a4c4aa94ff4f81822bbb57e1e819c08101d4d990fb49cd356761d8201d876a" exitCode=0 Nov 29 08:06:14 crc kubenswrapper[4795]: I1129 08:06:14.308515 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bdf57676-nn72s" event={"ID":"ac907bde-e98c-4e4f-aa14-e105a4fe885c","Type":"ContainerDied","Data":"45a4c4aa94ff4f81822bbb57e1e819c08101d4d990fb49cd356761d8201d876a"} Nov 29 08:06:14 crc kubenswrapper[4795]: I1129 08:06:14.355550 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b070ef9-63f2-4d88-b5e1-3de486b62c80","Type":"ContainerStarted","Data":"1b39c09129a7b6363e7a2c86b5a144dfdf2d91894055a17bfb0a859d950b2c61"} Nov 29 08:06:14 crc kubenswrapper[4795]: I1129 08:06:14.933631 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 29 08:06:14 crc kubenswrapper[4795]: I1129 08:06:14.985037 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5bdf57676-nn72s" Nov 29 08:06:15 crc kubenswrapper[4795]: I1129 08:06:15.096094 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ac907bde-e98c-4e4f-aa14-e105a4fe885c-config\") pod \"ac907bde-e98c-4e4f-aa14-e105a4fe885c\" (UID: \"ac907bde-e98c-4e4f-aa14-e105a4fe885c\") " Nov 29 08:06:15 crc kubenswrapper[4795]: I1129 08:06:15.097021 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fb8nn\" (UniqueName: \"kubernetes.io/projected/ac907bde-e98c-4e4f-aa14-e105a4fe885c-kube-api-access-fb8nn\") pod \"ac907bde-e98c-4e4f-aa14-e105a4fe885c\" (UID: \"ac907bde-e98c-4e4f-aa14-e105a4fe885c\") " Nov 29 08:06:15 crc kubenswrapper[4795]: I1129 08:06:15.097093 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ac907bde-e98c-4e4f-aa14-e105a4fe885c-httpd-config\") pod \"ac907bde-e98c-4e4f-aa14-e105a4fe885c\" (UID: \"ac907bde-e98c-4e4f-aa14-e105a4fe885c\") " Nov 29 08:06:15 crc kubenswrapper[4795]: I1129 08:06:15.097225 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac907bde-e98c-4e4f-aa14-e105a4fe885c-ovndb-tls-certs\") pod \"ac907bde-e98c-4e4f-aa14-e105a4fe885c\" (UID: \"ac907bde-e98c-4e4f-aa14-e105a4fe885c\") " Nov 29 08:06:15 crc kubenswrapper[4795]: I1129 08:06:15.097259 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac907bde-e98c-4e4f-aa14-e105a4fe885c-combined-ca-bundle\") pod \"ac907bde-e98c-4e4f-aa14-e105a4fe885c\" (UID: \"ac907bde-e98c-4e4f-aa14-e105a4fe885c\") " Nov 29 08:06:15 crc kubenswrapper[4795]: I1129 08:06:15.112914 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/ac907bde-e98c-4e4f-aa14-e105a4fe885c-kube-api-access-fb8nn" (OuterVolumeSpecName: "kube-api-access-fb8nn") pod "ac907bde-e98c-4e4f-aa14-e105a4fe885c" (UID: "ac907bde-e98c-4e4f-aa14-e105a4fe885c"). InnerVolumeSpecName "kube-api-access-fb8nn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:06:15 crc kubenswrapper[4795]: I1129 08:06:15.116677 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac907bde-e98c-4e4f-aa14-e105a4fe885c-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "ac907bde-e98c-4e4f-aa14-e105a4fe885c" (UID: "ac907bde-e98c-4e4f-aa14-e105a4fe885c"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:06:15 crc kubenswrapper[4795]: I1129 08:06:15.182716 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac907bde-e98c-4e4f-aa14-e105a4fe885c-config" (OuterVolumeSpecName: "config") pod "ac907bde-e98c-4e4f-aa14-e105a4fe885c" (UID: "ac907bde-e98c-4e4f-aa14-e105a4fe885c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:06:15 crc kubenswrapper[4795]: I1129 08:06:15.208252 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fb8nn\" (UniqueName: \"kubernetes.io/projected/ac907bde-e98c-4e4f-aa14-e105a4fe885c-kube-api-access-fb8nn\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:15 crc kubenswrapper[4795]: I1129 08:06:15.208286 4795 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ac907bde-e98c-4e4f-aa14-e105a4fe885c-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:15 crc kubenswrapper[4795]: I1129 08:06:15.208298 4795 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ac907bde-e98c-4e4f-aa14-e105a4fe885c-config\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:15 crc kubenswrapper[4795]: I1129 08:06:15.211774 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac907bde-e98c-4e4f-aa14-e105a4fe885c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac907bde-e98c-4e4f-aa14-e105a4fe885c" (UID: "ac907bde-e98c-4e4f-aa14-e105a4fe885c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:06:15 crc kubenswrapper[4795]: I1129 08:06:15.286342 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac907bde-e98c-4e4f-aa14-e105a4fe885c-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "ac907bde-e98c-4e4f-aa14-e105a4fe885c" (UID: "ac907bde-e98c-4e4f-aa14-e105a4fe885c"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:06:15 crc kubenswrapper[4795]: I1129 08:06:15.310613 4795 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac907bde-e98c-4e4f-aa14-e105a4fe885c-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:15 crc kubenswrapper[4795]: I1129 08:06:15.310651 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac907bde-e98c-4e4f-aa14-e105a4fe885c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:15 crc kubenswrapper[4795]: I1129 08:06:15.394314 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"477216f2-bd7e-4768-9a1f-53915135fbc3","Type":"ContainerStarted","Data":"7bb0548df5b6bf63d66d323db4a34f52fc25e8fdcdd842de2e5618999c3742b3"} Nov 29 08:06:15 crc kubenswrapper[4795]: I1129 08:06:15.428885 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bdf57676-nn72s" event={"ID":"ac907bde-e98c-4e4f-aa14-e105a4fe885c","Type":"ContainerDied","Data":"193e44cb247772cb177a8b1373506dea540472cd63fac2dddb52c1a27bfe046d"} Nov 29 08:06:15 crc kubenswrapper[4795]: I1129 08:06:15.428945 4795 scope.go:117] "RemoveContainer" containerID="b502aeb549c6281a4c29dfce7ff4bb46e86bfaf86e8bd5850874871365e44283" Nov 29 08:06:15 crc kubenswrapper[4795]: I1129 08:06:15.429113 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5bdf57676-nn72s" Nov 29 08:06:15 crc kubenswrapper[4795]: I1129 08:06:15.446104 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b070ef9-63f2-4d88-b5e1-3de486b62c80","Type":"ContainerStarted","Data":"212c23e9602e99a4f83a1904309fbfbb8e7afe58f0d9f77ef1755e97f8111710"} Nov 29 08:06:15 crc kubenswrapper[4795]: I1129 08:06:15.490752 4795 scope.go:117] "RemoveContainer" containerID="45a4c4aa94ff4f81822bbb57e1e819c08101d4d990fb49cd356761d8201d876a" Nov 29 08:06:15 crc kubenswrapper[4795]: I1129 08:06:15.536680 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5bdf57676-nn72s"] Nov 29 08:06:15 crc kubenswrapper[4795]: I1129 08:06:15.547420 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5bdf57676-nn72s"] Nov 29 08:06:16 crc kubenswrapper[4795]: I1129 08:06:16.330688 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac907bde-e98c-4e4f-aa14-e105a4fe885c" path="/var/lib/kubelet/pods/ac907bde-e98c-4e4f-aa14-e105a4fe885c/volumes" Nov 29 08:06:16 crc kubenswrapper[4795]: I1129 08:06:16.556409 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"477216f2-bd7e-4768-9a1f-53915135fbc3","Type":"ContainerStarted","Data":"b555ebb4ded6634aa1ff2651ca07e78e46628cacbcfb71c0fcd0b2658a4432c0"} Nov 29 08:06:16 crc kubenswrapper[4795]: I1129 08:06:16.633188 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-dc94996-zmk9m" Nov 29 08:06:16 crc kubenswrapper[4795]: I1129 08:06:16.826157 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-dc94996-zmk9m" Nov 29 08:06:16 crc kubenswrapper[4795]: I1129 08:06:16.913613 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5c8566458-jwt4m"] Nov 29 08:06:16 crc kubenswrapper[4795]: I1129 08:06:16.914210 
4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5c8566458-jwt4m" podUID="39144d64-93f5-4c75-8cd3-39d6b17f08d6" containerName="barbican-api-log" containerID="cri-o://548f34c2652851ca8b5e064e016c8e5abc6aa699b8a7e546ffbe69e24e732616" gracePeriod=30 Nov 29 08:06:16 crc kubenswrapper[4795]: I1129 08:06:16.916346 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5c8566458-jwt4m" podUID="39144d64-93f5-4c75-8cd3-39d6b17f08d6" containerName="barbican-api" containerID="cri-o://f487adc56134b7e9888c25a49a74e448e87beaf406191e404aa4540ab96cf4d5" gracePeriod=30 Nov 29 08:06:17 crc kubenswrapper[4795]: I1129 08:06:17.316511 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 29 08:06:17 crc kubenswrapper[4795]: I1129 08:06:17.465740 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bb4fc677f-hxtm7" Nov 29 08:06:17 crc kubenswrapper[4795]: I1129 08:06:17.569263 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-m6xrq"] Nov 29 08:06:17 crc kubenswrapper[4795]: I1129 08:06:17.569544 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc5c4795-m6xrq" podUID="fe89b8b3-85af-4550-a246-b092a5f2f233" containerName="dnsmasq-dns" containerID="cri-o://802c86dfd3edba72a7085b4fe4da96b0ee8206da13562600448cafcc2a964b84" gracePeriod=10 Nov 29 08:06:17 crc kubenswrapper[4795]: I1129 08:06:17.628868 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"477216f2-bd7e-4768-9a1f-53915135fbc3","Type":"ContainerStarted","Data":"496fa7cfab4255e990bcb66ab87f6e28eb1bedeff5cd4e129a94618d8afb8acf"} Nov 29 08:06:17 crc kubenswrapper[4795]: I1129 08:06:17.630189 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" 
Nov 29 08:06:17 crc kubenswrapper[4795]: I1129 08:06:17.670875 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b070ef9-63f2-4d88-b5e1-3de486b62c80","Type":"ContainerStarted","Data":"fd8f64cc7b0d15aeac63e310457d432625560f43f404e4b7c6e56184ba77042a"} Nov 29 08:06:17 crc kubenswrapper[4795]: I1129 08:06:17.672460 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 29 08:06:17 crc kubenswrapper[4795]: I1129 08:06:17.680841 4795 generic.go:334] "Generic (PLEG): container finished" podID="39144d64-93f5-4c75-8cd3-39d6b17f08d6" containerID="548f34c2652851ca8b5e064e016c8e5abc6aa699b8a7e546ffbe69e24e732616" exitCode=143 Nov 29 08:06:17 crc kubenswrapper[4795]: I1129 08:06:17.681667 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5c8566458-jwt4m" event={"ID":"39144d64-93f5-4c75-8cd3-39d6b17f08d6","Type":"ContainerDied","Data":"548f34c2652851ca8b5e064e016c8e5abc6aa699b8a7e546ffbe69e24e732616"} Nov 29 08:06:17 crc kubenswrapper[4795]: I1129 08:06:17.713504 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.7134862250000005 podStartE2EDuration="4.713486225s" podCreationTimestamp="2025-11-29 08:06:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:06:17.659292619 +0000 UTC m=+1623.634868409" watchObservedRunningTime="2025-11-29 08:06:17.713486225 +0000 UTC m=+1623.689062015" Nov 29 08:06:17 crc kubenswrapper[4795]: I1129 08:06:17.741334 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.910891435 podStartE2EDuration="8.741312624s" podCreationTimestamp="2025-11-29 08:06:09 +0000 UTC" firstStartedPulling="2025-11-29 08:06:11.341449069 +0000 UTC m=+1617.317024859" lastFinishedPulling="2025-11-29 
08:06:16.171870258 +0000 UTC m=+1622.147446048" observedRunningTime="2025-11-29 08:06:17.717863299 +0000 UTC m=+1623.693439099" watchObservedRunningTime="2025-11-29 08:06:17.741312624 +0000 UTC m=+1623.716888414" Nov 29 08:06:17 crc kubenswrapper[4795]: I1129 08:06:17.874419 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 29 08:06:17 crc kubenswrapper[4795]: I1129 08:06:17.987294 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 08:06:18 crc kubenswrapper[4795]: I1129 08:06:18.500460 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-m6xrq" Nov 29 08:06:18 crc kubenswrapper[4795]: I1129 08:06:18.652979 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe89b8b3-85af-4550-a246-b092a5f2f233-dns-svc\") pod \"fe89b8b3-85af-4550-a246-b092a5f2f233\" (UID: \"fe89b8b3-85af-4550-a246-b092a5f2f233\") " Nov 29 08:06:18 crc kubenswrapper[4795]: I1129 08:06:18.653303 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe89b8b3-85af-4550-a246-b092a5f2f233-dns-swift-storage-0\") pod \"fe89b8b3-85af-4550-a246-b092a5f2f233\" (UID: \"fe89b8b3-85af-4550-a246-b092a5f2f233\") " Nov 29 08:06:18 crc kubenswrapper[4795]: I1129 08:06:18.653545 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe89b8b3-85af-4550-a246-b092a5f2f233-config\") pod \"fe89b8b3-85af-4550-a246-b092a5f2f233\" (UID: \"fe89b8b3-85af-4550-a246-b092a5f2f233\") " Nov 29 08:06:18 crc kubenswrapper[4795]: I1129 08:06:18.653756 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/fe89b8b3-85af-4550-a246-b092a5f2f233-ovsdbserver-nb\") pod \"fe89b8b3-85af-4550-a246-b092a5f2f233\" (UID: \"fe89b8b3-85af-4550-a246-b092a5f2f233\") " Nov 29 08:06:18 crc kubenswrapper[4795]: I1129 08:06:18.660871 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe89b8b3-85af-4550-a246-b092a5f2f233-ovsdbserver-sb\") pod \"fe89b8b3-85af-4550-a246-b092a5f2f233\" (UID: \"fe89b8b3-85af-4550-a246-b092a5f2f233\") " Nov 29 08:06:18 crc kubenswrapper[4795]: I1129 08:06:18.661037 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkplf\" (UniqueName: \"kubernetes.io/projected/fe89b8b3-85af-4550-a246-b092a5f2f233-kube-api-access-xkplf\") pod \"fe89b8b3-85af-4550-a246-b092a5f2f233\" (UID: \"fe89b8b3-85af-4550-a246-b092a5f2f233\") " Nov 29 08:06:18 crc kubenswrapper[4795]: I1129 08:06:18.682015 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe89b8b3-85af-4550-a246-b092a5f2f233-kube-api-access-xkplf" (OuterVolumeSpecName: "kube-api-access-xkplf") pod "fe89b8b3-85af-4550-a246-b092a5f2f233" (UID: "fe89b8b3-85af-4550-a246-b092a5f2f233"). InnerVolumeSpecName "kube-api-access-xkplf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:06:18 crc kubenswrapper[4795]: I1129 08:06:18.722581 4795 generic.go:334] "Generic (PLEG): container finished" podID="fe89b8b3-85af-4550-a246-b092a5f2f233" containerID="802c86dfd3edba72a7085b4fe4da96b0ee8206da13562600448cafcc2a964b84" exitCode=0 Nov 29 08:06:18 crc kubenswrapper[4795]: I1129 08:06:18.723850 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-m6xrq" Nov 29 08:06:18 crc kubenswrapper[4795]: I1129 08:06:18.723944 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-m6xrq" event={"ID":"fe89b8b3-85af-4550-a246-b092a5f2f233","Type":"ContainerDied","Data":"802c86dfd3edba72a7085b4fe4da96b0ee8206da13562600448cafcc2a964b84"} Nov 29 08:06:18 crc kubenswrapper[4795]: I1129 08:06:18.723979 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-m6xrq" event={"ID":"fe89b8b3-85af-4550-a246-b092a5f2f233","Type":"ContainerDied","Data":"aa4d8c22036cc597ada6aecc0b49c9de5a52991757590cdec3601ec44b5c1f31"} Nov 29 08:06:18 crc kubenswrapper[4795]: I1129 08:06:18.723998 4795 scope.go:117] "RemoveContainer" containerID="802c86dfd3edba72a7085b4fe4da96b0ee8206da13562600448cafcc2a964b84" Nov 29 08:06:18 crc kubenswrapper[4795]: I1129 08:06:18.725178 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="eaf38aac-d198-4ddf-8014-ad9c598601ae" containerName="probe" containerID="cri-o://9e1daf68c4a37900b3a214484ab6491c0a68c9e6467b68bb828f1fb110187eb7" gracePeriod=30 Nov 29 08:06:18 crc kubenswrapper[4795]: I1129 08:06:18.725018 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="eaf38aac-d198-4ddf-8014-ad9c598601ae" containerName="cinder-scheduler" containerID="cri-o://3ec6fd0db9e17849ba639d7a4a0906d5cf03cec6c036c99772f206ff37d032b9" gracePeriod=30 Nov 29 08:06:18 crc kubenswrapper[4795]: I1129 08:06:18.781628 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkplf\" (UniqueName: \"kubernetes.io/projected/fe89b8b3-85af-4550-a246-b092a5f2f233-kube-api-access-xkplf\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:18 crc kubenswrapper[4795]: I1129 08:06:18.789622 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/fe89b8b3-85af-4550-a246-b092a5f2f233-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fe89b8b3-85af-4550-a246-b092a5f2f233" (UID: "fe89b8b3-85af-4550-a246-b092a5f2f233"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:06:18 crc kubenswrapper[4795]: I1129 08:06:18.881553 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe89b8b3-85af-4550-a246-b092a5f2f233-config" (OuterVolumeSpecName: "config") pod "fe89b8b3-85af-4550-a246-b092a5f2f233" (UID: "fe89b8b3-85af-4550-a246-b092a5f2f233"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:06:18 crc kubenswrapper[4795]: I1129 08:06:18.884289 4795 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe89b8b3-85af-4550-a246-b092a5f2f233-config\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:18 crc kubenswrapper[4795]: I1129 08:06:18.884324 4795 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe89b8b3-85af-4550-a246-b092a5f2f233-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:18 crc kubenswrapper[4795]: I1129 08:06:18.907456 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe89b8b3-85af-4550-a246-b092a5f2f233-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fe89b8b3-85af-4550-a246-b092a5f2f233" (UID: "fe89b8b3-85af-4550-a246-b092a5f2f233"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:06:18 crc kubenswrapper[4795]: I1129 08:06:18.959577 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe89b8b3-85af-4550-a246-b092a5f2f233-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fe89b8b3-85af-4550-a246-b092a5f2f233" (UID: "fe89b8b3-85af-4550-a246-b092a5f2f233"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:06:19 crc kubenswrapper[4795]: I1129 08:06:19.018747 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe89b8b3-85af-4550-a246-b092a5f2f233-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fe89b8b3-85af-4550-a246-b092a5f2f233" (UID: "fe89b8b3-85af-4550-a246-b092a5f2f233"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:06:19 crc kubenswrapper[4795]: I1129 08:06:19.044360 4795 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe89b8b3-85af-4550-a246-b092a5f2f233-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:19 crc kubenswrapper[4795]: I1129 08:06:19.044402 4795 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe89b8b3-85af-4550-a246-b092a5f2f233-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:19 crc kubenswrapper[4795]: I1129 08:06:19.044420 4795 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe89b8b3-85af-4550-a246-b092a5f2f233-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:19 crc kubenswrapper[4795]: I1129 08:06:19.117678 4795 scope.go:117] "RemoveContainer" containerID="7bc0a83de9bce7740ee41ff52910e667c268e267857c79e5f471da536bc145f9" Nov 29 08:06:19 crc kubenswrapper[4795]: I1129 08:06:19.164665 4795 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-m6xrq"] Nov 29 08:06:19 crc kubenswrapper[4795]: I1129 08:06:19.184354 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-m6xrq"] Nov 29 08:06:19 crc kubenswrapper[4795]: I1129 08:06:19.242383 4795 scope.go:117] "RemoveContainer" containerID="802c86dfd3edba72a7085b4fe4da96b0ee8206da13562600448cafcc2a964b84" Nov 29 08:06:19 crc kubenswrapper[4795]: E1129 08:06:19.242804 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"802c86dfd3edba72a7085b4fe4da96b0ee8206da13562600448cafcc2a964b84\": container with ID starting with 802c86dfd3edba72a7085b4fe4da96b0ee8206da13562600448cafcc2a964b84 not found: ID does not exist" containerID="802c86dfd3edba72a7085b4fe4da96b0ee8206da13562600448cafcc2a964b84" Nov 29 08:06:19 crc kubenswrapper[4795]: I1129 08:06:19.242845 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"802c86dfd3edba72a7085b4fe4da96b0ee8206da13562600448cafcc2a964b84"} err="failed to get container status \"802c86dfd3edba72a7085b4fe4da96b0ee8206da13562600448cafcc2a964b84\": rpc error: code = NotFound desc = could not find container \"802c86dfd3edba72a7085b4fe4da96b0ee8206da13562600448cafcc2a964b84\": container with ID starting with 802c86dfd3edba72a7085b4fe4da96b0ee8206da13562600448cafcc2a964b84 not found: ID does not exist" Nov 29 08:06:19 crc kubenswrapper[4795]: I1129 08:06:19.242870 4795 scope.go:117] "RemoveContainer" containerID="7bc0a83de9bce7740ee41ff52910e667c268e267857c79e5f471da536bc145f9" Nov 29 08:06:19 crc kubenswrapper[4795]: E1129 08:06:19.243085 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bc0a83de9bce7740ee41ff52910e667c268e267857c79e5f471da536bc145f9\": container with ID starting with 
7bc0a83de9bce7740ee41ff52910e667c268e267857c79e5f471da536bc145f9 not found: ID does not exist" containerID="7bc0a83de9bce7740ee41ff52910e667c268e267857c79e5f471da536bc145f9" Nov 29 08:06:19 crc kubenswrapper[4795]: I1129 08:06:19.243113 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bc0a83de9bce7740ee41ff52910e667c268e267857c79e5f471da536bc145f9"} err="failed to get container status \"7bc0a83de9bce7740ee41ff52910e667c268e267857c79e5f471da536bc145f9\": rpc error: code = NotFound desc = could not find container \"7bc0a83de9bce7740ee41ff52910e667c268e267857c79e5f471da536bc145f9\": container with ID starting with 7bc0a83de9bce7740ee41ff52910e667c268e267857c79e5f471da536bc145f9 not found: ID does not exist" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.165754 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5c8566458-jwt4m" podUID="39144d64-93f5-4c75-8cd3-39d6b17f08d6" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.199:9311/healthcheck\": read tcp 10.217.0.2:53492->10.217.0.199:9311: read: connection reset by peer" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.165891 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5c8566458-jwt4m" podUID="39144d64-93f5-4c75-8cd3-39d6b17f08d6" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.199:9311/healthcheck\": read tcp 10.217.0.2:53498->10.217.0.199:9311: read: connection reset by peer" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.297793 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe89b8b3-85af-4550-a246-b092a5f2f233" path="/var/lib/kubelet/pods/fe89b8b3-85af-4550-a246-b092a5f2f233/volumes" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.366055 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.484988 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaf38aac-d198-4ddf-8014-ad9c598601ae-combined-ca-bundle\") pod \"eaf38aac-d198-4ddf-8014-ad9c598601ae\" (UID: \"eaf38aac-d198-4ddf-8014-ad9c598601ae\") " Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.485036 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaf38aac-d198-4ddf-8014-ad9c598601ae-config-data\") pod \"eaf38aac-d198-4ddf-8014-ad9c598601ae\" (UID: \"eaf38aac-d198-4ddf-8014-ad9c598601ae\") " Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.485123 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eaf38aac-d198-4ddf-8014-ad9c598601ae-config-data-custom\") pod \"eaf38aac-d198-4ddf-8014-ad9c598601ae\" (UID: \"eaf38aac-d198-4ddf-8014-ad9c598601ae\") " Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.485152 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dd2tc\" (UniqueName: \"kubernetes.io/projected/eaf38aac-d198-4ddf-8014-ad9c598601ae-kube-api-access-dd2tc\") pod \"eaf38aac-d198-4ddf-8014-ad9c598601ae\" (UID: \"eaf38aac-d198-4ddf-8014-ad9c598601ae\") " Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.485177 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eaf38aac-d198-4ddf-8014-ad9c598601ae-etc-machine-id\") pod \"eaf38aac-d198-4ddf-8014-ad9c598601ae\" (UID: \"eaf38aac-d198-4ddf-8014-ad9c598601ae\") " Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.485348 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/eaf38aac-d198-4ddf-8014-ad9c598601ae-scripts\") pod \"eaf38aac-d198-4ddf-8014-ad9c598601ae\" (UID: \"eaf38aac-d198-4ddf-8014-ad9c598601ae\") " Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.487813 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eaf38aac-d198-4ddf-8014-ad9c598601ae-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "eaf38aac-d198-4ddf-8014-ad9c598601ae" (UID: "eaf38aac-d198-4ddf-8014-ad9c598601ae"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.494788 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaf38aac-d198-4ddf-8014-ad9c598601ae-scripts" (OuterVolumeSpecName: "scripts") pod "eaf38aac-d198-4ddf-8014-ad9c598601ae" (UID: "eaf38aac-d198-4ddf-8014-ad9c598601ae"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.495901 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaf38aac-d198-4ddf-8014-ad9c598601ae-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "eaf38aac-d198-4ddf-8014-ad9c598601ae" (UID: "eaf38aac-d198-4ddf-8014-ad9c598601ae"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.505844 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaf38aac-d198-4ddf-8014-ad9c598601ae-kube-api-access-dd2tc" (OuterVolumeSpecName: "kube-api-access-dd2tc") pod "eaf38aac-d198-4ddf-8014-ad9c598601ae" (UID: "eaf38aac-d198-4ddf-8014-ad9c598601ae"). InnerVolumeSpecName "kube-api-access-dd2tc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.588647 4795 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eaf38aac-d198-4ddf-8014-ad9c598601ae-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.588697 4795 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eaf38aac-d198-4ddf-8014-ad9c598601ae-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.588710 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dd2tc\" (UniqueName: \"kubernetes.io/projected/eaf38aac-d198-4ddf-8014-ad9c598601ae-kube-api-access-dd2tc\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.588721 4795 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eaf38aac-d198-4ddf-8014-ad9c598601ae-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.596740 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaf38aac-d198-4ddf-8014-ad9c598601ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eaf38aac-d198-4ddf-8014-ad9c598601ae" (UID: "eaf38aac-d198-4ddf-8014-ad9c598601ae"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.691158 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaf38aac-d198-4ddf-8014-ad9c598601ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.694858 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaf38aac-d198-4ddf-8014-ad9c598601ae-config-data" (OuterVolumeSpecName: "config-data") pod "eaf38aac-d198-4ddf-8014-ad9c598601ae" (UID: "eaf38aac-d198-4ddf-8014-ad9c598601ae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.749016 4795 generic.go:334] "Generic (PLEG): container finished" podID="eaf38aac-d198-4ddf-8014-ad9c598601ae" containerID="9e1daf68c4a37900b3a214484ab6491c0a68c9e6467b68bb828f1fb110187eb7" exitCode=0 Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.749056 4795 generic.go:334] "Generic (PLEG): container finished" podID="eaf38aac-d198-4ddf-8014-ad9c598601ae" containerID="3ec6fd0db9e17849ba639d7a4a0906d5cf03cec6c036c99772f206ff37d032b9" exitCode=0 Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.749088 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.749152 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"eaf38aac-d198-4ddf-8014-ad9c598601ae","Type":"ContainerDied","Data":"9e1daf68c4a37900b3a214484ab6491c0a68c9e6467b68bb828f1fb110187eb7"} Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.749183 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"eaf38aac-d198-4ddf-8014-ad9c598601ae","Type":"ContainerDied","Data":"3ec6fd0db9e17849ba639d7a4a0906d5cf03cec6c036c99772f206ff37d032b9"} Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.749193 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"eaf38aac-d198-4ddf-8014-ad9c598601ae","Type":"ContainerDied","Data":"17c7c68183bf149590f2ed8a294357aa3b1c106c7cb78736bdfc02dcba1de2a8"} Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.749209 4795 scope.go:117] "RemoveContainer" containerID="9e1daf68c4a37900b3a214484ab6491c0a68c9e6467b68bb828f1fb110187eb7" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.791436 4795 generic.go:334] "Generic (PLEG): container finished" podID="39144d64-93f5-4c75-8cd3-39d6b17f08d6" containerID="f487adc56134b7e9888c25a49a74e448e87beaf406191e404aa4540ab96cf4d5" exitCode=0 Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.791492 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5c8566458-jwt4m" event={"ID":"39144d64-93f5-4c75-8cd3-39d6b17f08d6","Type":"ContainerDied","Data":"f487adc56134b7e9888c25a49a74e448e87beaf406191e404aa4540ab96cf4d5"} Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.791547 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5c8566458-jwt4m" 
event={"ID":"39144d64-93f5-4c75-8cd3-39d6b17f08d6","Type":"ContainerDied","Data":"fbfd3c7a09f05a3cefd5e35f9592eab140e7abdff6b73b079da97f0c47b02269"} Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.791560 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbfd3c7a09f05a3cefd5e35f9592eab140e7abdff6b73b079da97f0c47b02269" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.795553 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaf38aac-d198-4ddf-8014-ad9c598601ae-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.809132 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5c8566458-jwt4m" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.841961 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.842220 4795 scope.go:117] "RemoveContainer" containerID="3ec6fd0db9e17849ba639d7a4a0906d5cf03cec6c036c99772f206ff37d032b9" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.852449 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.882478 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 08:06:20 crc kubenswrapper[4795]: E1129 08:06:20.883008 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac907bde-e98c-4e4f-aa14-e105a4fe885c" containerName="neutron-httpd" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.883026 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac907bde-e98c-4e4f-aa14-e105a4fe885c" containerName="neutron-httpd" Nov 29 08:06:20 crc kubenswrapper[4795]: E1129 08:06:20.883060 4795 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ac907bde-e98c-4e4f-aa14-e105a4fe885c" containerName="neutron-api" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.883069 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac907bde-e98c-4e4f-aa14-e105a4fe885c" containerName="neutron-api" Nov 29 08:06:20 crc kubenswrapper[4795]: E1129 08:06:20.883087 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39144d64-93f5-4c75-8cd3-39d6b17f08d6" containerName="barbican-api" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.883093 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="39144d64-93f5-4c75-8cd3-39d6b17f08d6" containerName="barbican-api" Nov 29 08:06:20 crc kubenswrapper[4795]: E1129 08:06:20.883105 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe89b8b3-85af-4550-a246-b092a5f2f233" containerName="dnsmasq-dns" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.883111 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe89b8b3-85af-4550-a246-b092a5f2f233" containerName="dnsmasq-dns" Nov 29 08:06:20 crc kubenswrapper[4795]: E1129 08:06:20.883121 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaf38aac-d198-4ddf-8014-ad9c598601ae" containerName="probe" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.883126 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaf38aac-d198-4ddf-8014-ad9c598601ae" containerName="probe" Nov 29 08:06:20 crc kubenswrapper[4795]: E1129 08:06:20.883137 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39144d64-93f5-4c75-8cd3-39d6b17f08d6" containerName="barbican-api-log" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.883143 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="39144d64-93f5-4c75-8cd3-39d6b17f08d6" containerName="barbican-api-log" Nov 29 08:06:20 crc kubenswrapper[4795]: E1129 08:06:20.883151 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe89b8b3-85af-4550-a246-b092a5f2f233" 
containerName="init" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.883156 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe89b8b3-85af-4550-a246-b092a5f2f233" containerName="init" Nov 29 08:06:20 crc kubenswrapper[4795]: E1129 08:06:20.883172 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaf38aac-d198-4ddf-8014-ad9c598601ae" containerName="cinder-scheduler" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.883178 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaf38aac-d198-4ddf-8014-ad9c598601ae" containerName="cinder-scheduler" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.883382 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaf38aac-d198-4ddf-8014-ad9c598601ae" containerName="probe" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.883393 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="39144d64-93f5-4c75-8cd3-39d6b17f08d6" containerName="barbican-api-log" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.883407 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="39144d64-93f5-4c75-8cd3-39d6b17f08d6" containerName="barbican-api" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.883429 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaf38aac-d198-4ddf-8014-ad9c598601ae" containerName="cinder-scheduler" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.883443 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac907bde-e98c-4e4f-aa14-e105a4fe885c" containerName="neutron-api" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.883453 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe89b8b3-85af-4550-a246-b092a5f2f233" containerName="dnsmasq-dns" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.883462 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac907bde-e98c-4e4f-aa14-e105a4fe885c" 
containerName="neutron-httpd" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.891810 4795 scope.go:117] "RemoveContainer" containerID="9e1daf68c4a37900b3a214484ab6491c0a68c9e6467b68bb828f1fb110187eb7" Nov 29 08:06:20 crc kubenswrapper[4795]: E1129 08:06:20.896240 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e1daf68c4a37900b3a214484ab6491c0a68c9e6467b68bb828f1fb110187eb7\": container with ID starting with 9e1daf68c4a37900b3a214484ab6491c0a68c9e6467b68bb828f1fb110187eb7 not found: ID does not exist" containerID="9e1daf68c4a37900b3a214484ab6491c0a68c9e6467b68bb828f1fb110187eb7" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.896287 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e1daf68c4a37900b3a214484ab6491c0a68c9e6467b68bb828f1fb110187eb7"} err="failed to get container status \"9e1daf68c4a37900b3a214484ab6491c0a68c9e6467b68bb828f1fb110187eb7\": rpc error: code = NotFound desc = could not find container \"9e1daf68c4a37900b3a214484ab6491c0a68c9e6467b68bb828f1fb110187eb7\": container with ID starting with 9e1daf68c4a37900b3a214484ab6491c0a68c9e6467b68bb828f1fb110187eb7 not found: ID does not exist" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.896321 4795 scope.go:117] "RemoveContainer" containerID="3ec6fd0db9e17849ba639d7a4a0906d5cf03cec6c036c99772f206ff37d032b9" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.898056 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dt9l\" (UniqueName: \"kubernetes.io/projected/39144d64-93f5-4c75-8cd3-39d6b17f08d6-kube-api-access-8dt9l\") pod \"39144d64-93f5-4c75-8cd3-39d6b17f08d6\" (UID: \"39144d64-93f5-4c75-8cd3-39d6b17f08d6\") " Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.898140 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/39144d64-93f5-4c75-8cd3-39d6b17f08d6-config-data\") pod \"39144d64-93f5-4c75-8cd3-39d6b17f08d6\" (UID: \"39144d64-93f5-4c75-8cd3-39d6b17f08d6\") " Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.898174 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39144d64-93f5-4c75-8cd3-39d6b17f08d6-logs\") pod \"39144d64-93f5-4c75-8cd3-39d6b17f08d6\" (UID: \"39144d64-93f5-4c75-8cd3-39d6b17f08d6\") " Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.898327 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39144d64-93f5-4c75-8cd3-39d6b17f08d6-combined-ca-bundle\") pod \"39144d64-93f5-4c75-8cd3-39d6b17f08d6\" (UID: \"39144d64-93f5-4c75-8cd3-39d6b17f08d6\") " Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.898367 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39144d64-93f5-4c75-8cd3-39d6b17f08d6-config-data-custom\") pod \"39144d64-93f5-4c75-8cd3-39d6b17f08d6\" (UID: \"39144d64-93f5-4c75-8cd3-39d6b17f08d6\") " Nov 29 08:06:20 crc kubenswrapper[4795]: E1129 08:06:20.900346 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ec6fd0db9e17849ba639d7a4a0906d5cf03cec6c036c99772f206ff37d032b9\": container with ID starting with 3ec6fd0db9e17849ba639d7a4a0906d5cf03cec6c036c99772f206ff37d032b9 not found: ID does not exist" containerID="3ec6fd0db9e17849ba639d7a4a0906d5cf03cec6c036c99772f206ff37d032b9" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.900414 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ec6fd0db9e17849ba639d7a4a0906d5cf03cec6c036c99772f206ff37d032b9"} err="failed to get container status 
\"3ec6fd0db9e17849ba639d7a4a0906d5cf03cec6c036c99772f206ff37d032b9\": rpc error: code = NotFound desc = could not find container \"3ec6fd0db9e17849ba639d7a4a0906d5cf03cec6c036c99772f206ff37d032b9\": container with ID starting with 3ec6fd0db9e17849ba639d7a4a0906d5cf03cec6c036c99772f206ff37d032b9 not found: ID does not exist" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.900441 4795 scope.go:117] "RemoveContainer" containerID="9e1daf68c4a37900b3a214484ab6491c0a68c9e6467b68bb828f1fb110187eb7" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.901142 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39144d64-93f5-4c75-8cd3-39d6b17f08d6-logs" (OuterVolumeSpecName: "logs") pod "39144d64-93f5-4c75-8cd3-39d6b17f08d6" (UID: "39144d64-93f5-4c75-8cd3-39d6b17f08d6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.902753 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e1daf68c4a37900b3a214484ab6491c0a68c9e6467b68bb828f1fb110187eb7"} err="failed to get container status \"9e1daf68c4a37900b3a214484ab6491c0a68c9e6467b68bb828f1fb110187eb7\": rpc error: code = NotFound desc = could not find container \"9e1daf68c4a37900b3a214484ab6491c0a68c9e6467b68bb828f1fb110187eb7\": container with ID starting with 9e1daf68c4a37900b3a214484ab6491c0a68c9e6467b68bb828f1fb110187eb7 not found: ID does not exist" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.902803 4795 scope.go:117] "RemoveContainer" containerID="3ec6fd0db9e17849ba639d7a4a0906d5cf03cec6c036c99772f206ff37d032b9" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.903327 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.903331 4795 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3ec6fd0db9e17849ba639d7a4a0906d5cf03cec6c036c99772f206ff37d032b9"} err="failed to get container status \"3ec6fd0db9e17849ba639d7a4a0906d5cf03cec6c036c99772f206ff37d032b9\": rpc error: code = NotFound desc = could not find container \"3ec6fd0db9e17849ba639d7a4a0906d5cf03cec6c036c99772f206ff37d032b9\": container with ID starting with 3ec6fd0db9e17849ba639d7a4a0906d5cf03cec6c036c99772f206ff37d032b9 not found: ID does not exist" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.903421 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.924885 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39144d64-93f5-4c75-8cd3-39d6b17f08d6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "39144d64-93f5-4c75-8cd3-39d6b17f08d6" (UID: "39144d64-93f5-4c75-8cd3-39d6b17f08d6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.925987 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39144d64-93f5-4c75-8cd3-39d6b17f08d6-kube-api-access-8dt9l" (OuterVolumeSpecName: "kube-api-access-8dt9l") pod "39144d64-93f5-4c75-8cd3-39d6b17f08d6" (UID: "39144d64-93f5-4c75-8cd3-39d6b17f08d6"). InnerVolumeSpecName "kube-api-access-8dt9l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.927695 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 29 08:06:20 crc kubenswrapper[4795]: I1129 08:06:20.992739 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39144d64-93f5-4c75-8cd3-39d6b17f08d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39144d64-93f5-4c75-8cd3-39d6b17f08d6" (UID: "39144d64-93f5-4c75-8cd3-39d6b17f08d6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:06:21 crc kubenswrapper[4795]: I1129 08:06:21.007200 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r7v2\" (UniqueName: \"kubernetes.io/projected/945f619e-60af-4c36-8ec9-a98d54c15276-kube-api-access-5r7v2\") pod \"cinder-scheduler-0\" (UID: \"945f619e-60af-4c36-8ec9-a98d54c15276\") " pod="openstack/cinder-scheduler-0" Nov 29 08:06:21 crc kubenswrapper[4795]: I1129 08:06:21.007261 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/945f619e-60af-4c36-8ec9-a98d54c15276-config-data\") pod \"cinder-scheduler-0\" (UID: \"945f619e-60af-4c36-8ec9-a98d54c15276\") " pod="openstack/cinder-scheduler-0" Nov 29 08:06:21 crc kubenswrapper[4795]: I1129 08:06:21.007314 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/945f619e-60af-4c36-8ec9-a98d54c15276-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"945f619e-60af-4c36-8ec9-a98d54c15276\") " pod="openstack/cinder-scheduler-0" Nov 29 08:06:21 crc kubenswrapper[4795]: I1129 08:06:21.007523 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/945f619e-60af-4c36-8ec9-a98d54c15276-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"945f619e-60af-4c36-8ec9-a98d54c15276\") " pod="openstack/cinder-scheduler-0" Nov 29 08:06:21 crc kubenswrapper[4795]: I1129 08:06:21.007551 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/945f619e-60af-4c36-8ec9-a98d54c15276-scripts\") pod \"cinder-scheduler-0\" (UID: \"945f619e-60af-4c36-8ec9-a98d54c15276\") " pod="openstack/cinder-scheduler-0" Nov 29 08:06:21 crc kubenswrapper[4795]: I1129 08:06:21.007608 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/945f619e-60af-4c36-8ec9-a98d54c15276-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"945f619e-60af-4c36-8ec9-a98d54c15276\") " pod="openstack/cinder-scheduler-0" Nov 29 08:06:21 crc kubenswrapper[4795]: I1129 08:06:21.007703 4795 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39144d64-93f5-4c75-8cd3-39d6b17f08d6-logs\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:21 crc kubenswrapper[4795]: I1129 08:06:21.007722 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39144d64-93f5-4c75-8cd3-39d6b17f08d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:21 crc kubenswrapper[4795]: I1129 08:06:21.007803 4795 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39144d64-93f5-4c75-8cd3-39d6b17f08d6-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:21 crc kubenswrapper[4795]: I1129 08:06:21.007816 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dt9l\" (UniqueName: 
\"kubernetes.io/projected/39144d64-93f5-4c75-8cd3-39d6b17f08d6-kube-api-access-8dt9l\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:21 crc kubenswrapper[4795]: I1129 08:06:21.015368 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39144d64-93f5-4c75-8cd3-39d6b17f08d6-config-data" (OuterVolumeSpecName: "config-data") pod "39144d64-93f5-4c75-8cd3-39d6b17f08d6" (UID: "39144d64-93f5-4c75-8cd3-39d6b17f08d6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:06:21 crc kubenswrapper[4795]: I1129 08:06:21.109949 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r7v2\" (UniqueName: \"kubernetes.io/projected/945f619e-60af-4c36-8ec9-a98d54c15276-kube-api-access-5r7v2\") pod \"cinder-scheduler-0\" (UID: \"945f619e-60af-4c36-8ec9-a98d54c15276\") " pod="openstack/cinder-scheduler-0" Nov 29 08:06:21 crc kubenswrapper[4795]: I1129 08:06:21.110011 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/945f619e-60af-4c36-8ec9-a98d54c15276-config-data\") pod \"cinder-scheduler-0\" (UID: \"945f619e-60af-4c36-8ec9-a98d54c15276\") " pod="openstack/cinder-scheduler-0" Nov 29 08:06:21 crc kubenswrapper[4795]: I1129 08:06:21.110062 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/945f619e-60af-4c36-8ec9-a98d54c15276-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"945f619e-60af-4c36-8ec9-a98d54c15276\") " pod="openstack/cinder-scheduler-0" Nov 29 08:06:21 crc kubenswrapper[4795]: I1129 08:06:21.110264 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/945f619e-60af-4c36-8ec9-a98d54c15276-scripts\") pod \"cinder-scheduler-0\" (UID: \"945f619e-60af-4c36-8ec9-a98d54c15276\") " 
pod="openstack/cinder-scheduler-0" Nov 29 08:06:21 crc kubenswrapper[4795]: I1129 08:06:21.110291 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/945f619e-60af-4c36-8ec9-a98d54c15276-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"945f619e-60af-4c36-8ec9-a98d54c15276\") " pod="openstack/cinder-scheduler-0" Nov 29 08:06:21 crc kubenswrapper[4795]: I1129 08:06:21.110329 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/945f619e-60af-4c36-8ec9-a98d54c15276-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"945f619e-60af-4c36-8ec9-a98d54c15276\") " pod="openstack/cinder-scheduler-0" Nov 29 08:06:21 crc kubenswrapper[4795]: I1129 08:06:21.110437 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39144d64-93f5-4c75-8cd3-39d6b17f08d6-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:21 crc kubenswrapper[4795]: I1129 08:06:21.110501 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/945f619e-60af-4c36-8ec9-a98d54c15276-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"945f619e-60af-4c36-8ec9-a98d54c15276\") " pod="openstack/cinder-scheduler-0" Nov 29 08:06:21 crc kubenswrapper[4795]: I1129 08:06:21.120523 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/945f619e-60af-4c36-8ec9-a98d54c15276-scripts\") pod \"cinder-scheduler-0\" (UID: \"945f619e-60af-4c36-8ec9-a98d54c15276\") " pod="openstack/cinder-scheduler-0" Nov 29 08:06:21 crc kubenswrapper[4795]: I1129 08:06:21.120612 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/945f619e-60af-4c36-8ec9-a98d54c15276-config-data-custom\") pod 
\"cinder-scheduler-0\" (UID: \"945f619e-60af-4c36-8ec9-a98d54c15276\") " pod="openstack/cinder-scheduler-0" Nov 29 08:06:21 crc kubenswrapper[4795]: I1129 08:06:21.120887 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/945f619e-60af-4c36-8ec9-a98d54c15276-config-data\") pod \"cinder-scheduler-0\" (UID: \"945f619e-60af-4c36-8ec9-a98d54c15276\") " pod="openstack/cinder-scheduler-0" Nov 29 08:06:21 crc kubenswrapper[4795]: I1129 08:06:21.121166 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/945f619e-60af-4c36-8ec9-a98d54c15276-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"945f619e-60af-4c36-8ec9-a98d54c15276\") " pod="openstack/cinder-scheduler-0" Nov 29 08:06:21 crc kubenswrapper[4795]: I1129 08:06:21.129139 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r7v2\" (UniqueName: \"kubernetes.io/projected/945f619e-60af-4c36-8ec9-a98d54c15276-kube-api-access-5r7v2\") pod \"cinder-scheduler-0\" (UID: \"945f619e-60af-4c36-8ec9-a98d54c15276\") " pod="openstack/cinder-scheduler-0" Nov 29 08:06:21 crc kubenswrapper[4795]: I1129 08:06:21.234833 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 29 08:06:21 crc kubenswrapper[4795]: I1129 08:06:21.633680 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7db455fcf4-bs6l9" Nov 29 08:06:21 crc kubenswrapper[4795]: W1129 08:06:21.742859 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod945f619e_60af_4c36_8ec9_a98d54c15276.slice/crio-c33cdd80a1420ec877d24ae578597418b1f4611948aac822cba1133cb9ed50a2 WatchSource:0}: Error finding container c33cdd80a1420ec877d24ae578597418b1f4611948aac822cba1133cb9ed50a2: Status 404 returned error can't find the container with id c33cdd80a1420ec877d24ae578597418b1f4611948aac822cba1133cb9ed50a2 Nov 29 08:06:21 crc kubenswrapper[4795]: I1129 08:06:21.745735 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 08:06:21 crc kubenswrapper[4795]: I1129 08:06:21.911123 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"945f619e-60af-4c36-8ec9-a98d54c15276","Type":"ContainerStarted","Data":"c33cdd80a1420ec877d24ae578597418b1f4611948aac822cba1133cb9ed50a2"} Nov 29 08:06:21 crc kubenswrapper[4795]: I1129 08:06:21.932134 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5c8566458-jwt4m" Nov 29 08:06:21 crc kubenswrapper[4795]: I1129 08:06:21.992030 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5c8566458-jwt4m"] Nov 29 08:06:22 crc kubenswrapper[4795]: I1129 08:06:22.002611 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5c8566458-jwt4m"] Nov 29 08:06:22 crc kubenswrapper[4795]: I1129 08:06:22.288425 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39144d64-93f5-4c75-8cd3-39d6b17f08d6" path="/var/lib/kubelet/pods/39144d64-93f5-4c75-8cd3-39d6b17f08d6/volumes" Nov 29 08:06:22 crc kubenswrapper[4795]: I1129 08:06:22.289455 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaf38aac-d198-4ddf-8014-ad9c598601ae" path="/var/lib/kubelet/pods/eaf38aac-d198-4ddf-8014-ad9c598601ae/volumes" Nov 29 08:06:22 crc kubenswrapper[4795]: I1129 08:06:22.950669 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"945f619e-60af-4c36-8ec9-a98d54c15276","Type":"ContainerStarted","Data":"3b3241aad679adee90d200c392fe2ad201e5b6111b78c0f531c4b3c765b865c3"} Nov 29 08:06:23 crc kubenswrapper[4795]: I1129 08:06:23.964808 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"945f619e-60af-4c36-8ec9-a98d54c15276","Type":"ContainerStarted","Data":"0c004bddb079b158b7ae6ac4a7310810a820f12c6fe89ccebe0ced047506e5da"} Nov 29 08:06:23 crc kubenswrapper[4795]: I1129 08:06:23.999055 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.9990351889999998 podStartE2EDuration="3.999035189s" podCreationTimestamp="2025-11-29 08:06:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:06:23.991164396 +0000 UTC m=+1629.966740186" 
watchObservedRunningTime="2025-11-29 08:06:23.999035189 +0000 UTC m=+1629.974610979" Nov 29 08:06:26 crc kubenswrapper[4795]: I1129 08:06:26.197434 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 29 08:06:26 crc kubenswrapper[4795]: I1129 08:06:26.199876 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 29 08:06:26 crc kubenswrapper[4795]: I1129 08:06:26.202561 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 29 08:06:26 crc kubenswrapper[4795]: I1129 08:06:26.203568 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 29 08:06:26 crc kubenswrapper[4795]: I1129 08:06:26.203846 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-k9rp6" Nov 29 08:06:26 crc kubenswrapper[4795]: I1129 08:06:26.214078 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 29 08:06:26 crc kubenswrapper[4795]: I1129 08:06:26.238361 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 29 08:06:26 crc kubenswrapper[4795]: I1129 08:06:26.381852 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gxjg\" (UniqueName: \"kubernetes.io/projected/de5e89b3-d4a1-4ef0-bf3a-814cb09d5ade-kube-api-access-4gxjg\") pod \"openstackclient\" (UID: \"de5e89b3-d4a1-4ef0-bf3a-814cb09d5ade\") " pod="openstack/openstackclient" Nov 29 08:06:26 crc kubenswrapper[4795]: I1129 08:06:26.381906 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5e89b3-d4a1-4ef0-bf3a-814cb09d5ade-combined-ca-bundle\") pod \"openstackclient\" (UID: 
\"de5e89b3-d4a1-4ef0-bf3a-814cb09d5ade\") " pod="openstack/openstackclient" Nov 29 08:06:26 crc kubenswrapper[4795]: I1129 08:06:26.381994 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/de5e89b3-d4a1-4ef0-bf3a-814cb09d5ade-openstack-config-secret\") pod \"openstackclient\" (UID: \"de5e89b3-d4a1-4ef0-bf3a-814cb09d5ade\") " pod="openstack/openstackclient" Nov 29 08:06:26 crc kubenswrapper[4795]: I1129 08:06:26.382132 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/de5e89b3-d4a1-4ef0-bf3a-814cb09d5ade-openstack-config\") pod \"openstackclient\" (UID: \"de5e89b3-d4a1-4ef0-bf3a-814cb09d5ade\") " pod="openstack/openstackclient" Nov 29 08:06:26 crc kubenswrapper[4795]: I1129 08:06:26.484247 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gxjg\" (UniqueName: \"kubernetes.io/projected/de5e89b3-d4a1-4ef0-bf3a-814cb09d5ade-kube-api-access-4gxjg\") pod \"openstackclient\" (UID: \"de5e89b3-d4a1-4ef0-bf3a-814cb09d5ade\") " pod="openstack/openstackclient" Nov 29 08:06:26 crc kubenswrapper[4795]: I1129 08:06:26.484312 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5e89b3-d4a1-4ef0-bf3a-814cb09d5ade-combined-ca-bundle\") pod \"openstackclient\" (UID: \"de5e89b3-d4a1-4ef0-bf3a-814cb09d5ade\") " pod="openstack/openstackclient" Nov 29 08:06:26 crc kubenswrapper[4795]: I1129 08:06:26.484386 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/de5e89b3-d4a1-4ef0-bf3a-814cb09d5ade-openstack-config-secret\") pod \"openstackclient\" (UID: \"de5e89b3-d4a1-4ef0-bf3a-814cb09d5ade\") " pod="openstack/openstackclient" Nov 29 08:06:26 
crc kubenswrapper[4795]: I1129 08:06:26.484486 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/de5e89b3-d4a1-4ef0-bf3a-814cb09d5ade-openstack-config\") pod \"openstackclient\" (UID: \"de5e89b3-d4a1-4ef0-bf3a-814cb09d5ade\") " pod="openstack/openstackclient" Nov 29 08:06:26 crc kubenswrapper[4795]: I1129 08:06:26.485425 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/de5e89b3-d4a1-4ef0-bf3a-814cb09d5ade-openstack-config\") pod \"openstackclient\" (UID: \"de5e89b3-d4a1-4ef0-bf3a-814cb09d5ade\") " pod="openstack/openstackclient" Nov 29 08:06:26 crc kubenswrapper[4795]: I1129 08:06:26.494065 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/de5e89b3-d4a1-4ef0-bf3a-814cb09d5ade-openstack-config-secret\") pod \"openstackclient\" (UID: \"de5e89b3-d4a1-4ef0-bf3a-814cb09d5ade\") " pod="openstack/openstackclient" Nov 29 08:06:26 crc kubenswrapper[4795]: I1129 08:06:26.496834 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5e89b3-d4a1-4ef0-bf3a-814cb09d5ade-combined-ca-bundle\") pod \"openstackclient\" (UID: \"de5e89b3-d4a1-4ef0-bf3a-814cb09d5ade\") " pod="openstack/openstackclient" Nov 29 08:06:26 crc kubenswrapper[4795]: I1129 08:06:26.505990 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 29 08:06:26 crc kubenswrapper[4795]: I1129 08:06:26.506265 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gxjg\" (UniqueName: \"kubernetes.io/projected/de5e89b3-d4a1-4ef0-bf3a-814cb09d5ade-kube-api-access-4gxjg\") pod \"openstackclient\" (UID: \"de5e89b3-d4a1-4ef0-bf3a-814cb09d5ade\") " pod="openstack/openstackclient" Nov 29 08:06:26 crc 
kubenswrapper[4795]: I1129 08:06:26.539580 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 29 08:06:27 crc kubenswrapper[4795]: I1129 08:06:27.489269 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 29 08:06:27 crc kubenswrapper[4795]: W1129 08:06:27.516550 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde5e89b3_d4a1_4ef0_bf3a_814cb09d5ade.slice/crio-df86196fb9d60d97766d01817e20ef7fb11f9f441cca9f0c409a261de9630e21 WatchSource:0}: Error finding container df86196fb9d60d97766d01817e20ef7fb11f9f441cca9f0c409a261de9630e21: Status 404 returned error can't find the container with id df86196fb9d60d97766d01817e20ef7fb11f9f441cca9f0c409a261de9630e21 Nov 29 08:06:28 crc kubenswrapper[4795]: I1129 08:06:28.016972 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"de5e89b3-d4a1-4ef0-bf3a-814cb09d5ade","Type":"ContainerStarted","Data":"df86196fb9d60d97766d01817e20ef7fb11f9f441cca9f0c409a261de9630e21"} Nov 29 08:06:28 crc kubenswrapper[4795]: I1129 08:06:28.687058 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7ff5f779bc-nzx8l"] Nov 29 08:06:28 crc kubenswrapper[4795]: I1129 08:06:28.694478 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-7ff5f779bc-nzx8l" Nov 29 08:06:28 crc kubenswrapper[4795]: I1129 08:06:28.699217 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 29 08:06:28 crc kubenswrapper[4795]: I1129 08:06:28.699475 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 29 08:06:28 crc kubenswrapper[4795]: I1129 08:06:28.699540 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 29 08:06:28 crc kubenswrapper[4795]: I1129 08:06:28.721802 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7ff5f779bc-nzx8l"] Nov 29 08:06:28 crc kubenswrapper[4795]: I1129 08:06:28.770419 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd1d8c65-0785-455c-9991-e32eea8a9b83-public-tls-certs\") pod \"swift-proxy-7ff5f779bc-nzx8l\" (UID: \"dd1d8c65-0785-455c-9991-e32eea8a9b83\") " pod="openstack/swift-proxy-7ff5f779bc-nzx8l" Nov 29 08:06:28 crc kubenswrapper[4795]: I1129 08:06:28.770484 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd1d8c65-0785-455c-9991-e32eea8a9b83-log-httpd\") pod \"swift-proxy-7ff5f779bc-nzx8l\" (UID: \"dd1d8c65-0785-455c-9991-e32eea8a9b83\") " pod="openstack/swift-proxy-7ff5f779bc-nzx8l" Nov 29 08:06:28 crc kubenswrapper[4795]: I1129 08:06:28.770780 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd1d8c65-0785-455c-9991-e32eea8a9b83-config-data\") pod \"swift-proxy-7ff5f779bc-nzx8l\" (UID: \"dd1d8c65-0785-455c-9991-e32eea8a9b83\") " pod="openstack/swift-proxy-7ff5f779bc-nzx8l" Nov 29 08:06:28 crc kubenswrapper[4795]: I1129 
08:06:28.770897 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dd1d8c65-0785-455c-9991-e32eea8a9b83-etc-swift\") pod \"swift-proxy-7ff5f779bc-nzx8l\" (UID: \"dd1d8c65-0785-455c-9991-e32eea8a9b83\") " pod="openstack/swift-proxy-7ff5f779bc-nzx8l" Nov 29 08:06:28 crc kubenswrapper[4795]: I1129 08:06:28.770924 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd1d8c65-0785-455c-9991-e32eea8a9b83-combined-ca-bundle\") pod \"swift-proxy-7ff5f779bc-nzx8l\" (UID: \"dd1d8c65-0785-455c-9991-e32eea8a9b83\") " pod="openstack/swift-proxy-7ff5f779bc-nzx8l" Nov 29 08:06:28 crc kubenswrapper[4795]: I1129 08:06:28.770993 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd1d8c65-0785-455c-9991-e32eea8a9b83-run-httpd\") pod \"swift-proxy-7ff5f779bc-nzx8l\" (UID: \"dd1d8c65-0785-455c-9991-e32eea8a9b83\") " pod="openstack/swift-proxy-7ff5f779bc-nzx8l" Nov 29 08:06:28 crc kubenswrapper[4795]: I1129 08:06:28.771126 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd1d8c65-0785-455c-9991-e32eea8a9b83-internal-tls-certs\") pod \"swift-proxy-7ff5f779bc-nzx8l\" (UID: \"dd1d8c65-0785-455c-9991-e32eea8a9b83\") " pod="openstack/swift-proxy-7ff5f779bc-nzx8l" Nov 29 08:06:28 crc kubenswrapper[4795]: I1129 08:06:28.771335 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzwbx\" (UniqueName: \"kubernetes.io/projected/dd1d8c65-0785-455c-9991-e32eea8a9b83-kube-api-access-lzwbx\") pod \"swift-proxy-7ff5f779bc-nzx8l\" (UID: \"dd1d8c65-0785-455c-9991-e32eea8a9b83\") " pod="openstack/swift-proxy-7ff5f779bc-nzx8l" 
Nov 29 08:06:28 crc kubenswrapper[4795]: I1129 08:06:28.875288 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd1d8c65-0785-455c-9991-e32eea8a9b83-public-tls-certs\") pod \"swift-proxy-7ff5f779bc-nzx8l\" (UID: \"dd1d8c65-0785-455c-9991-e32eea8a9b83\") " pod="openstack/swift-proxy-7ff5f779bc-nzx8l" Nov 29 08:06:28 crc kubenswrapper[4795]: I1129 08:06:28.875864 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd1d8c65-0785-455c-9991-e32eea8a9b83-log-httpd\") pod \"swift-proxy-7ff5f779bc-nzx8l\" (UID: \"dd1d8c65-0785-455c-9991-e32eea8a9b83\") " pod="openstack/swift-proxy-7ff5f779bc-nzx8l" Nov 29 08:06:28 crc kubenswrapper[4795]: I1129 08:06:28.876037 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd1d8c65-0785-455c-9991-e32eea8a9b83-config-data\") pod \"swift-proxy-7ff5f779bc-nzx8l\" (UID: \"dd1d8c65-0785-455c-9991-e32eea8a9b83\") " pod="openstack/swift-proxy-7ff5f779bc-nzx8l" Nov 29 08:06:28 crc kubenswrapper[4795]: I1129 08:06:28.876159 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd1d8c65-0785-455c-9991-e32eea8a9b83-combined-ca-bundle\") pod \"swift-proxy-7ff5f779bc-nzx8l\" (UID: \"dd1d8c65-0785-455c-9991-e32eea8a9b83\") " pod="openstack/swift-proxy-7ff5f779bc-nzx8l" Nov 29 08:06:28 crc kubenswrapper[4795]: I1129 08:06:28.876268 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dd1d8c65-0785-455c-9991-e32eea8a9b83-etc-swift\") pod \"swift-proxy-7ff5f779bc-nzx8l\" (UID: \"dd1d8c65-0785-455c-9991-e32eea8a9b83\") " pod="openstack/swift-proxy-7ff5f779bc-nzx8l" Nov 29 08:06:28 crc kubenswrapper[4795]: I1129 08:06:28.876387 4795 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd1d8c65-0785-455c-9991-e32eea8a9b83-run-httpd\") pod \"swift-proxy-7ff5f779bc-nzx8l\" (UID: \"dd1d8c65-0785-455c-9991-e32eea8a9b83\") " pod="openstack/swift-proxy-7ff5f779bc-nzx8l" Nov 29 08:06:28 crc kubenswrapper[4795]: I1129 08:06:28.876438 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd1d8c65-0785-455c-9991-e32eea8a9b83-log-httpd\") pod \"swift-proxy-7ff5f779bc-nzx8l\" (UID: \"dd1d8c65-0785-455c-9991-e32eea8a9b83\") " pod="openstack/swift-proxy-7ff5f779bc-nzx8l" Nov 29 08:06:28 crc kubenswrapper[4795]: I1129 08:06:28.876571 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd1d8c65-0785-455c-9991-e32eea8a9b83-internal-tls-certs\") pod \"swift-proxy-7ff5f779bc-nzx8l\" (UID: \"dd1d8c65-0785-455c-9991-e32eea8a9b83\") " pod="openstack/swift-proxy-7ff5f779bc-nzx8l" Nov 29 08:06:28 crc kubenswrapper[4795]: I1129 08:06:28.876853 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd1d8c65-0785-455c-9991-e32eea8a9b83-run-httpd\") pod \"swift-proxy-7ff5f779bc-nzx8l\" (UID: \"dd1d8c65-0785-455c-9991-e32eea8a9b83\") " pod="openstack/swift-proxy-7ff5f779bc-nzx8l" Nov 29 08:06:28 crc kubenswrapper[4795]: I1129 08:06:28.877544 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzwbx\" (UniqueName: \"kubernetes.io/projected/dd1d8c65-0785-455c-9991-e32eea8a9b83-kube-api-access-lzwbx\") pod \"swift-proxy-7ff5f779bc-nzx8l\" (UID: \"dd1d8c65-0785-455c-9991-e32eea8a9b83\") " pod="openstack/swift-proxy-7ff5f779bc-nzx8l" Nov 29 08:06:28 crc kubenswrapper[4795]: I1129 08:06:28.883668 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" 
(UniqueName: \"kubernetes.io/projected/dd1d8c65-0785-455c-9991-e32eea8a9b83-etc-swift\") pod \"swift-proxy-7ff5f779bc-nzx8l\" (UID: \"dd1d8c65-0785-455c-9991-e32eea8a9b83\") " pod="openstack/swift-proxy-7ff5f779bc-nzx8l" Nov 29 08:06:28 crc kubenswrapper[4795]: I1129 08:06:28.883765 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd1d8c65-0785-455c-9991-e32eea8a9b83-combined-ca-bundle\") pod \"swift-proxy-7ff5f779bc-nzx8l\" (UID: \"dd1d8c65-0785-455c-9991-e32eea8a9b83\") " pod="openstack/swift-proxy-7ff5f779bc-nzx8l" Nov 29 08:06:28 crc kubenswrapper[4795]: I1129 08:06:28.885118 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd1d8c65-0785-455c-9991-e32eea8a9b83-public-tls-certs\") pod \"swift-proxy-7ff5f779bc-nzx8l\" (UID: \"dd1d8c65-0785-455c-9991-e32eea8a9b83\") " pod="openstack/swift-proxy-7ff5f779bc-nzx8l" Nov 29 08:06:28 crc kubenswrapper[4795]: I1129 08:06:28.894891 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd1d8c65-0785-455c-9991-e32eea8a9b83-config-data\") pod \"swift-proxy-7ff5f779bc-nzx8l\" (UID: \"dd1d8c65-0785-455c-9991-e32eea8a9b83\") " pod="openstack/swift-proxy-7ff5f779bc-nzx8l" Nov 29 08:06:28 crc kubenswrapper[4795]: I1129 08:06:28.895895 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd1d8c65-0785-455c-9991-e32eea8a9b83-internal-tls-certs\") pod \"swift-proxy-7ff5f779bc-nzx8l\" (UID: \"dd1d8c65-0785-455c-9991-e32eea8a9b83\") " pod="openstack/swift-proxy-7ff5f779bc-nzx8l" Nov 29 08:06:28 crc kubenswrapper[4795]: I1129 08:06:28.898246 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzwbx\" (UniqueName: 
\"kubernetes.io/projected/dd1d8c65-0785-455c-9991-e32eea8a9b83-kube-api-access-lzwbx\") pod \"swift-proxy-7ff5f779bc-nzx8l\" (UID: \"dd1d8c65-0785-455c-9991-e32eea8a9b83\") " pod="openstack/swift-proxy-7ff5f779bc-nzx8l" Nov 29 08:06:29 crc kubenswrapper[4795]: I1129 08:06:29.043107 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:06:29 crc kubenswrapper[4795]: I1129 08:06:29.043447 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b070ef9-63f2-4d88-b5e1-3de486b62c80" containerName="proxy-httpd" containerID="cri-o://fd8f64cc7b0d15aeac63e310457d432625560f43f404e4b7c6e56184ba77042a" gracePeriod=30 Nov 29 08:06:29 crc kubenswrapper[4795]: I1129 08:06:29.043473 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b070ef9-63f2-4d88-b5e1-3de486b62c80" containerName="sg-core" containerID="cri-o://212c23e9602e99a4f83a1904309fbfbb8e7afe58f0d9f77ef1755e97f8111710" gracePeriod=30 Nov 29 08:06:29 crc kubenswrapper[4795]: I1129 08:06:29.043690 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b070ef9-63f2-4d88-b5e1-3de486b62c80" containerName="ceilometer-notification-agent" containerID="cri-o://1b39c09129a7b6363e7a2c86b5a144dfdf2d91894055a17bfb0a859d950b2c61" gracePeriod=30 Nov 29 08:06:29 crc kubenswrapper[4795]: I1129 08:06:29.043799 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b070ef9-63f2-4d88-b5e1-3de486b62c80" containerName="ceilometer-central-agent" containerID="cri-o://11345ce1abaf5eba72c8acbedd384cd7249a43800459b32530fea8476aa2dce8" gracePeriod=30 Nov 29 08:06:29 crc kubenswrapper[4795]: I1129 08:06:29.063188 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-7ff5f779bc-nzx8l" Nov 29 08:06:29 crc kubenswrapper[4795]: I1129 08:06:29.070267 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="6b070ef9-63f2-4d88-b5e1-3de486b62c80" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.204:3000/\": EOF" Nov 29 08:06:29 crc kubenswrapper[4795]: I1129 08:06:29.866890 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7ff5f779bc-nzx8l"] Nov 29 08:06:29 crc kubenswrapper[4795]: W1129 08:06:29.879981 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd1d8c65_0785_455c_9991_e32eea8a9b83.slice/crio-6effeb60249c760118c05c6cf0f058de8bb6162169c8d684ebdf178ce07d4def WatchSource:0}: Error finding container 6effeb60249c760118c05c6cf0f058de8bb6162169c8d684ebdf178ce07d4def: Status 404 returned error can't find the container with id 6effeb60249c760118c05c6cf0f058de8bb6162169c8d684ebdf178ce07d4def Nov 29 08:06:30 crc kubenswrapper[4795]: I1129 08:06:30.085586 4795 generic.go:334] "Generic (PLEG): container finished" podID="6b070ef9-63f2-4d88-b5e1-3de486b62c80" containerID="fd8f64cc7b0d15aeac63e310457d432625560f43f404e4b7c6e56184ba77042a" exitCode=0 Nov 29 08:06:30 crc kubenswrapper[4795]: I1129 08:06:30.085644 4795 generic.go:334] "Generic (PLEG): container finished" podID="6b070ef9-63f2-4d88-b5e1-3de486b62c80" containerID="212c23e9602e99a4f83a1904309fbfbb8e7afe58f0d9f77ef1755e97f8111710" exitCode=2 Nov 29 08:06:30 crc kubenswrapper[4795]: I1129 08:06:30.085654 4795 generic.go:334] "Generic (PLEG): container finished" podID="6b070ef9-63f2-4d88-b5e1-3de486b62c80" containerID="11345ce1abaf5eba72c8acbedd384cd7249a43800459b32530fea8476aa2dce8" exitCode=0 Nov 29 08:06:30 crc kubenswrapper[4795]: I1129 08:06:30.085638 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"6b070ef9-63f2-4d88-b5e1-3de486b62c80","Type":"ContainerDied","Data":"fd8f64cc7b0d15aeac63e310457d432625560f43f404e4b7c6e56184ba77042a"} Nov 29 08:06:30 crc kubenswrapper[4795]: I1129 08:06:30.085705 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b070ef9-63f2-4d88-b5e1-3de486b62c80","Type":"ContainerDied","Data":"212c23e9602e99a4f83a1904309fbfbb8e7afe58f0d9f77ef1755e97f8111710"} Nov 29 08:06:30 crc kubenswrapper[4795]: I1129 08:06:30.085722 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b070ef9-63f2-4d88-b5e1-3de486b62c80","Type":"ContainerDied","Data":"11345ce1abaf5eba72c8acbedd384cd7249a43800459b32530fea8476aa2dce8"} Nov 29 08:06:30 crc kubenswrapper[4795]: I1129 08:06:30.087580 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7ff5f779bc-nzx8l" event={"ID":"dd1d8c65-0785-455c-9991-e32eea8a9b83","Type":"ContainerStarted","Data":"6effeb60249c760118c05c6cf0f058de8bb6162169c8d684ebdf178ce07d4def"} Nov 29 08:06:30 crc kubenswrapper[4795]: I1129 08:06:30.692479 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:06:30 crc kubenswrapper[4795]: I1129 08:06:30.858334 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9g5c\" (UniqueName: \"kubernetes.io/projected/6b070ef9-63f2-4d88-b5e1-3de486b62c80-kube-api-access-z9g5c\") pod \"6b070ef9-63f2-4d88-b5e1-3de486b62c80\" (UID: \"6b070ef9-63f2-4d88-b5e1-3de486b62c80\") " Nov 29 08:06:30 crc kubenswrapper[4795]: I1129 08:06:30.858468 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b070ef9-63f2-4d88-b5e1-3de486b62c80-config-data\") pod \"6b070ef9-63f2-4d88-b5e1-3de486b62c80\" (UID: \"6b070ef9-63f2-4d88-b5e1-3de486b62c80\") " Nov 29 08:06:30 crc kubenswrapper[4795]: I1129 08:06:30.858533 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b070ef9-63f2-4d88-b5e1-3de486b62c80-combined-ca-bundle\") pod \"6b070ef9-63f2-4d88-b5e1-3de486b62c80\" (UID: \"6b070ef9-63f2-4d88-b5e1-3de486b62c80\") " Nov 29 08:06:30 crc kubenswrapper[4795]: I1129 08:06:30.858585 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b070ef9-63f2-4d88-b5e1-3de486b62c80-run-httpd\") pod \"6b070ef9-63f2-4d88-b5e1-3de486b62c80\" (UID: \"6b070ef9-63f2-4d88-b5e1-3de486b62c80\") " Nov 29 08:06:30 crc kubenswrapper[4795]: I1129 08:06:30.858666 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b070ef9-63f2-4d88-b5e1-3de486b62c80-sg-core-conf-yaml\") pod \"6b070ef9-63f2-4d88-b5e1-3de486b62c80\" (UID: \"6b070ef9-63f2-4d88-b5e1-3de486b62c80\") " Nov 29 08:06:30 crc kubenswrapper[4795]: I1129 08:06:30.858771 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/6b070ef9-63f2-4d88-b5e1-3de486b62c80-log-httpd\") pod \"6b070ef9-63f2-4d88-b5e1-3de486b62c80\" (UID: \"6b070ef9-63f2-4d88-b5e1-3de486b62c80\") " Nov 29 08:06:30 crc kubenswrapper[4795]: I1129 08:06:30.858791 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b070ef9-63f2-4d88-b5e1-3de486b62c80-scripts\") pod \"6b070ef9-63f2-4d88-b5e1-3de486b62c80\" (UID: \"6b070ef9-63f2-4d88-b5e1-3de486b62c80\") " Nov 29 08:06:30 crc kubenswrapper[4795]: I1129 08:06:30.864860 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b070ef9-63f2-4d88-b5e1-3de486b62c80-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6b070ef9-63f2-4d88-b5e1-3de486b62c80" (UID: "6b070ef9-63f2-4d88-b5e1-3de486b62c80"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:06:30 crc kubenswrapper[4795]: I1129 08:06:30.865349 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b070ef9-63f2-4d88-b5e1-3de486b62c80-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6b070ef9-63f2-4d88-b5e1-3de486b62c80" (UID: "6b070ef9-63f2-4d88-b5e1-3de486b62c80"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:06:30 crc kubenswrapper[4795]: I1129 08:06:30.866768 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b070ef9-63f2-4d88-b5e1-3de486b62c80-kube-api-access-z9g5c" (OuterVolumeSpecName: "kube-api-access-z9g5c") pod "6b070ef9-63f2-4d88-b5e1-3de486b62c80" (UID: "6b070ef9-63f2-4d88-b5e1-3de486b62c80"). InnerVolumeSpecName "kube-api-access-z9g5c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:06:30 crc kubenswrapper[4795]: I1129 08:06:30.866867 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b070ef9-63f2-4d88-b5e1-3de486b62c80-scripts" (OuterVolumeSpecName: "scripts") pod "6b070ef9-63f2-4d88-b5e1-3de486b62c80" (UID: "6b070ef9-63f2-4d88-b5e1-3de486b62c80"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:06:30 crc kubenswrapper[4795]: I1129 08:06:30.921355 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b070ef9-63f2-4d88-b5e1-3de486b62c80-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6b070ef9-63f2-4d88-b5e1-3de486b62c80" (UID: "6b070ef9-63f2-4d88-b5e1-3de486b62c80"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:06:30 crc kubenswrapper[4795]: I1129 08:06:30.962705 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9g5c\" (UniqueName: \"kubernetes.io/projected/6b070ef9-63f2-4d88-b5e1-3de486b62c80-kube-api-access-z9g5c\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:30 crc kubenswrapper[4795]: I1129 08:06:30.962750 4795 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b070ef9-63f2-4d88-b5e1-3de486b62c80-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:30 crc kubenswrapper[4795]: I1129 08:06:30.962763 4795 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b070ef9-63f2-4d88-b5e1-3de486b62c80-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:30 crc kubenswrapper[4795]: I1129 08:06:30.962774 4795 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b070ef9-63f2-4d88-b5e1-3de486b62c80-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:30 crc 
kubenswrapper[4795]: I1129 08:06:30.962786 4795 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b070ef9-63f2-4d88-b5e1-3de486b62c80-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:30 crc kubenswrapper[4795]: I1129 08:06:30.996853 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b070ef9-63f2-4d88-b5e1-3de486b62c80-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b070ef9-63f2-4d88-b5e1-3de486b62c80" (UID: "6b070ef9-63f2-4d88-b5e1-3de486b62c80"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.064533 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b070ef9-63f2-4d88-b5e1-3de486b62c80-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.065623 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b070ef9-63f2-4d88-b5e1-3de486b62c80-config-data" (OuterVolumeSpecName: "config-data") pod "6b070ef9-63f2-4d88-b5e1-3de486b62c80" (UID: "6b070ef9-63f2-4d88-b5e1-3de486b62c80"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.107476 4795 generic.go:334] "Generic (PLEG): container finished" podID="6b070ef9-63f2-4d88-b5e1-3de486b62c80" containerID="1b39c09129a7b6363e7a2c86b5a144dfdf2d91894055a17bfb0a859d950b2c61" exitCode=0 Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.107632 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.107668 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b070ef9-63f2-4d88-b5e1-3de486b62c80","Type":"ContainerDied","Data":"1b39c09129a7b6363e7a2c86b5a144dfdf2d91894055a17bfb0a859d950b2c61"} Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.107703 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b070ef9-63f2-4d88-b5e1-3de486b62c80","Type":"ContainerDied","Data":"14fd4eec865107536815b05b299d141824d8f4b2e49120dd4b502708ea9a75fa"} Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.107722 4795 scope.go:117] "RemoveContainer" containerID="fd8f64cc7b0d15aeac63e310457d432625560f43f404e4b7c6e56184ba77042a" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.123346 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7ff5f779bc-nzx8l" event={"ID":"dd1d8c65-0785-455c-9991-e32eea8a9b83","Type":"ContainerStarted","Data":"045b230e0a101afaca0cf2cb0ce7f7a990ac39f786800d27c960ca4fa8865e31"} Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.123406 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7ff5f779bc-nzx8l" event={"ID":"dd1d8c65-0785-455c-9991-e32eea8a9b83","Type":"ContainerStarted","Data":"8c8d41ddb6e60c29fe551572532369e89e5742136e0c10b3ce33ccf40b54abe2"} Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.123566 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7ff5f779bc-nzx8l" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.123676 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7ff5f779bc-nzx8l" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.157307 4795 scope.go:117] "RemoveContainer" containerID="212c23e9602e99a4f83a1904309fbfbb8e7afe58f0d9f77ef1755e97f8111710" Nov 29 
08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.166794 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b070ef9-63f2-4d88-b5e1-3de486b62c80-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.172490 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7ff5f779bc-nzx8l" podStartSLOduration=3.1724690349999998 podStartE2EDuration="3.172469035s" podCreationTimestamp="2025-11-29 08:06:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:06:31.157651315 +0000 UTC m=+1637.133227105" watchObservedRunningTime="2025-11-29 08:06:31.172469035 +0000 UTC m=+1637.148044825" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.206744 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.228818 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.232234 4795 scope.go:117] "RemoveContainer" containerID="1b39c09129a7b6363e7a2c86b5a144dfdf2d91894055a17bfb0a859d950b2c61" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.252684 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:06:31 crc kubenswrapper[4795]: E1129 08:06:31.253266 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b070ef9-63f2-4d88-b5e1-3de486b62c80" containerName="proxy-httpd" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.253283 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b070ef9-63f2-4d88-b5e1-3de486b62c80" containerName="proxy-httpd" Nov 29 08:06:31 crc kubenswrapper[4795]: E1129 08:06:31.253331 4795 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6b070ef9-63f2-4d88-b5e1-3de486b62c80" containerName="ceilometer-notification-agent" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.253340 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b070ef9-63f2-4d88-b5e1-3de486b62c80" containerName="ceilometer-notification-agent" Nov 29 08:06:31 crc kubenswrapper[4795]: E1129 08:06:31.253359 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b070ef9-63f2-4d88-b5e1-3de486b62c80" containerName="sg-core" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.253367 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b070ef9-63f2-4d88-b5e1-3de486b62c80" containerName="sg-core" Nov 29 08:06:31 crc kubenswrapper[4795]: E1129 08:06:31.253395 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b070ef9-63f2-4d88-b5e1-3de486b62c80" containerName="ceilometer-central-agent" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.253403 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b070ef9-63f2-4d88-b5e1-3de486b62c80" containerName="ceilometer-central-agent" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.253670 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b070ef9-63f2-4d88-b5e1-3de486b62c80" containerName="ceilometer-notification-agent" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.253690 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b070ef9-63f2-4d88-b5e1-3de486b62c80" containerName="sg-core" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.253705 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b070ef9-63f2-4d88-b5e1-3de486b62c80" containerName="proxy-httpd" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.253774 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b070ef9-63f2-4d88-b5e1-3de486b62c80" containerName="ceilometer-central-agent" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.256610 4795 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.257305 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.263991 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.264464 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.292776 4795 scope.go:117] "RemoveContainer" containerID="11345ce1abaf5eba72c8acbedd384cd7249a43800459b32530fea8476aa2dce8" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.342055 4795 scope.go:117] "RemoveContainer" containerID="fd8f64cc7b0d15aeac63e310457d432625560f43f404e4b7c6e56184ba77042a" Nov 29 08:06:31 crc kubenswrapper[4795]: E1129 08:06:31.342732 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd8f64cc7b0d15aeac63e310457d432625560f43f404e4b7c6e56184ba77042a\": container with ID starting with fd8f64cc7b0d15aeac63e310457d432625560f43f404e4b7c6e56184ba77042a not found: ID does not exist" containerID="fd8f64cc7b0d15aeac63e310457d432625560f43f404e4b7c6e56184ba77042a" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.342768 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd8f64cc7b0d15aeac63e310457d432625560f43f404e4b7c6e56184ba77042a"} err="failed to get container status \"fd8f64cc7b0d15aeac63e310457d432625560f43f404e4b7c6e56184ba77042a\": rpc error: code = NotFound desc = could not find container \"fd8f64cc7b0d15aeac63e310457d432625560f43f404e4b7c6e56184ba77042a\": container with ID starting with fd8f64cc7b0d15aeac63e310457d432625560f43f404e4b7c6e56184ba77042a not found: ID does not exist" Nov 29 08:06:31 
crc kubenswrapper[4795]: I1129 08:06:31.342797 4795 scope.go:117] "RemoveContainer" containerID="212c23e9602e99a4f83a1904309fbfbb8e7afe58f0d9f77ef1755e97f8111710" Nov 29 08:06:31 crc kubenswrapper[4795]: E1129 08:06:31.343134 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"212c23e9602e99a4f83a1904309fbfbb8e7afe58f0d9f77ef1755e97f8111710\": container with ID starting with 212c23e9602e99a4f83a1904309fbfbb8e7afe58f0d9f77ef1755e97f8111710 not found: ID does not exist" containerID="212c23e9602e99a4f83a1904309fbfbb8e7afe58f0d9f77ef1755e97f8111710" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.343165 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"212c23e9602e99a4f83a1904309fbfbb8e7afe58f0d9f77ef1755e97f8111710"} err="failed to get container status \"212c23e9602e99a4f83a1904309fbfbb8e7afe58f0d9f77ef1755e97f8111710\": rpc error: code = NotFound desc = could not find container \"212c23e9602e99a4f83a1904309fbfbb8e7afe58f0d9f77ef1755e97f8111710\": container with ID starting with 212c23e9602e99a4f83a1904309fbfbb8e7afe58f0d9f77ef1755e97f8111710 not found: ID does not exist" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.343182 4795 scope.go:117] "RemoveContainer" containerID="1b39c09129a7b6363e7a2c86b5a144dfdf2d91894055a17bfb0a859d950b2c61" Nov 29 08:06:31 crc kubenswrapper[4795]: E1129 08:06:31.343644 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b39c09129a7b6363e7a2c86b5a144dfdf2d91894055a17bfb0a859d950b2c61\": container with ID starting with 1b39c09129a7b6363e7a2c86b5a144dfdf2d91894055a17bfb0a859d950b2c61 not found: ID does not exist" containerID="1b39c09129a7b6363e7a2c86b5a144dfdf2d91894055a17bfb0a859d950b2c61" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.343684 4795 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1b39c09129a7b6363e7a2c86b5a144dfdf2d91894055a17bfb0a859d950b2c61"} err="failed to get container status \"1b39c09129a7b6363e7a2c86b5a144dfdf2d91894055a17bfb0a859d950b2c61\": rpc error: code = NotFound desc = could not find container \"1b39c09129a7b6363e7a2c86b5a144dfdf2d91894055a17bfb0a859d950b2c61\": container with ID starting with 1b39c09129a7b6363e7a2c86b5a144dfdf2d91894055a17bfb0a859d950b2c61 not found: ID does not exist" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.343714 4795 scope.go:117] "RemoveContainer" containerID="11345ce1abaf5eba72c8acbedd384cd7249a43800459b32530fea8476aa2dce8" Nov 29 08:06:31 crc kubenswrapper[4795]: E1129 08:06:31.344131 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11345ce1abaf5eba72c8acbedd384cd7249a43800459b32530fea8476aa2dce8\": container with ID starting with 11345ce1abaf5eba72c8acbedd384cd7249a43800459b32530fea8476aa2dce8 not found: ID does not exist" containerID="11345ce1abaf5eba72c8acbedd384cd7249a43800459b32530fea8476aa2dce8" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.344252 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11345ce1abaf5eba72c8acbedd384cd7249a43800459b32530fea8476aa2dce8"} err="failed to get container status \"11345ce1abaf5eba72c8acbedd384cd7249a43800459b32530fea8476aa2dce8\": rpc error: code = NotFound desc = could not find container \"11345ce1abaf5eba72c8acbedd384cd7249a43800459b32530fea8476aa2dce8\": container with ID starting with 11345ce1abaf5eba72c8acbedd384cd7249a43800459b32530fea8476aa2dce8 not found: ID does not exist" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.370265 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-log-httpd\") pod \"ceilometer-0\" (UID: 
\"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d\") " pod="openstack/ceilometer-0" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.370393 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d\") " pod="openstack/ceilometer-0" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.370480 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-scripts\") pod \"ceilometer-0\" (UID: \"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d\") " pod="openstack/ceilometer-0" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.370527 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqmk9\" (UniqueName: \"kubernetes.io/projected/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-kube-api-access-pqmk9\") pod \"ceilometer-0\" (UID: \"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d\") " pod="openstack/ceilometer-0" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.370602 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-run-httpd\") pod \"ceilometer-0\" (UID: \"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d\") " pod="openstack/ceilometer-0" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.370644 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-config-data\") pod \"ceilometer-0\" (UID: \"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d\") " pod="openstack/ceilometer-0" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.370705 
4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d\") " pod="openstack/ceilometer-0" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.474235 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d\") " pod="openstack/ceilometer-0" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.474322 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-scripts\") pod \"ceilometer-0\" (UID: \"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d\") " pod="openstack/ceilometer-0" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.474370 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqmk9\" (UniqueName: \"kubernetes.io/projected/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-kube-api-access-pqmk9\") pod \"ceilometer-0\" (UID: \"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d\") " pod="openstack/ceilometer-0" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.474411 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-run-httpd\") pod \"ceilometer-0\" (UID: \"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d\") " pod="openstack/ceilometer-0" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.474441 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-config-data\") pod 
\"ceilometer-0\" (UID: \"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d\") " pod="openstack/ceilometer-0" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.474476 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d\") " pod="openstack/ceilometer-0" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.474523 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-log-httpd\") pod \"ceilometer-0\" (UID: \"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d\") " pod="openstack/ceilometer-0" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.475131 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-log-httpd\") pod \"ceilometer-0\" (UID: \"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d\") " pod="openstack/ceilometer-0" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.478395 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-run-httpd\") pod \"ceilometer-0\" (UID: \"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d\") " pod="openstack/ceilometer-0" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.482688 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-config-data\") pod \"ceilometer-0\" (UID: \"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d\") " pod="openstack/ceilometer-0" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.483163 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d\") " pod="openstack/ceilometer-0" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.489517 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d\") " pod="openstack/ceilometer-0" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.496609 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-scripts\") pod \"ceilometer-0\" (UID: \"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d\") " pod="openstack/ceilometer-0" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.511613 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqmk9\" (UniqueName: \"kubernetes.io/projected/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-kube-api-access-pqmk9\") pod \"ceilometer-0\" (UID: \"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d\") " pod="openstack/ceilometer-0" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.575059 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 29 08:06:31 crc kubenswrapper[4795]: I1129 08:06:31.597350 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:06:32 crc kubenswrapper[4795]: I1129 08:06:32.195833 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:06:32 crc kubenswrapper[4795]: W1129 08:06:32.215863 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfeb0fddc_7d3b_4aec_bc39_a9f36e23d61d.slice/crio-cb7fe4aa31fac838621bbd710155725cdcb62de954e8a4058eb9269cc5d3105b WatchSource:0}: Error finding container cb7fe4aa31fac838621bbd710155725cdcb62de954e8a4058eb9269cc5d3105b: Status 404 returned error can't find the container with id cb7fe4aa31fac838621bbd710155725cdcb62de954e8a4058eb9269cc5d3105b Nov 29 08:06:32 crc kubenswrapper[4795]: I1129 08:06:32.291722 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b070ef9-63f2-4d88-b5e1-3de486b62c80" path="/var/lib/kubelet/pods/6b070ef9-63f2-4d88-b5e1-3de486b62c80/volumes" Nov 29 08:06:33 crc kubenswrapper[4795]: I1129 08:06:33.167211 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d","Type":"ContainerStarted","Data":"9b337f0b4a586b4a0d1fb02ca546492820bae61736791b1c2a523ff1b1866713"} Nov 29 08:06:33 crc kubenswrapper[4795]: I1129 08:06:33.167327 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d","Type":"ContainerStarted","Data":"cb7fe4aa31fac838621bbd710155725cdcb62de954e8a4058eb9269cc5d3105b"} Nov 29 08:06:34 crc kubenswrapper[4795]: I1129 08:06:34.212643 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d","Type":"ContainerStarted","Data":"d02ef8c2d9b8e692689ea53b5d36d4aba55d1ca6d7e37f4309deb1a640ba6063"} Nov 29 08:06:34 crc kubenswrapper[4795]: I1129 08:06:34.987946 4795 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/ceilometer-0"] Nov 29 08:06:35 crc kubenswrapper[4795]: I1129 08:06:35.233965 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d","Type":"ContainerStarted","Data":"56fec66f0900f9de49b4ae67aa68059931b4f857f338a3a9649da3926ded1f26"} Nov 29 08:06:39 crc kubenswrapper[4795]: I1129 08:06:39.070815 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7ff5f779bc-nzx8l" Nov 29 08:06:39 crc kubenswrapper[4795]: I1129 08:06:39.073346 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7ff5f779bc-nzx8l" Nov 29 08:06:40 crc kubenswrapper[4795]: I1129 08:06:40.869651 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-fd5dc85fb-fss4q"] Nov 29 08:06:40 crc kubenswrapper[4795]: I1129 08:06:40.871875 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-fd5dc85fb-fss4q" Nov 29 08:06:40 crc kubenswrapper[4795]: I1129 08:06:40.879998 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-5vxbl" Nov 29 08:06:40 crc kubenswrapper[4795]: I1129 08:06:40.880215 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 29 08:06:40 crc kubenswrapper[4795]: I1129 08:06:40.880799 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Nov 29 08:06:40 crc kubenswrapper[4795]: I1129 08:06:40.892770 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-fd5dc85fb-fss4q"] Nov 29 08:06:40 crc kubenswrapper[4795]: I1129 08:06:40.981011 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-krdd7"] Nov 29 08:06:40 crc kubenswrapper[4795]: I1129 08:06:40.983668 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-krdd7" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.023732 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-krdd7"] Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.024940 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29304def-4cbe-4b1d-abb4-c7bd11587183-combined-ca-bundle\") pod \"heat-engine-fd5dc85fb-fss4q\" (UID: \"29304def-4cbe-4b1d-abb4-c7bd11587183\") " pod="openstack/heat-engine-fd5dc85fb-fss4q" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.024969 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29304def-4cbe-4b1d-abb4-c7bd11587183-config-data-custom\") pod \"heat-engine-fd5dc85fb-fss4q\" (UID: \"29304def-4cbe-4b1d-abb4-c7bd11587183\") " pod="openstack/heat-engine-fd5dc85fb-fss4q" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.025041 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29304def-4cbe-4b1d-abb4-c7bd11587183-config-data\") pod \"heat-engine-fd5dc85fb-fss4q\" (UID: \"29304def-4cbe-4b1d-abb4-c7bd11587183\") " pod="openstack/heat-engine-fd5dc85fb-fss4q" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.025209 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48ddz\" (UniqueName: \"kubernetes.io/projected/29304def-4cbe-4b1d-abb4-c7bd11587183-kube-api-access-48ddz\") pod \"heat-engine-fd5dc85fb-fss4q\" (UID: \"29304def-4cbe-4b1d-abb4-c7bd11587183\") " pod="openstack/heat-engine-fd5dc85fb-fss4q" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.075077 4795 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/heat-cfnapi-55879c458d-ms4f2"] Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.076553 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-55879c458d-ms4f2" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.094340 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.143144 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48ddz\" (UniqueName: \"kubernetes.io/projected/29304def-4cbe-4b1d-abb4-c7bd11587183-kube-api-access-48ddz\") pod \"heat-engine-fd5dc85fb-fss4q\" (UID: \"29304def-4cbe-4b1d-abb4-c7bd11587183\") " pod="openstack/heat-engine-fd5dc85fb-fss4q" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.143250 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29304def-4cbe-4b1d-abb4-c7bd11587183-combined-ca-bundle\") pod \"heat-engine-fd5dc85fb-fss4q\" (UID: \"29304def-4cbe-4b1d-abb4-c7bd11587183\") " pod="openstack/heat-engine-fd5dc85fb-fss4q" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.143280 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29304def-4cbe-4b1d-abb4-c7bd11587183-config-data-custom\") pod \"heat-engine-fd5dc85fb-fss4q\" (UID: \"29304def-4cbe-4b1d-abb4-c7bd11587183\") " pod="openstack/heat-engine-fd5dc85fb-fss4q" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.143332 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f7a16be4-8056-4ba8-9720-6503361132f4-ovsdbserver-nb\") pod \"dnsmasq-dns-7d978555f9-krdd7\" (UID: \"f7a16be4-8056-4ba8-9720-6503361132f4\") " pod="openstack/dnsmasq-dns-7d978555f9-krdd7" Nov 29 
08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.143373 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf9lp\" (UniqueName: \"kubernetes.io/projected/f7a16be4-8056-4ba8-9720-6503361132f4-kube-api-access-zf9lp\") pod \"dnsmasq-dns-7d978555f9-krdd7\" (UID: \"f7a16be4-8056-4ba8-9720-6503361132f4\") " pod="openstack/dnsmasq-dns-7d978555f9-krdd7" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.143400 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f7a16be4-8056-4ba8-9720-6503361132f4-dns-swift-storage-0\") pod \"dnsmasq-dns-7d978555f9-krdd7\" (UID: \"f7a16be4-8056-4ba8-9720-6503361132f4\") " pod="openstack/dnsmasq-dns-7d978555f9-krdd7" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.144451 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f7a16be4-8056-4ba8-9720-6503361132f4-ovsdbserver-sb\") pod \"dnsmasq-dns-7d978555f9-krdd7\" (UID: \"f7a16be4-8056-4ba8-9720-6503361132f4\") " pod="openstack/dnsmasq-dns-7d978555f9-krdd7" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.144550 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29304def-4cbe-4b1d-abb4-c7bd11587183-config-data\") pod \"heat-engine-fd5dc85fb-fss4q\" (UID: \"29304def-4cbe-4b1d-abb4-c7bd11587183\") " pod="openstack/heat-engine-fd5dc85fb-fss4q" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.144637 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7a16be4-8056-4ba8-9720-6503361132f4-config\") pod \"dnsmasq-dns-7d978555f9-krdd7\" (UID: \"f7a16be4-8056-4ba8-9720-6503361132f4\") " 
pod="openstack/dnsmasq-dns-7d978555f9-krdd7" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.145130 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7a16be4-8056-4ba8-9720-6503361132f4-dns-svc\") pod \"dnsmasq-dns-7d978555f9-krdd7\" (UID: \"f7a16be4-8056-4ba8-9720-6503361132f4\") " pod="openstack/dnsmasq-dns-7d978555f9-krdd7" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.156626 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-55879c458d-ms4f2"] Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.178886 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29304def-4cbe-4b1d-abb4-c7bd11587183-config-data-custom\") pod \"heat-engine-fd5dc85fb-fss4q\" (UID: \"29304def-4cbe-4b1d-abb4-c7bd11587183\") " pod="openstack/heat-engine-fd5dc85fb-fss4q" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.180583 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29304def-4cbe-4b1d-abb4-c7bd11587183-config-data\") pod \"heat-engine-fd5dc85fb-fss4q\" (UID: \"29304def-4cbe-4b1d-abb4-c7bd11587183\") " pod="openstack/heat-engine-fd5dc85fb-fss4q" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.189745 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48ddz\" (UniqueName: \"kubernetes.io/projected/29304def-4cbe-4b1d-abb4-c7bd11587183-kube-api-access-48ddz\") pod \"heat-engine-fd5dc85fb-fss4q\" (UID: \"29304def-4cbe-4b1d-abb4-c7bd11587183\") " pod="openstack/heat-engine-fd5dc85fb-fss4q" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.193898 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/29304def-4cbe-4b1d-abb4-c7bd11587183-combined-ca-bundle\") pod \"heat-engine-fd5dc85fb-fss4q\" (UID: \"29304def-4cbe-4b1d-abb4-c7bd11587183\") " pod="openstack/heat-engine-fd5dc85fb-fss4q" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.211882 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-66ffb89474-rscfw"] Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.222104 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-fd5dc85fb-fss4q" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.222784 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-66ffb89474-rscfw" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.227857 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.244066 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-66ffb89474-rscfw"] Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.251422 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f7a16be4-8056-4ba8-9720-6503361132f4-ovsdbserver-sb\") pod \"dnsmasq-dns-7d978555f9-krdd7\" (UID: \"f7a16be4-8056-4ba8-9720-6503361132f4\") " pod="openstack/dnsmasq-dns-7d978555f9-krdd7" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.251464 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b704537-db2a-453a-b977-b2a3c31cf61b-config-data-custom\") pod \"heat-cfnapi-55879c458d-ms4f2\" (UID: \"2b704537-db2a-453a-b977-b2a3c31cf61b\") " pod="openstack/heat-cfnapi-55879c458d-ms4f2" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.251518 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7a16be4-8056-4ba8-9720-6503361132f4-config\") pod \"dnsmasq-dns-7d978555f9-krdd7\" (UID: \"f7a16be4-8056-4ba8-9720-6503361132f4\") " pod="openstack/dnsmasq-dns-7d978555f9-krdd7" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.251637 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b704537-db2a-453a-b977-b2a3c31cf61b-combined-ca-bundle\") pod \"heat-cfnapi-55879c458d-ms4f2\" (UID: \"2b704537-db2a-453a-b977-b2a3c31cf61b\") " pod="openstack/heat-cfnapi-55879c458d-ms4f2" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.251667 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7a16be4-8056-4ba8-9720-6503361132f4-dns-svc\") pod \"dnsmasq-dns-7d978555f9-krdd7\" (UID: \"f7a16be4-8056-4ba8-9720-6503361132f4\") " pod="openstack/dnsmasq-dns-7d978555f9-krdd7" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.251708 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc47l\" (UniqueName: \"kubernetes.io/projected/2b704537-db2a-453a-b977-b2a3c31cf61b-kube-api-access-tc47l\") pod \"heat-cfnapi-55879c458d-ms4f2\" (UID: \"2b704537-db2a-453a-b977-b2a3c31cf61b\") " pod="openstack/heat-cfnapi-55879c458d-ms4f2" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.251737 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b704537-db2a-453a-b977-b2a3c31cf61b-config-data\") pod \"heat-cfnapi-55879c458d-ms4f2\" (UID: \"2b704537-db2a-453a-b977-b2a3c31cf61b\") " pod="openstack/heat-cfnapi-55879c458d-ms4f2" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.251758 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f7a16be4-8056-4ba8-9720-6503361132f4-ovsdbserver-nb\") pod \"dnsmasq-dns-7d978555f9-krdd7\" (UID: \"f7a16be4-8056-4ba8-9720-6503361132f4\") " pod="openstack/dnsmasq-dns-7d978555f9-krdd7" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.251779 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf9lp\" (UniqueName: \"kubernetes.io/projected/f7a16be4-8056-4ba8-9720-6503361132f4-kube-api-access-zf9lp\") pod \"dnsmasq-dns-7d978555f9-krdd7\" (UID: \"f7a16be4-8056-4ba8-9720-6503361132f4\") " pod="openstack/dnsmasq-dns-7d978555f9-krdd7" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.251797 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f7a16be4-8056-4ba8-9720-6503361132f4-dns-swift-storage-0\") pod \"dnsmasq-dns-7d978555f9-krdd7\" (UID: \"f7a16be4-8056-4ba8-9720-6503361132f4\") " pod="openstack/dnsmasq-dns-7d978555f9-krdd7" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.252634 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f7a16be4-8056-4ba8-9720-6503361132f4-dns-swift-storage-0\") pod \"dnsmasq-dns-7d978555f9-krdd7\" (UID: \"f7a16be4-8056-4ba8-9720-6503361132f4\") " pod="openstack/dnsmasq-dns-7d978555f9-krdd7" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.253152 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f7a16be4-8056-4ba8-9720-6503361132f4-ovsdbserver-sb\") pod \"dnsmasq-dns-7d978555f9-krdd7\" (UID: \"f7a16be4-8056-4ba8-9720-6503361132f4\") " pod="openstack/dnsmasq-dns-7d978555f9-krdd7" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.256125 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f7a16be4-8056-4ba8-9720-6503361132f4-config\") pod \"dnsmasq-dns-7d978555f9-krdd7\" (UID: \"f7a16be4-8056-4ba8-9720-6503361132f4\") " pod="openstack/dnsmasq-dns-7d978555f9-krdd7" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.256487 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7a16be4-8056-4ba8-9720-6503361132f4-dns-svc\") pod \"dnsmasq-dns-7d978555f9-krdd7\" (UID: \"f7a16be4-8056-4ba8-9720-6503361132f4\") " pod="openstack/dnsmasq-dns-7d978555f9-krdd7" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.260507 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f7a16be4-8056-4ba8-9720-6503361132f4-ovsdbserver-nb\") pod \"dnsmasq-dns-7d978555f9-krdd7\" (UID: \"f7a16be4-8056-4ba8-9720-6503361132f4\") " pod="openstack/dnsmasq-dns-7d978555f9-krdd7" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.287560 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf9lp\" (UniqueName: \"kubernetes.io/projected/f7a16be4-8056-4ba8-9720-6503361132f4-kube-api-access-zf9lp\") pod \"dnsmasq-dns-7d978555f9-krdd7\" (UID: \"f7a16be4-8056-4ba8-9720-6503361132f4\") " pod="openstack/dnsmasq-dns-7d978555f9-krdd7" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.320190 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-krdd7" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.358070 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/323b7b10-963c-41ff-b3ce-69dd3374f746-config-data-custom\") pod \"heat-api-66ffb89474-rscfw\" (UID: \"323b7b10-963c-41ff-b3ce-69dd3374f746\") " pod="openstack/heat-api-66ffb89474-rscfw" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.358256 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b704537-db2a-453a-b977-b2a3c31cf61b-combined-ca-bundle\") pod \"heat-cfnapi-55879c458d-ms4f2\" (UID: \"2b704537-db2a-453a-b977-b2a3c31cf61b\") " pod="openstack/heat-cfnapi-55879c458d-ms4f2" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.358399 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/323b7b10-963c-41ff-b3ce-69dd3374f746-combined-ca-bundle\") pod \"heat-api-66ffb89474-rscfw\" (UID: \"323b7b10-963c-41ff-b3ce-69dd3374f746\") " pod="openstack/heat-api-66ffb89474-rscfw" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.358564 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/323b7b10-963c-41ff-b3ce-69dd3374f746-config-data\") pod \"heat-api-66ffb89474-rscfw\" (UID: \"323b7b10-963c-41ff-b3ce-69dd3374f746\") " pod="openstack/heat-api-66ffb89474-rscfw" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.358738 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc47l\" (UniqueName: \"kubernetes.io/projected/2b704537-db2a-453a-b977-b2a3c31cf61b-kube-api-access-tc47l\") pod \"heat-cfnapi-55879c458d-ms4f2\" (UID: 
\"2b704537-db2a-453a-b977-b2a3c31cf61b\") " pod="openstack/heat-cfnapi-55879c458d-ms4f2" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.359108 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b704537-db2a-453a-b977-b2a3c31cf61b-config-data\") pod \"heat-cfnapi-55879c458d-ms4f2\" (UID: \"2b704537-db2a-453a-b977-b2a3c31cf61b\") " pod="openstack/heat-cfnapi-55879c458d-ms4f2" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.359862 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b704537-db2a-453a-b977-b2a3c31cf61b-config-data-custom\") pod \"heat-cfnapi-55879c458d-ms4f2\" (UID: \"2b704537-db2a-453a-b977-b2a3c31cf61b\") " pod="openstack/heat-cfnapi-55879c458d-ms4f2" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.360333 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psxb6\" (UniqueName: \"kubernetes.io/projected/323b7b10-963c-41ff-b3ce-69dd3374f746-kube-api-access-psxb6\") pod \"heat-api-66ffb89474-rscfw\" (UID: \"323b7b10-963c-41ff-b3ce-69dd3374f746\") " pod="openstack/heat-api-66ffb89474-rscfw" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.362377 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b704537-db2a-453a-b977-b2a3c31cf61b-combined-ca-bundle\") pod \"heat-cfnapi-55879c458d-ms4f2\" (UID: \"2b704537-db2a-453a-b977-b2a3c31cf61b\") " pod="openstack/heat-cfnapi-55879c458d-ms4f2" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.363676 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b704537-db2a-453a-b977-b2a3c31cf61b-config-data\") pod \"heat-cfnapi-55879c458d-ms4f2\" (UID: \"2b704537-db2a-453a-b977-b2a3c31cf61b\") " 
pod="openstack/heat-cfnapi-55879c458d-ms4f2" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.368745 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b704537-db2a-453a-b977-b2a3c31cf61b-config-data-custom\") pod \"heat-cfnapi-55879c458d-ms4f2\" (UID: \"2b704537-db2a-453a-b977-b2a3c31cf61b\") " pod="openstack/heat-cfnapi-55879c458d-ms4f2" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.410670 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc47l\" (UniqueName: \"kubernetes.io/projected/2b704537-db2a-453a-b977-b2a3c31cf61b-kube-api-access-tc47l\") pod \"heat-cfnapi-55879c458d-ms4f2\" (UID: \"2b704537-db2a-453a-b977-b2a3c31cf61b\") " pod="openstack/heat-cfnapi-55879c458d-ms4f2" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.463853 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/323b7b10-963c-41ff-b3ce-69dd3374f746-config-data-custom\") pod \"heat-api-66ffb89474-rscfw\" (UID: \"323b7b10-963c-41ff-b3ce-69dd3374f746\") " pod="openstack/heat-api-66ffb89474-rscfw" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.463901 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/323b7b10-963c-41ff-b3ce-69dd3374f746-combined-ca-bundle\") pod \"heat-api-66ffb89474-rscfw\" (UID: \"323b7b10-963c-41ff-b3ce-69dd3374f746\") " pod="openstack/heat-api-66ffb89474-rscfw" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.463937 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/323b7b10-963c-41ff-b3ce-69dd3374f746-config-data\") pod \"heat-api-66ffb89474-rscfw\" (UID: \"323b7b10-963c-41ff-b3ce-69dd3374f746\") " pod="openstack/heat-api-66ffb89474-rscfw" Nov 29 08:06:41 crc 
kubenswrapper[4795]: I1129 08:06:41.464083 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psxb6\" (UniqueName: \"kubernetes.io/projected/323b7b10-963c-41ff-b3ce-69dd3374f746-kube-api-access-psxb6\") pod \"heat-api-66ffb89474-rscfw\" (UID: \"323b7b10-963c-41ff-b3ce-69dd3374f746\") " pod="openstack/heat-api-66ffb89474-rscfw" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.468626 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/323b7b10-963c-41ff-b3ce-69dd3374f746-config-data-custom\") pod \"heat-api-66ffb89474-rscfw\" (UID: \"323b7b10-963c-41ff-b3ce-69dd3374f746\") " pod="openstack/heat-api-66ffb89474-rscfw" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.469300 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/323b7b10-963c-41ff-b3ce-69dd3374f746-combined-ca-bundle\") pod \"heat-api-66ffb89474-rscfw\" (UID: \"323b7b10-963c-41ff-b3ce-69dd3374f746\") " pod="openstack/heat-api-66ffb89474-rscfw" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.473787 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/323b7b10-963c-41ff-b3ce-69dd3374f746-config-data\") pod \"heat-api-66ffb89474-rscfw\" (UID: \"323b7b10-963c-41ff-b3ce-69dd3374f746\") " pod="openstack/heat-api-66ffb89474-rscfw" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.488396 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psxb6\" (UniqueName: \"kubernetes.io/projected/323b7b10-963c-41ff-b3ce-69dd3374f746-kube-api-access-psxb6\") pod \"heat-api-66ffb89474-rscfw\" (UID: \"323b7b10-963c-41ff-b3ce-69dd3374f746\") " pod="openstack/heat-api-66ffb89474-rscfw" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.674326 4795 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/heat-api-66ffb89474-rscfw" Nov 29 08:06:41 crc kubenswrapper[4795]: I1129 08:06:41.700076 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-55879c458d-ms4f2" Nov 29 08:06:44 crc kubenswrapper[4795]: I1129 08:06:44.009479 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-66ffb89474-rscfw"] Nov 29 08:06:44 crc kubenswrapper[4795]: W1129 08:06:44.009835 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod323b7b10_963c_41ff_b3ce_69dd3374f746.slice/crio-3c1eb86ee26e4e0fa785d977345d8a1e43035b8d09d105743f4b06c4b6200429 WatchSource:0}: Error finding container 3c1eb86ee26e4e0fa785d977345d8a1e43035b8d09d105743f4b06c4b6200429: Status 404 returned error can't find the container with id 3c1eb86ee26e4e0fa785d977345d8a1e43035b8d09d105743f4b06c4b6200429 Nov 29 08:06:44 crc kubenswrapper[4795]: I1129 08:06:44.172534 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-fd5dc85fb-fss4q"] Nov 29 08:06:44 crc kubenswrapper[4795]: I1129 08:06:44.186566 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-55879c458d-ms4f2"] Nov 29 08:06:44 crc kubenswrapper[4795]: W1129 08:06:44.192528 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29304def_4cbe_4b1d_abb4_c7bd11587183.slice/crio-4099c3d37028341066a096ffb68b3b1be1a54bc18cc4dd7ac258ec1f6900ebf9 WatchSource:0}: Error finding container 4099c3d37028341066a096ffb68b3b1be1a54bc18cc4dd7ac258ec1f6900ebf9: Status 404 returned error can't find the container with id 4099c3d37028341066a096ffb68b3b1be1a54bc18cc4dd7ac258ec1f6900ebf9 Nov 29 08:06:44 crc kubenswrapper[4795]: W1129 08:06:44.197736 4795 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b704537_db2a_453a_b977_b2a3c31cf61b.slice/crio-825815a76bf539ea3755a77f8bfb5bb233ff18265583b05cee48fe3eb7f1bf38 WatchSource:0}: Error finding container 825815a76bf539ea3755a77f8bfb5bb233ff18265583b05cee48fe3eb7f1bf38: Status 404 returned error can't find the container with id 825815a76bf539ea3755a77f8bfb5bb233ff18265583b05cee48fe3eb7f1bf38 Nov 29 08:06:44 crc kubenswrapper[4795]: I1129 08:06:44.202657 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-krdd7"] Nov 29 08:06:44 crc kubenswrapper[4795]: I1129 08:06:44.384136 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-66ffb89474-rscfw" event={"ID":"323b7b10-963c-41ff-b3ce-69dd3374f746","Type":"ContainerStarted","Data":"3c1eb86ee26e4e0fa785d977345d8a1e43035b8d09d105743f4b06c4b6200429"} Nov 29 08:06:44 crc kubenswrapper[4795]: I1129 08:06:44.388053 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-55879c458d-ms4f2" event={"ID":"2b704537-db2a-453a-b977-b2a3c31cf61b","Type":"ContainerStarted","Data":"825815a76bf539ea3755a77f8bfb5bb233ff18265583b05cee48fe3eb7f1bf38"} Nov 29 08:06:44 crc kubenswrapper[4795]: I1129 08:06:44.390194 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-krdd7" event={"ID":"f7a16be4-8056-4ba8-9720-6503361132f4","Type":"ContainerStarted","Data":"ecca4bbca4279fc7e82c5b17dd4b972af90ec196abb22353db4fd6d0cff6865d"} Nov 29 08:06:44 crc kubenswrapper[4795]: I1129 08:06:44.392271 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-fd5dc85fb-fss4q" event={"ID":"29304def-4cbe-4b1d-abb4-c7bd11587183","Type":"ContainerStarted","Data":"4099c3d37028341066a096ffb68b3b1be1a54bc18cc4dd7ac258ec1f6900ebf9"} Nov 29 08:06:44 crc kubenswrapper[4795]: I1129 08:06:44.395313 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" 
event={"ID":"de5e89b3-d4a1-4ef0-bf3a-814cb09d5ade","Type":"ContainerStarted","Data":"6afdd5aace837974eabecbc6965222695bbb2865ad491e9ffb7a1656bffb57d3"} Nov 29 08:06:44 crc kubenswrapper[4795]: I1129 08:06:44.412771 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d","Type":"ContainerStarted","Data":"3b9ff3b26fe53f8c56e9caee6635fb6cc922fd335421bb87916347a41548d813"} Nov 29 08:06:44 crc kubenswrapper[4795]: I1129 08:06:44.413006 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="feb0fddc-7d3b-4aec-bc39-a9f36e23d61d" containerName="ceilometer-central-agent" containerID="cri-o://9b337f0b4a586b4a0d1fb02ca546492820bae61736791b1c2a523ff1b1866713" gracePeriod=30 Nov 29 08:06:44 crc kubenswrapper[4795]: I1129 08:06:44.414881 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 29 08:06:44 crc kubenswrapper[4795]: I1129 08:06:44.414973 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="feb0fddc-7d3b-4aec-bc39-a9f36e23d61d" containerName="proxy-httpd" containerID="cri-o://3b9ff3b26fe53f8c56e9caee6635fb6cc922fd335421bb87916347a41548d813" gracePeriod=30 Nov 29 08:06:44 crc kubenswrapper[4795]: I1129 08:06:44.415027 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="feb0fddc-7d3b-4aec-bc39-a9f36e23d61d" containerName="sg-core" containerID="cri-o://56fec66f0900f9de49b4ae67aa68059931b4f857f338a3a9649da3926ded1f26" gracePeriod=30 Nov 29 08:06:44 crc kubenswrapper[4795]: I1129 08:06:44.415093 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="feb0fddc-7d3b-4aec-bc39-a9f36e23d61d" containerName="ceilometer-notification-agent" containerID="cri-o://d02ef8c2d9b8e692689ea53b5d36d4aba55d1ca6d7e37f4309deb1a640ba6063" 
gracePeriod=30 Nov 29 08:06:44 crc kubenswrapper[4795]: I1129 08:06:44.460243 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.568139743 podStartE2EDuration="18.460219803s" podCreationTimestamp="2025-11-29 08:06:26 +0000 UTC" firstStartedPulling="2025-11-29 08:06:27.519994179 +0000 UTC m=+1633.495569969" lastFinishedPulling="2025-11-29 08:06:43.412074239 +0000 UTC m=+1649.387650029" observedRunningTime="2025-11-29 08:06:44.425467878 +0000 UTC m=+1650.401043688" watchObservedRunningTime="2025-11-29 08:06:44.460219803 +0000 UTC m=+1650.435795593" Nov 29 08:06:44 crc kubenswrapper[4795]: I1129 08:06:44.473611 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.28777645 podStartE2EDuration="13.473579801s" podCreationTimestamp="2025-11-29 08:06:31 +0000 UTC" firstStartedPulling="2025-11-29 08:06:32.218763756 +0000 UTC m=+1638.194339546" lastFinishedPulling="2025-11-29 08:06:43.404567107 +0000 UTC m=+1649.380142897" observedRunningTime="2025-11-29 08:06:44.460976224 +0000 UTC m=+1650.436552014" watchObservedRunningTime="2025-11-29 08:06:44.473579801 +0000 UTC m=+1650.449155591" Nov 29 08:06:45 crc kubenswrapper[4795]: I1129 08:06:45.430317 4795 generic.go:334] "Generic (PLEG): container finished" podID="f7a16be4-8056-4ba8-9720-6503361132f4" containerID="e22e18761fe241c53e28b6dcb8bb1bb770d06a2dc97c916613e5c05a2fc74460" exitCode=0 Nov 29 08:06:45 crc kubenswrapper[4795]: I1129 08:06:45.430425 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-krdd7" event={"ID":"f7a16be4-8056-4ba8-9720-6503361132f4","Type":"ContainerDied","Data":"e22e18761fe241c53e28b6dcb8bb1bb770d06a2dc97c916613e5c05a2fc74460"} Nov 29 08:06:45 crc kubenswrapper[4795]: I1129 08:06:45.435431 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-fd5dc85fb-fss4q" 
event={"ID":"29304def-4cbe-4b1d-abb4-c7bd11587183","Type":"ContainerStarted","Data":"52236dc856c9ff58b724e49ea15972d1a4376f5bffb0f668fd6fc0b638a4778d"} Nov 29 08:06:45 crc kubenswrapper[4795]: I1129 08:06:45.435868 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-fd5dc85fb-fss4q" Nov 29 08:06:45 crc kubenswrapper[4795]: I1129 08:06:45.438666 4795 generic.go:334] "Generic (PLEG): container finished" podID="feb0fddc-7d3b-4aec-bc39-a9f36e23d61d" containerID="3b9ff3b26fe53f8c56e9caee6635fb6cc922fd335421bb87916347a41548d813" exitCode=0 Nov 29 08:06:45 crc kubenswrapper[4795]: I1129 08:06:45.438705 4795 generic.go:334] "Generic (PLEG): container finished" podID="feb0fddc-7d3b-4aec-bc39-a9f36e23d61d" containerID="56fec66f0900f9de49b4ae67aa68059931b4f857f338a3a9649da3926ded1f26" exitCode=2 Nov 29 08:06:45 crc kubenswrapper[4795]: I1129 08:06:45.438713 4795 generic.go:334] "Generic (PLEG): container finished" podID="feb0fddc-7d3b-4aec-bc39-a9f36e23d61d" containerID="d02ef8c2d9b8e692689ea53b5d36d4aba55d1ca6d7e37f4309deb1a640ba6063" exitCode=0 Nov 29 08:06:45 crc kubenswrapper[4795]: I1129 08:06:45.438723 4795 generic.go:334] "Generic (PLEG): container finished" podID="feb0fddc-7d3b-4aec-bc39-a9f36e23d61d" containerID="9b337f0b4a586b4a0d1fb02ca546492820bae61736791b1c2a523ff1b1866713" exitCode=0 Nov 29 08:06:45 crc kubenswrapper[4795]: I1129 08:06:45.438821 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d","Type":"ContainerDied","Data":"3b9ff3b26fe53f8c56e9caee6635fb6cc922fd335421bb87916347a41548d813"} Nov 29 08:06:45 crc kubenswrapper[4795]: I1129 08:06:45.438865 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d","Type":"ContainerDied","Data":"56fec66f0900f9de49b4ae67aa68059931b4f857f338a3a9649da3926ded1f26"} Nov 29 08:06:45 crc kubenswrapper[4795]: I1129 
08:06:45.438875 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d","Type":"ContainerDied","Data":"d02ef8c2d9b8e692689ea53b5d36d4aba55d1ca6d7e37f4309deb1a640ba6063"} Nov 29 08:06:45 crc kubenswrapper[4795]: I1129 08:06:45.438884 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d","Type":"ContainerDied","Data":"9b337f0b4a586b4a0d1fb02ca546492820bae61736791b1c2a523ff1b1866713"} Nov 29 08:06:45 crc kubenswrapper[4795]: I1129 08:06:45.484565 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-fd5dc85fb-fss4q" podStartSLOduration=5.484539601 podStartE2EDuration="5.484539601s" podCreationTimestamp="2025-11-29 08:06:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:06:45.465896633 +0000 UTC m=+1651.441472423" watchObservedRunningTime="2025-11-29 08:06:45.484539601 +0000 UTC m=+1651.460115391" Nov 29 08:06:46 crc kubenswrapper[4795]: I1129 08:06:46.175237 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:06:46 crc kubenswrapper[4795]: I1129 08:06:46.303191 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-run-httpd\") pod \"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d\" (UID: \"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d\") " Nov 29 08:06:46 crc kubenswrapper[4795]: I1129 08:06:46.303269 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-log-httpd\") pod \"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d\" (UID: \"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d\") " Nov 29 08:06:46 crc kubenswrapper[4795]: I1129 08:06:46.303290 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqmk9\" (UniqueName: \"kubernetes.io/projected/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-kube-api-access-pqmk9\") pod \"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d\" (UID: \"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d\") " Nov 29 08:06:46 crc kubenswrapper[4795]: I1129 08:06:46.303330 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-scripts\") pod \"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d\" (UID: \"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d\") " Nov 29 08:06:46 crc kubenswrapper[4795]: I1129 08:06:46.303404 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-combined-ca-bundle\") pod \"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d\" (UID: \"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d\") " Nov 29 08:06:46 crc kubenswrapper[4795]: I1129 08:06:46.303444 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-sg-core-conf-yaml\") pod \"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d\" (UID: \"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d\") " Nov 29 08:06:46 crc kubenswrapper[4795]: I1129 08:06:46.303459 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-config-data\") pod \"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d\" (UID: \"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d\") " Nov 29 08:06:46 crc kubenswrapper[4795]: I1129 08:06:46.303808 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "feb0fddc-7d3b-4aec-bc39-a9f36e23d61d" (UID: "feb0fddc-7d3b-4aec-bc39-a9f36e23d61d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:06:46 crc kubenswrapper[4795]: I1129 08:06:46.303959 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "feb0fddc-7d3b-4aec-bc39-a9f36e23d61d" (UID: "feb0fddc-7d3b-4aec-bc39-a9f36e23d61d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:06:46 crc kubenswrapper[4795]: I1129 08:06:46.304035 4795 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:46 crc kubenswrapper[4795]: I1129 08:06:46.349253 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-scripts" (OuterVolumeSpecName: "scripts") pod "feb0fddc-7d3b-4aec-bc39-a9f36e23d61d" (UID: "feb0fddc-7d3b-4aec-bc39-a9f36e23d61d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:06:46 crc kubenswrapper[4795]: I1129 08:06:46.355834 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "feb0fddc-7d3b-4aec-bc39-a9f36e23d61d" (UID: "feb0fddc-7d3b-4aec-bc39-a9f36e23d61d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:06:46 crc kubenswrapper[4795]: I1129 08:06:46.355921 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-kube-api-access-pqmk9" (OuterVolumeSpecName: "kube-api-access-pqmk9") pod "feb0fddc-7d3b-4aec-bc39-a9f36e23d61d" (UID: "feb0fddc-7d3b-4aec-bc39-a9f36e23d61d"). InnerVolumeSpecName "kube-api-access-pqmk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:06:46 crc kubenswrapper[4795]: I1129 08:06:46.406043 4795 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:46 crc kubenswrapper[4795]: I1129 08:06:46.406072 4795 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:46 crc kubenswrapper[4795]: I1129 08:06:46.406081 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqmk9\" (UniqueName: \"kubernetes.io/projected/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-kube-api-access-pqmk9\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:46 crc kubenswrapper[4795]: I1129 08:06:46.406091 4795 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-scripts\") on node 
\"crc\" DevicePath \"\"" Nov 29 08:06:46 crc kubenswrapper[4795]: I1129 08:06:46.494477 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:06:46 crc kubenswrapper[4795]: I1129 08:06:46.495032 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"feb0fddc-7d3b-4aec-bc39-a9f36e23d61d","Type":"ContainerDied","Data":"cb7fe4aa31fac838621bbd710155725cdcb62de954e8a4058eb9269cc5d3105b"} Nov 29 08:06:46 crc kubenswrapper[4795]: I1129 08:06:46.495067 4795 scope.go:117] "RemoveContainer" containerID="3b9ff3b26fe53f8c56e9caee6635fb6cc922fd335421bb87916347a41548d813" Nov 29 08:06:46 crc kubenswrapper[4795]: I1129 08:06:46.540154 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "feb0fddc-7d3b-4aec-bc39-a9f36e23d61d" (UID: "feb0fddc-7d3b-4aec-bc39-a9f36e23d61d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:06:46 crc kubenswrapper[4795]: I1129 08:06:46.554054 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-config-data" (OuterVolumeSpecName: "config-data") pod "feb0fddc-7d3b-4aec-bc39-a9f36e23d61d" (UID: "feb0fddc-7d3b-4aec-bc39-a9f36e23d61d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:06:46 crc kubenswrapper[4795]: I1129 08:06:46.613539 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:46 crc kubenswrapper[4795]: I1129 08:06:46.613566 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:46 crc kubenswrapper[4795]: I1129 08:06:46.859049 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:06:46 crc kubenswrapper[4795]: I1129 08:06:46.870232 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:06:46 crc kubenswrapper[4795]: I1129 08:06:46.895866 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:06:46 crc kubenswrapper[4795]: E1129 08:06:46.896441 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feb0fddc-7d3b-4aec-bc39-a9f36e23d61d" containerName="ceilometer-notification-agent" Nov 29 08:06:46 crc kubenswrapper[4795]: I1129 08:06:46.896466 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="feb0fddc-7d3b-4aec-bc39-a9f36e23d61d" containerName="ceilometer-notification-agent" Nov 29 08:06:46 crc kubenswrapper[4795]: E1129 08:06:46.896498 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feb0fddc-7d3b-4aec-bc39-a9f36e23d61d" containerName="proxy-httpd" Nov 29 08:06:46 crc kubenswrapper[4795]: I1129 08:06:46.896507 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="feb0fddc-7d3b-4aec-bc39-a9f36e23d61d" containerName="proxy-httpd" Nov 29 08:06:46 crc kubenswrapper[4795]: E1129 08:06:46.896528 4795 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="feb0fddc-7d3b-4aec-bc39-a9f36e23d61d" containerName="sg-core" Nov 29 08:06:46 crc kubenswrapper[4795]: I1129 08:06:46.896535 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="feb0fddc-7d3b-4aec-bc39-a9f36e23d61d" containerName="sg-core" Nov 29 08:06:46 crc kubenswrapper[4795]: E1129 08:06:46.896563 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feb0fddc-7d3b-4aec-bc39-a9f36e23d61d" containerName="ceilometer-central-agent" Nov 29 08:06:46 crc kubenswrapper[4795]: I1129 08:06:46.896574 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="feb0fddc-7d3b-4aec-bc39-a9f36e23d61d" containerName="ceilometer-central-agent" Nov 29 08:06:46 crc kubenswrapper[4795]: I1129 08:06:46.896829 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="feb0fddc-7d3b-4aec-bc39-a9f36e23d61d" containerName="ceilometer-central-agent" Nov 29 08:06:46 crc kubenswrapper[4795]: I1129 08:06:46.896864 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="feb0fddc-7d3b-4aec-bc39-a9f36e23d61d" containerName="proxy-httpd" Nov 29 08:06:46 crc kubenswrapper[4795]: I1129 08:06:46.896886 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="feb0fddc-7d3b-4aec-bc39-a9f36e23d61d" containerName="sg-core" Nov 29 08:06:46 crc kubenswrapper[4795]: I1129 08:06:46.896901 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="feb0fddc-7d3b-4aec-bc39-a9f36e23d61d" containerName="ceilometer-notification-agent" Nov 29 08:06:46 crc kubenswrapper[4795]: I1129 08:06:46.898974 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:06:46 crc kubenswrapper[4795]: I1129 08:06:46.902994 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 29 08:06:46 crc kubenswrapper[4795]: I1129 08:06:46.903038 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 29 08:06:46 crc kubenswrapper[4795]: I1129 08:06:46.924745 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:06:47 crc kubenswrapper[4795]: I1129 08:06:47.022763 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/569960ad-ab7f-454b-b142-dae24e00340f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"569960ad-ab7f-454b-b142-dae24e00340f\") " pod="openstack/ceilometer-0" Nov 29 08:06:47 crc kubenswrapper[4795]: I1129 08:06:47.022867 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/569960ad-ab7f-454b-b142-dae24e00340f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"569960ad-ab7f-454b-b142-dae24e00340f\") " pod="openstack/ceilometer-0" Nov 29 08:06:47 crc kubenswrapper[4795]: I1129 08:06:47.022930 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/569960ad-ab7f-454b-b142-dae24e00340f-scripts\") pod \"ceilometer-0\" (UID: \"569960ad-ab7f-454b-b142-dae24e00340f\") " pod="openstack/ceilometer-0" Nov 29 08:06:47 crc kubenswrapper[4795]: I1129 08:06:47.023126 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/569960ad-ab7f-454b-b142-dae24e00340f-run-httpd\") pod \"ceilometer-0\" (UID: \"569960ad-ab7f-454b-b142-dae24e00340f\") " 
pod="openstack/ceilometer-0" Nov 29 08:06:47 crc kubenswrapper[4795]: I1129 08:06:47.023296 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/569960ad-ab7f-454b-b142-dae24e00340f-log-httpd\") pod \"ceilometer-0\" (UID: \"569960ad-ab7f-454b-b142-dae24e00340f\") " pod="openstack/ceilometer-0" Nov 29 08:06:47 crc kubenswrapper[4795]: I1129 08:06:47.023451 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sk8q\" (UniqueName: \"kubernetes.io/projected/569960ad-ab7f-454b-b142-dae24e00340f-kube-api-access-4sk8q\") pod \"ceilometer-0\" (UID: \"569960ad-ab7f-454b-b142-dae24e00340f\") " pod="openstack/ceilometer-0" Nov 29 08:06:47 crc kubenswrapper[4795]: I1129 08:06:47.023507 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/569960ad-ab7f-454b-b142-dae24e00340f-config-data\") pod \"ceilometer-0\" (UID: \"569960ad-ab7f-454b-b142-dae24e00340f\") " pod="openstack/ceilometer-0" Nov 29 08:06:47 crc kubenswrapper[4795]: I1129 08:06:47.108034 4795 scope.go:117] "RemoveContainer" containerID="56fec66f0900f9de49b4ae67aa68059931b4f857f338a3a9649da3926ded1f26" Nov 29 08:06:47 crc kubenswrapper[4795]: I1129 08:06:47.126040 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/569960ad-ab7f-454b-b142-dae24e00340f-scripts\") pod \"ceilometer-0\" (UID: \"569960ad-ab7f-454b-b142-dae24e00340f\") " pod="openstack/ceilometer-0" Nov 29 08:06:47 crc kubenswrapper[4795]: I1129 08:06:47.126106 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/569960ad-ab7f-454b-b142-dae24e00340f-run-httpd\") pod \"ceilometer-0\" (UID: \"569960ad-ab7f-454b-b142-dae24e00340f\") " 
pod="openstack/ceilometer-0" Nov 29 08:06:47 crc kubenswrapper[4795]: I1129 08:06:47.126157 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/569960ad-ab7f-454b-b142-dae24e00340f-log-httpd\") pod \"ceilometer-0\" (UID: \"569960ad-ab7f-454b-b142-dae24e00340f\") " pod="openstack/ceilometer-0" Nov 29 08:06:47 crc kubenswrapper[4795]: I1129 08:06:47.126199 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sk8q\" (UniqueName: \"kubernetes.io/projected/569960ad-ab7f-454b-b142-dae24e00340f-kube-api-access-4sk8q\") pod \"ceilometer-0\" (UID: \"569960ad-ab7f-454b-b142-dae24e00340f\") " pod="openstack/ceilometer-0" Nov 29 08:06:47 crc kubenswrapper[4795]: I1129 08:06:47.126224 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/569960ad-ab7f-454b-b142-dae24e00340f-config-data\") pod \"ceilometer-0\" (UID: \"569960ad-ab7f-454b-b142-dae24e00340f\") " pod="openstack/ceilometer-0" Nov 29 08:06:47 crc kubenswrapper[4795]: I1129 08:06:47.126291 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/569960ad-ab7f-454b-b142-dae24e00340f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"569960ad-ab7f-454b-b142-dae24e00340f\") " pod="openstack/ceilometer-0" Nov 29 08:06:47 crc kubenswrapper[4795]: I1129 08:06:47.126321 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/569960ad-ab7f-454b-b142-dae24e00340f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"569960ad-ab7f-454b-b142-dae24e00340f\") " pod="openstack/ceilometer-0" Nov 29 08:06:47 crc kubenswrapper[4795]: I1129 08:06:47.127024 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/569960ad-ab7f-454b-b142-dae24e00340f-run-httpd\") pod \"ceilometer-0\" (UID: \"569960ad-ab7f-454b-b142-dae24e00340f\") " pod="openstack/ceilometer-0" Nov 29 08:06:47 crc kubenswrapper[4795]: I1129 08:06:47.127109 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/569960ad-ab7f-454b-b142-dae24e00340f-log-httpd\") pod \"ceilometer-0\" (UID: \"569960ad-ab7f-454b-b142-dae24e00340f\") " pod="openstack/ceilometer-0" Nov 29 08:06:47 crc kubenswrapper[4795]: I1129 08:06:47.130713 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/569960ad-ab7f-454b-b142-dae24e00340f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"569960ad-ab7f-454b-b142-dae24e00340f\") " pod="openstack/ceilometer-0" Nov 29 08:06:47 crc kubenswrapper[4795]: I1129 08:06:47.131339 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/569960ad-ab7f-454b-b142-dae24e00340f-config-data\") pod \"ceilometer-0\" (UID: \"569960ad-ab7f-454b-b142-dae24e00340f\") " pod="openstack/ceilometer-0" Nov 29 08:06:47 crc kubenswrapper[4795]: I1129 08:06:47.133720 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/569960ad-ab7f-454b-b142-dae24e00340f-scripts\") pod \"ceilometer-0\" (UID: \"569960ad-ab7f-454b-b142-dae24e00340f\") " pod="openstack/ceilometer-0" Nov 29 08:06:47 crc kubenswrapper[4795]: I1129 08:06:47.134016 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/569960ad-ab7f-454b-b142-dae24e00340f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"569960ad-ab7f-454b-b142-dae24e00340f\") " pod="openstack/ceilometer-0" Nov 29 08:06:47 crc kubenswrapper[4795]: I1129 08:06:47.144039 4795 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4sk8q\" (UniqueName: \"kubernetes.io/projected/569960ad-ab7f-454b-b142-dae24e00340f-kube-api-access-4sk8q\") pod \"ceilometer-0\" (UID: \"569960ad-ab7f-454b-b142-dae24e00340f\") " pod="openstack/ceilometer-0" Nov 29 08:06:47 crc kubenswrapper[4795]: I1129 08:06:47.177283 4795 scope.go:117] "RemoveContainer" containerID="d02ef8c2d9b8e692689ea53b5d36d4aba55d1ca6d7e37f4309deb1a640ba6063" Nov 29 08:06:47 crc kubenswrapper[4795]: I1129 08:06:47.211683 4795 scope.go:117] "RemoveContainer" containerID="9b337f0b4a586b4a0d1fb02ca546492820bae61736791b1c2a523ff1b1866713" Nov 29 08:06:47 crc kubenswrapper[4795]: I1129 08:06:47.228995 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:06:47 crc kubenswrapper[4795]: I1129 08:06:47.518928 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-krdd7" event={"ID":"f7a16be4-8056-4ba8-9720-6503361132f4","Type":"ContainerStarted","Data":"f90c34869a371cd02f74b97dc6ac1d2869ecf9d7bf1f0bd98b84d1a688640cfe"} Nov 29 08:06:47 crc kubenswrapper[4795]: I1129 08:06:47.519437 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d978555f9-krdd7" Nov 29 08:06:47 crc kubenswrapper[4795]: I1129 08:06:47.559688 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d978555f9-krdd7" podStartSLOduration=7.559669717 podStartE2EDuration="7.559669717s" podCreationTimestamp="2025-11-29 08:06:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:06:47.545050743 +0000 UTC m=+1653.520626533" watchObservedRunningTime="2025-11-29 08:06:47.559669717 +0000 UTC m=+1653.535245507" Nov 29 08:06:47 crc kubenswrapper[4795]: I1129 08:06:47.845813 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ceilometer-0"] Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.027497 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-746fb69fd5-n8596"] Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.029857 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-746fb69fd5-n8596" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.048999 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-746fb69fd5-n8596"] Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.089744 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6bcb7f45-v58x5"] Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.091519 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6bcb7f45-v58x5" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.151062 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-565648db59-vzdgj"] Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.152755 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-565648db59-vzdgj" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.168136 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6bcb7f45-v58x5"] Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.177057 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k64sf\" (UniqueName: \"kubernetes.io/projected/3d1c661f-8246-4ec2-96d0-13feaaaaf35a-kube-api-access-k64sf\") pod \"heat-cfnapi-6bcb7f45-v58x5\" (UID: \"3d1c661f-8246-4ec2-96d0-13feaaaaf35a\") " pod="openstack/heat-cfnapi-6bcb7f45-v58x5" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.177203 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d1c661f-8246-4ec2-96d0-13feaaaaf35a-config-data-custom\") pod \"heat-cfnapi-6bcb7f45-v58x5\" (UID: \"3d1c661f-8246-4ec2-96d0-13feaaaaf35a\") " pod="openstack/heat-cfnapi-6bcb7f45-v58x5" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.177668 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a-config-data-custom\") pod \"heat-engine-746fb69fd5-n8596\" (UID: \"c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a\") " pod="openstack/heat-engine-746fb69fd5-n8596" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.177770 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a-combined-ca-bundle\") pod \"heat-engine-746fb69fd5-n8596\" (UID: \"c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a\") " pod="openstack/heat-engine-746fb69fd5-n8596" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.178168 4795 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d1c661f-8246-4ec2-96d0-13feaaaaf35a-config-data\") pod \"heat-cfnapi-6bcb7f45-v58x5\" (UID: \"3d1c661f-8246-4ec2-96d0-13feaaaaf35a\") " pod="openstack/heat-cfnapi-6bcb7f45-v58x5" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.178274 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a-config-data\") pod \"heat-engine-746fb69fd5-n8596\" (UID: \"c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a\") " pod="openstack/heat-engine-746fb69fd5-n8596" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.178340 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d1c661f-8246-4ec2-96d0-13feaaaaf35a-combined-ca-bundle\") pod \"heat-cfnapi-6bcb7f45-v58x5\" (UID: \"3d1c661f-8246-4ec2-96d0-13feaaaaf35a\") " pod="openstack/heat-cfnapi-6bcb7f45-v58x5" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.178392 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dn5f\" (UniqueName: \"kubernetes.io/projected/c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a-kube-api-access-4dn5f\") pod \"heat-engine-746fb69fd5-n8596\" (UID: \"c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a\") " pod="openstack/heat-engine-746fb69fd5-n8596" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.193277 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-565648db59-vzdgj"] Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.280003 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a930422-6c40-4a1a-963d-126d16f160a2-combined-ca-bundle\") pod 
\"heat-api-565648db59-vzdgj\" (UID: \"5a930422-6c40-4a1a-963d-126d16f160a2\") " pod="openstack/heat-api-565648db59-vzdgj" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.280059 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k64sf\" (UniqueName: \"kubernetes.io/projected/3d1c661f-8246-4ec2-96d0-13feaaaaf35a-kube-api-access-k64sf\") pod \"heat-cfnapi-6bcb7f45-v58x5\" (UID: \"3d1c661f-8246-4ec2-96d0-13feaaaaf35a\") " pod="openstack/heat-cfnapi-6bcb7f45-v58x5" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.280112 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d1c661f-8246-4ec2-96d0-13feaaaaf35a-config-data-custom\") pod \"heat-cfnapi-6bcb7f45-v58x5\" (UID: \"3d1c661f-8246-4ec2-96d0-13feaaaaf35a\") " pod="openstack/heat-cfnapi-6bcb7f45-v58x5" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.280152 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a-config-data-custom\") pod \"heat-engine-746fb69fd5-n8596\" (UID: \"c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a\") " pod="openstack/heat-engine-746fb69fd5-n8596" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.280185 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a930422-6c40-4a1a-963d-126d16f160a2-config-data\") pod \"heat-api-565648db59-vzdgj\" (UID: \"5a930422-6c40-4a1a-963d-126d16f160a2\") " pod="openstack/heat-api-565648db59-vzdgj" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.280223 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a-combined-ca-bundle\") pod \"heat-engine-746fb69fd5-n8596\" 
(UID: \"c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a\") " pod="openstack/heat-engine-746fb69fd5-n8596" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.280297 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a930422-6c40-4a1a-963d-126d16f160a2-config-data-custom\") pod \"heat-api-565648db59-vzdgj\" (UID: \"5a930422-6c40-4a1a-963d-126d16f160a2\") " pod="openstack/heat-api-565648db59-vzdgj" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.280328 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d1c661f-8246-4ec2-96d0-13feaaaaf35a-config-data\") pod \"heat-cfnapi-6bcb7f45-v58x5\" (UID: \"3d1c661f-8246-4ec2-96d0-13feaaaaf35a\") " pod="openstack/heat-cfnapi-6bcb7f45-v58x5" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.280349 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a-config-data\") pod \"heat-engine-746fb69fd5-n8596\" (UID: \"c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a\") " pod="openstack/heat-engine-746fb69fd5-n8596" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.280373 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d1c661f-8246-4ec2-96d0-13feaaaaf35a-combined-ca-bundle\") pod \"heat-cfnapi-6bcb7f45-v58x5\" (UID: \"3d1c661f-8246-4ec2-96d0-13feaaaaf35a\") " pod="openstack/heat-cfnapi-6bcb7f45-v58x5" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.280413 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dn5f\" (UniqueName: \"kubernetes.io/projected/c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a-kube-api-access-4dn5f\") pod \"heat-engine-746fb69fd5-n8596\" (UID: 
\"c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a\") " pod="openstack/heat-engine-746fb69fd5-n8596" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.280455 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xmxs\" (UniqueName: \"kubernetes.io/projected/5a930422-6c40-4a1a-963d-126d16f160a2-kube-api-access-7xmxs\") pod \"heat-api-565648db59-vzdgj\" (UID: \"5a930422-6c40-4a1a-963d-126d16f160a2\") " pod="openstack/heat-api-565648db59-vzdgj" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.305544 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a-config-data-custom\") pod \"heat-engine-746fb69fd5-n8596\" (UID: \"c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a\") " pod="openstack/heat-engine-746fb69fd5-n8596" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.306914 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a-config-data\") pod \"heat-engine-746fb69fd5-n8596\" (UID: \"c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a\") " pod="openstack/heat-engine-746fb69fd5-n8596" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.307567 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d1c661f-8246-4ec2-96d0-13feaaaaf35a-config-data-custom\") pod \"heat-cfnapi-6bcb7f45-v58x5\" (UID: \"3d1c661f-8246-4ec2-96d0-13feaaaaf35a\") " pod="openstack/heat-cfnapi-6bcb7f45-v58x5" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.307763 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dn5f\" (UniqueName: \"kubernetes.io/projected/c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a-kube-api-access-4dn5f\") pod \"heat-engine-746fb69fd5-n8596\" (UID: \"c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a\") " 
pod="openstack/heat-engine-746fb69fd5-n8596" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.307825 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a-combined-ca-bundle\") pod \"heat-engine-746fb69fd5-n8596\" (UID: \"c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a\") " pod="openstack/heat-engine-746fb69fd5-n8596" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.317721 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k64sf\" (UniqueName: \"kubernetes.io/projected/3d1c661f-8246-4ec2-96d0-13feaaaaf35a-kube-api-access-k64sf\") pod \"heat-cfnapi-6bcb7f45-v58x5\" (UID: \"3d1c661f-8246-4ec2-96d0-13feaaaaf35a\") " pod="openstack/heat-cfnapi-6bcb7f45-v58x5" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.318107 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="feb0fddc-7d3b-4aec-bc39-a9f36e23d61d" path="/var/lib/kubelet/pods/feb0fddc-7d3b-4aec-bc39-a9f36e23d61d/volumes" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.330347 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d1c661f-8246-4ec2-96d0-13feaaaaf35a-combined-ca-bundle\") pod \"heat-cfnapi-6bcb7f45-v58x5\" (UID: \"3d1c661f-8246-4ec2-96d0-13feaaaaf35a\") " pod="openstack/heat-cfnapi-6bcb7f45-v58x5" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.334401 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d1c661f-8246-4ec2-96d0-13feaaaaf35a-config-data\") pod \"heat-cfnapi-6bcb7f45-v58x5\" (UID: \"3d1c661f-8246-4ec2-96d0-13feaaaaf35a\") " pod="openstack/heat-cfnapi-6bcb7f45-v58x5" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.356826 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-746fb69fd5-n8596" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.435098 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a930422-6c40-4a1a-963d-126d16f160a2-combined-ca-bundle\") pod \"heat-api-565648db59-vzdgj\" (UID: \"5a930422-6c40-4a1a-963d-126d16f160a2\") " pod="openstack/heat-api-565648db59-vzdgj" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.435242 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a930422-6c40-4a1a-963d-126d16f160a2-config-data\") pod \"heat-api-565648db59-vzdgj\" (UID: \"5a930422-6c40-4a1a-963d-126d16f160a2\") " pod="openstack/heat-api-565648db59-vzdgj" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.435366 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a930422-6c40-4a1a-963d-126d16f160a2-config-data-custom\") pod \"heat-api-565648db59-vzdgj\" (UID: \"5a930422-6c40-4a1a-963d-126d16f160a2\") " pod="openstack/heat-api-565648db59-vzdgj" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.435464 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xmxs\" (UniqueName: \"kubernetes.io/projected/5a930422-6c40-4a1a-963d-126d16f160a2-kube-api-access-7xmxs\") pod \"heat-api-565648db59-vzdgj\" (UID: \"5a930422-6c40-4a1a-963d-126d16f160a2\") " pod="openstack/heat-api-565648db59-vzdgj" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.452485 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a930422-6c40-4a1a-963d-126d16f160a2-config-data-custom\") pod \"heat-api-565648db59-vzdgj\" (UID: \"5a930422-6c40-4a1a-963d-126d16f160a2\") " pod="openstack/heat-api-565648db59-vzdgj" 
Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.459984 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a930422-6c40-4a1a-963d-126d16f160a2-config-data\") pod \"heat-api-565648db59-vzdgj\" (UID: \"5a930422-6c40-4a1a-963d-126d16f160a2\") " pod="openstack/heat-api-565648db59-vzdgj" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.478485 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6bcb7f45-v58x5" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.482812 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xmxs\" (UniqueName: \"kubernetes.io/projected/5a930422-6c40-4a1a-963d-126d16f160a2-kube-api-access-7xmxs\") pod \"heat-api-565648db59-vzdgj\" (UID: \"5a930422-6c40-4a1a-963d-126d16f160a2\") " pod="openstack/heat-api-565648db59-vzdgj" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.515107 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a930422-6c40-4a1a-963d-126d16f160a2-combined-ca-bundle\") pod \"heat-api-565648db59-vzdgj\" (UID: \"5a930422-6c40-4a1a-963d-126d16f160a2\") " pod="openstack/heat-api-565648db59-vzdgj" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.539280 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-55879c458d-ms4f2" event={"ID":"2b704537-db2a-453a-b977-b2a3c31cf61b","Type":"ContainerStarted","Data":"0b9ef359890bc07c5790ac9d74b08523669c84c50652699e8ef3e25f2a792965"} Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.540465 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-55879c458d-ms4f2" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.544568 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"569960ad-ab7f-454b-b142-dae24e00340f","Type":"ContainerStarted","Data":"9571b828ec89c69dfce219be181cc57f71afaf9312711400fb2781639bf296ad"} Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.547906 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-66ffb89474-rscfw" event={"ID":"323b7b10-963c-41ff-b3ce-69dd3374f746","Type":"ContainerStarted","Data":"f07aeea153f44d37e9b18e8cdb19b1eaa7c3a360d55f3f809c034f5d4152048f"} Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.548813 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-66ffb89474-rscfw" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.573620 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-55879c458d-ms4f2" podStartSLOduration=5.601180286 podStartE2EDuration="8.573599271s" podCreationTimestamp="2025-11-29 08:06:40 +0000 UTC" firstStartedPulling="2025-11-29 08:06:44.205929356 +0000 UTC m=+1650.181505146" lastFinishedPulling="2025-11-29 08:06:47.178348341 +0000 UTC m=+1653.153924131" observedRunningTime="2025-11-29 08:06:48.571487371 +0000 UTC m=+1654.547063161" watchObservedRunningTime="2025-11-29 08:06:48.573599271 +0000 UTC m=+1654.549175061" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.807028 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-565648db59-vzdgj" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.988075 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-66ffb89474-rscfw" podStartSLOduration=4.82255806 podStartE2EDuration="7.988052395s" podCreationTimestamp="2025-11-29 08:06:41 +0000 UTC" firstStartedPulling="2025-11-29 08:06:44.011788615 +0000 UTC m=+1649.987364405" lastFinishedPulling="2025-11-29 08:06:47.17728295 +0000 UTC m=+1653.152858740" observedRunningTime="2025-11-29 08:06:48.596090828 +0000 UTC m=+1654.571666618" watchObservedRunningTime="2025-11-29 08:06:48.988052395 +0000 UTC m=+1654.963628185" Nov 29 08:06:48 crc kubenswrapper[4795]: I1129 08:06:48.996257 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-746fb69fd5-n8596"] Nov 29 08:06:49 crc kubenswrapper[4795]: I1129 08:06:49.732193 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-66ffb89474-rscfw"] Nov 29 08:06:49 crc kubenswrapper[4795]: I1129 08:06:49.757566 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-55879c458d-ms4f2"] Nov 29 08:06:49 crc kubenswrapper[4795]: I1129 08:06:49.814287 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-b49cd6b59-h4cg5"] Nov 29 08:06:49 crc kubenswrapper[4795]: I1129 08:06:49.815886 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-b49cd6b59-h4cg5" Nov 29 08:06:49 crc kubenswrapper[4795]: I1129 08:06:49.820268 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Nov 29 08:06:49 crc kubenswrapper[4795]: I1129 08:06:49.823540 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Nov 29 08:06:49 crc kubenswrapper[4795]: I1129 08:06:49.836707 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-67fb9f5ff-86s7p"] Nov 29 08:06:49 crc kubenswrapper[4795]: I1129 08:06:49.838938 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-67fb9f5ff-86s7p" Nov 29 08:06:49 crc kubenswrapper[4795]: I1129 08:06:49.843544 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Nov 29 08:06:49 crc kubenswrapper[4795]: I1129 08:06:49.852273 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Nov 29 08:06:49 crc kubenswrapper[4795]: I1129 08:06:49.875252 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-b49cd6b59-h4cg5"] Nov 29 08:06:49 crc kubenswrapper[4795]: I1129 08:06:49.899986 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-67fb9f5ff-86s7p"] Nov 29 08:06:49 crc kubenswrapper[4795]: I1129 08:06:49.976062 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5-config-data-custom\") pod \"heat-api-b49cd6b59-h4cg5\" (UID: \"3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5\") " pod="openstack/heat-api-b49cd6b59-h4cg5" Nov 29 08:06:49 crc kubenswrapper[4795]: I1129 08:06:49.976207 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a4c1b6d-0527-4254-819b-ce068ddb20d8-combined-ca-bundle\") pod \"heat-cfnapi-67fb9f5ff-86s7p\" (UID: \"8a4c1b6d-0527-4254-819b-ce068ddb20d8\") " pod="openstack/heat-cfnapi-67fb9f5ff-86s7p" Nov 29 08:06:49 crc kubenswrapper[4795]: I1129 08:06:49.976245 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5-internal-tls-certs\") pod \"heat-api-b49cd6b59-h4cg5\" (UID: \"3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5\") " pod="openstack/heat-api-b49cd6b59-h4cg5" Nov 29 08:06:49 crc kubenswrapper[4795]: I1129 08:06:49.976291 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5-config-data\") pod \"heat-api-b49cd6b59-h4cg5\" (UID: \"3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5\") " pod="openstack/heat-api-b49cd6b59-h4cg5" Nov 29 08:06:49 crc kubenswrapper[4795]: I1129 08:06:49.976340 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a4c1b6d-0527-4254-819b-ce068ddb20d8-config-data-custom\") pod \"heat-cfnapi-67fb9f5ff-86s7p\" (UID: \"8a4c1b6d-0527-4254-819b-ce068ddb20d8\") " pod="openstack/heat-cfnapi-67fb9f5ff-86s7p" Nov 29 08:06:49 crc kubenswrapper[4795]: I1129 08:06:49.976375 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a4c1b6d-0527-4254-819b-ce068ddb20d8-internal-tls-certs\") pod \"heat-cfnapi-67fb9f5ff-86s7p\" (UID: \"8a4c1b6d-0527-4254-819b-ce068ddb20d8\") " pod="openstack/heat-cfnapi-67fb9f5ff-86s7p" Nov 29 08:06:49 crc kubenswrapper[4795]: I1129 08:06:49.976530 4795 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5-combined-ca-bundle\") pod \"heat-api-b49cd6b59-h4cg5\" (UID: \"3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5\") " pod="openstack/heat-api-b49cd6b59-h4cg5" Nov 29 08:06:49 crc kubenswrapper[4795]: I1129 08:06:49.976609 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5-public-tls-certs\") pod \"heat-api-b49cd6b59-h4cg5\" (UID: \"3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5\") " pod="openstack/heat-api-b49cd6b59-h4cg5" Nov 29 08:06:49 crc kubenswrapper[4795]: I1129 08:06:49.976692 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f48nm\" (UniqueName: \"kubernetes.io/projected/3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5-kube-api-access-f48nm\") pod \"heat-api-b49cd6b59-h4cg5\" (UID: \"3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5\") " pod="openstack/heat-api-b49cd6b59-h4cg5" Nov 29 08:06:49 crc kubenswrapper[4795]: I1129 08:06:49.976895 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2924q\" (UniqueName: \"kubernetes.io/projected/8a4c1b6d-0527-4254-819b-ce068ddb20d8-kube-api-access-2924q\") pod \"heat-cfnapi-67fb9f5ff-86s7p\" (UID: \"8a4c1b6d-0527-4254-819b-ce068ddb20d8\") " pod="openstack/heat-cfnapi-67fb9f5ff-86s7p" Nov 29 08:06:49 crc kubenswrapper[4795]: I1129 08:06:49.976983 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a4c1b6d-0527-4254-819b-ce068ddb20d8-public-tls-certs\") pod \"heat-cfnapi-67fb9f5ff-86s7p\" (UID: \"8a4c1b6d-0527-4254-819b-ce068ddb20d8\") " pod="openstack/heat-cfnapi-67fb9f5ff-86s7p" Nov 29 08:06:49 crc kubenswrapper[4795]: I1129 
08:06:49.977096 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a4c1b6d-0527-4254-819b-ce068ddb20d8-config-data\") pod \"heat-cfnapi-67fb9f5ff-86s7p\" (UID: \"8a4c1b6d-0527-4254-819b-ce068ddb20d8\") " pod="openstack/heat-cfnapi-67fb9f5ff-86s7p" Nov 29 08:06:50 crc kubenswrapper[4795]: I1129 08:06:50.080038 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5-public-tls-certs\") pod \"heat-api-b49cd6b59-h4cg5\" (UID: \"3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5\") " pod="openstack/heat-api-b49cd6b59-h4cg5" Nov 29 08:06:50 crc kubenswrapper[4795]: I1129 08:06:50.080130 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f48nm\" (UniqueName: \"kubernetes.io/projected/3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5-kube-api-access-f48nm\") pod \"heat-api-b49cd6b59-h4cg5\" (UID: \"3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5\") " pod="openstack/heat-api-b49cd6b59-h4cg5" Nov 29 08:06:50 crc kubenswrapper[4795]: I1129 08:06:50.080201 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2924q\" (UniqueName: \"kubernetes.io/projected/8a4c1b6d-0527-4254-819b-ce068ddb20d8-kube-api-access-2924q\") pod \"heat-cfnapi-67fb9f5ff-86s7p\" (UID: \"8a4c1b6d-0527-4254-819b-ce068ddb20d8\") " pod="openstack/heat-cfnapi-67fb9f5ff-86s7p" Nov 29 08:06:50 crc kubenswrapper[4795]: I1129 08:06:50.080236 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a4c1b6d-0527-4254-819b-ce068ddb20d8-public-tls-certs\") pod \"heat-cfnapi-67fb9f5ff-86s7p\" (UID: \"8a4c1b6d-0527-4254-819b-ce068ddb20d8\") " pod="openstack/heat-cfnapi-67fb9f5ff-86s7p" Nov 29 08:06:50 crc kubenswrapper[4795]: I1129 08:06:50.080270 4795 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a4c1b6d-0527-4254-819b-ce068ddb20d8-config-data\") pod \"heat-cfnapi-67fb9f5ff-86s7p\" (UID: \"8a4c1b6d-0527-4254-819b-ce068ddb20d8\") " pod="openstack/heat-cfnapi-67fb9f5ff-86s7p" Nov 29 08:06:50 crc kubenswrapper[4795]: I1129 08:06:50.080351 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5-config-data-custom\") pod \"heat-api-b49cd6b59-h4cg5\" (UID: \"3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5\") " pod="openstack/heat-api-b49cd6b59-h4cg5" Nov 29 08:06:50 crc kubenswrapper[4795]: I1129 08:06:50.080402 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a4c1b6d-0527-4254-819b-ce068ddb20d8-combined-ca-bundle\") pod \"heat-cfnapi-67fb9f5ff-86s7p\" (UID: \"8a4c1b6d-0527-4254-819b-ce068ddb20d8\") " pod="openstack/heat-cfnapi-67fb9f5ff-86s7p" Nov 29 08:06:50 crc kubenswrapper[4795]: I1129 08:06:50.080440 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5-internal-tls-certs\") pod \"heat-api-b49cd6b59-h4cg5\" (UID: \"3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5\") " pod="openstack/heat-api-b49cd6b59-h4cg5" Nov 29 08:06:50 crc kubenswrapper[4795]: I1129 08:06:50.080462 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5-config-data\") pod \"heat-api-b49cd6b59-h4cg5\" (UID: \"3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5\") " pod="openstack/heat-api-b49cd6b59-h4cg5" Nov 29 08:06:50 crc kubenswrapper[4795]: I1129 08:06:50.080495 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a4c1b6d-0527-4254-819b-ce068ddb20d8-config-data-custom\") pod \"heat-cfnapi-67fb9f5ff-86s7p\" (UID: \"8a4c1b6d-0527-4254-819b-ce068ddb20d8\") " pod="openstack/heat-cfnapi-67fb9f5ff-86s7p" Nov 29 08:06:50 crc kubenswrapper[4795]: I1129 08:06:50.080539 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a4c1b6d-0527-4254-819b-ce068ddb20d8-internal-tls-certs\") pod \"heat-cfnapi-67fb9f5ff-86s7p\" (UID: \"8a4c1b6d-0527-4254-819b-ce068ddb20d8\") " pod="openstack/heat-cfnapi-67fb9f5ff-86s7p" Nov 29 08:06:50 crc kubenswrapper[4795]: I1129 08:06:50.080677 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5-combined-ca-bundle\") pod \"heat-api-b49cd6b59-h4cg5\" (UID: \"3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5\") " pod="openstack/heat-api-b49cd6b59-h4cg5" Nov 29 08:06:50 crc kubenswrapper[4795]: I1129 08:06:50.090706 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5-combined-ca-bundle\") pod \"heat-api-b49cd6b59-h4cg5\" (UID: \"3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5\") " pod="openstack/heat-api-b49cd6b59-h4cg5" Nov 29 08:06:50 crc kubenswrapper[4795]: I1129 08:06:50.094244 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a4c1b6d-0527-4254-819b-ce068ddb20d8-internal-tls-certs\") pod \"heat-cfnapi-67fb9f5ff-86s7p\" (UID: \"8a4c1b6d-0527-4254-819b-ce068ddb20d8\") " pod="openstack/heat-cfnapi-67fb9f5ff-86s7p" Nov 29 08:06:50 crc kubenswrapper[4795]: I1129 08:06:50.095005 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5-public-tls-certs\") pod \"heat-api-b49cd6b59-h4cg5\" (UID: \"3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5\") " pod="openstack/heat-api-b49cd6b59-h4cg5" Nov 29 08:06:50 crc kubenswrapper[4795]: I1129 08:06:50.095212 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a4c1b6d-0527-4254-819b-ce068ddb20d8-config-data\") pod \"heat-cfnapi-67fb9f5ff-86s7p\" (UID: \"8a4c1b6d-0527-4254-819b-ce068ddb20d8\") " pod="openstack/heat-cfnapi-67fb9f5ff-86s7p" Nov 29 08:06:50 crc kubenswrapper[4795]: I1129 08:06:50.096079 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a4c1b6d-0527-4254-819b-ce068ddb20d8-config-data-custom\") pod \"heat-cfnapi-67fb9f5ff-86s7p\" (UID: \"8a4c1b6d-0527-4254-819b-ce068ddb20d8\") " pod="openstack/heat-cfnapi-67fb9f5ff-86s7p" Nov 29 08:06:50 crc kubenswrapper[4795]: I1129 08:06:50.096328 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5-internal-tls-certs\") pod \"heat-api-b49cd6b59-h4cg5\" (UID: \"3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5\") " pod="openstack/heat-api-b49cd6b59-h4cg5" Nov 29 08:06:50 crc kubenswrapper[4795]: I1129 08:06:50.097496 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a4c1b6d-0527-4254-819b-ce068ddb20d8-combined-ca-bundle\") pod \"heat-cfnapi-67fb9f5ff-86s7p\" (UID: \"8a4c1b6d-0527-4254-819b-ce068ddb20d8\") " pod="openstack/heat-cfnapi-67fb9f5ff-86s7p" Nov 29 08:06:50 crc kubenswrapper[4795]: I1129 08:06:50.098029 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5-config-data\") pod \"heat-api-b49cd6b59-h4cg5\" (UID: 
\"3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5\") " pod="openstack/heat-api-b49cd6b59-h4cg5" Nov 29 08:06:50 crc kubenswrapper[4795]: I1129 08:06:50.100203 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f48nm\" (UniqueName: \"kubernetes.io/projected/3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5-kube-api-access-f48nm\") pod \"heat-api-b49cd6b59-h4cg5\" (UID: \"3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5\") " pod="openstack/heat-api-b49cd6b59-h4cg5" Nov 29 08:06:50 crc kubenswrapper[4795]: I1129 08:06:50.102403 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a4c1b6d-0527-4254-819b-ce068ddb20d8-public-tls-certs\") pod \"heat-cfnapi-67fb9f5ff-86s7p\" (UID: \"8a4c1b6d-0527-4254-819b-ce068ddb20d8\") " pod="openstack/heat-cfnapi-67fb9f5ff-86s7p" Nov 29 08:06:50 crc kubenswrapper[4795]: I1129 08:06:50.118940 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5-config-data-custom\") pod \"heat-api-b49cd6b59-h4cg5\" (UID: \"3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5\") " pod="openstack/heat-api-b49cd6b59-h4cg5" Nov 29 08:06:50 crc kubenswrapper[4795]: I1129 08:06:50.119514 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2924q\" (UniqueName: \"kubernetes.io/projected/8a4c1b6d-0527-4254-819b-ce068ddb20d8-kube-api-access-2924q\") pod \"heat-cfnapi-67fb9f5ff-86s7p\" (UID: \"8a4c1b6d-0527-4254-819b-ce068ddb20d8\") " pod="openstack/heat-cfnapi-67fb9f5ff-86s7p" Nov 29 08:06:50 crc kubenswrapper[4795]: I1129 08:06:50.160305 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-b49cd6b59-h4cg5" Nov 29 08:06:50 crc kubenswrapper[4795]: I1129 08:06:50.168962 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-67fb9f5ff-86s7p" Nov 29 08:06:50 crc kubenswrapper[4795]: I1129 08:06:50.572480 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-66ffb89474-rscfw" podUID="323b7b10-963c-41ff-b3ce-69dd3374f746" containerName="heat-api" containerID="cri-o://f07aeea153f44d37e9b18e8cdb19b1eaa7c3a360d55f3f809c034f5d4152048f" gracePeriod=60 Nov 29 08:06:50 crc kubenswrapper[4795]: W1129 08:06:50.744332 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0aa8b2b_26ef_48f3_9a24_da2a5b2c9d3a.slice/crio-40496373a728dce126728d64b96e580ef3c93fe0f365db9dfc7720949b17b47c WatchSource:0}: Error finding container 40496373a728dce126728d64b96e580ef3c93fe0f365db9dfc7720949b17b47c: Status 404 returned error can't find the container with id 40496373a728dce126728d64b96e580ef3c93fe0f365db9dfc7720949b17b47c Nov 29 08:06:51 crc kubenswrapper[4795]: I1129 08:06:51.405388 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-67fb9f5ff-86s7p"] Nov 29 08:06:51 crc kubenswrapper[4795]: I1129 08:06:51.650053 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-746fb69fd5-n8596" event={"ID":"c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a","Type":"ContainerStarted","Data":"35f22de4a7a9f0afaacfcd28c644998d37499261b4a684b69d858a9a5449bbe9"} Nov 29 08:06:51 crc kubenswrapper[4795]: I1129 08:06:51.650156 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-746fb69fd5-n8596" event={"ID":"c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a","Type":"ContainerStarted","Data":"40496373a728dce126728d64b96e580ef3c93fe0f365db9dfc7720949b17b47c"} Nov 29 08:06:51 crc kubenswrapper[4795]: I1129 08:06:51.650513 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-746fb69fd5-n8596" Nov 29 08:06:51 crc kubenswrapper[4795]: I1129 08:06:51.689668 
4795 generic.go:334] "Generic (PLEG): container finished" podID="323b7b10-963c-41ff-b3ce-69dd3374f746" containerID="f07aeea153f44d37e9b18e8cdb19b1eaa7c3a360d55f3f809c034f5d4152048f" exitCode=0 Nov 29 08:06:51 crc kubenswrapper[4795]: I1129 08:06:51.689921 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-66ffb89474-rscfw" event={"ID":"323b7b10-963c-41ff-b3ce-69dd3374f746","Type":"ContainerDied","Data":"f07aeea153f44d37e9b18e8cdb19b1eaa7c3a360d55f3f809c034f5d4152048f"} Nov 29 08:06:51 crc kubenswrapper[4795]: I1129 08:06:51.692812 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-55879c458d-ms4f2" podUID="2b704537-db2a-453a-b977-b2a3c31cf61b" containerName="heat-cfnapi" containerID="cri-o://0b9ef359890bc07c5790ac9d74b08523669c84c50652699e8ef3e25f2a792965" gracePeriod=60 Nov 29 08:06:51 crc kubenswrapper[4795]: I1129 08:06:51.692934 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-67fb9f5ff-86s7p" event={"ID":"8a4c1b6d-0527-4254-819b-ce068ddb20d8","Type":"ContainerStarted","Data":"7839e15622d0d53d71a37f793b83e011854b1e7b488021199c579e1126edd358"} Nov 29 08:06:51 crc kubenswrapper[4795]: I1129 08:06:51.701534 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-746fb69fd5-n8596" podStartSLOduration=3.7015119 podStartE2EDuration="3.7015119s" podCreationTimestamp="2025-11-29 08:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:06:51.67643495 +0000 UTC m=+1657.652010740" watchObservedRunningTime="2025-11-29 08:06:51.7015119 +0000 UTC m=+1657.677087690" Nov 29 08:06:51 crc kubenswrapper[4795]: I1129 08:06:51.739095 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-66ffb89474-rscfw" Nov 29 08:06:51 crc kubenswrapper[4795]: I1129 08:06:51.835348 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/323b7b10-963c-41ff-b3ce-69dd3374f746-config-data\") pod \"323b7b10-963c-41ff-b3ce-69dd3374f746\" (UID: \"323b7b10-963c-41ff-b3ce-69dd3374f746\") " Nov 29 08:06:51 crc kubenswrapper[4795]: I1129 08:06:51.835420 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/323b7b10-963c-41ff-b3ce-69dd3374f746-config-data-custom\") pod \"323b7b10-963c-41ff-b3ce-69dd3374f746\" (UID: \"323b7b10-963c-41ff-b3ce-69dd3374f746\") " Nov 29 08:06:51 crc kubenswrapper[4795]: I1129 08:06:51.839032 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/323b7b10-963c-41ff-b3ce-69dd3374f746-combined-ca-bundle\") pod \"323b7b10-963c-41ff-b3ce-69dd3374f746\" (UID: \"323b7b10-963c-41ff-b3ce-69dd3374f746\") " Nov 29 08:06:51 crc kubenswrapper[4795]: I1129 08:06:51.839183 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psxb6\" (UniqueName: \"kubernetes.io/projected/323b7b10-963c-41ff-b3ce-69dd3374f746-kube-api-access-psxb6\") pod \"323b7b10-963c-41ff-b3ce-69dd3374f746\" (UID: \"323b7b10-963c-41ff-b3ce-69dd3374f746\") " Nov 29 08:06:51 crc kubenswrapper[4795]: I1129 08:06:51.852663 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/323b7b10-963c-41ff-b3ce-69dd3374f746-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "323b7b10-963c-41ff-b3ce-69dd3374f746" (UID: "323b7b10-963c-41ff-b3ce-69dd3374f746"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:06:51 crc kubenswrapper[4795]: I1129 08:06:51.876204 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/323b7b10-963c-41ff-b3ce-69dd3374f746-kube-api-access-psxb6" (OuterVolumeSpecName: "kube-api-access-psxb6") pod "323b7b10-963c-41ff-b3ce-69dd3374f746" (UID: "323b7b10-963c-41ff-b3ce-69dd3374f746"). InnerVolumeSpecName "kube-api-access-psxb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:06:51 crc kubenswrapper[4795]: I1129 08:06:51.878710 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6bcb7f45-v58x5"] Nov 29 08:06:51 crc kubenswrapper[4795]: I1129 08:06:51.904224 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-565648db59-vzdgj"] Nov 29 08:06:51 crc kubenswrapper[4795]: I1129 08:06:51.930209 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-b49cd6b59-h4cg5"] Nov 29 08:06:51 crc kubenswrapper[4795]: I1129 08:06:51.955379 4795 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/323b7b10-963c-41ff-b3ce-69dd3374f746-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:51 crc kubenswrapper[4795]: I1129 08:06:51.955434 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psxb6\" (UniqueName: \"kubernetes.io/projected/323b7b10-963c-41ff-b3ce-69dd3374f746-kube-api-access-psxb6\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.029924 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/323b7b10-963c-41ff-b3ce-69dd3374f746-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "323b7b10-963c-41ff-b3ce-69dd3374f746" (UID: "323b7b10-963c-41ff-b3ce-69dd3374f746"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.063372 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/323b7b10-963c-41ff-b3ce-69dd3374f746-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.083543 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/323b7b10-963c-41ff-b3ce-69dd3374f746-config-data" (OuterVolumeSpecName: "config-data") pod "323b7b10-963c-41ff-b3ce-69dd3374f746" (UID: "323b7b10-963c-41ff-b3ce-69dd3374f746"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.168683 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/323b7b10-963c-41ff-b3ce-69dd3374f746-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.557674 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-55879c458d-ms4f2" Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.691646 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b704537-db2a-453a-b977-b2a3c31cf61b-config-data-custom\") pod \"2b704537-db2a-453a-b977-b2a3c31cf61b\" (UID: \"2b704537-db2a-453a-b977-b2a3c31cf61b\") " Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.691870 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b704537-db2a-453a-b977-b2a3c31cf61b-combined-ca-bundle\") pod \"2b704537-db2a-453a-b977-b2a3c31cf61b\" (UID: \"2b704537-db2a-453a-b977-b2a3c31cf61b\") " Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.693266 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tc47l\" (UniqueName: \"kubernetes.io/projected/2b704537-db2a-453a-b977-b2a3c31cf61b-kube-api-access-tc47l\") pod \"2b704537-db2a-453a-b977-b2a3c31cf61b\" (UID: \"2b704537-db2a-453a-b977-b2a3c31cf61b\") " Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.693359 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b704537-db2a-453a-b977-b2a3c31cf61b-config-data\") pod \"2b704537-db2a-453a-b977-b2a3c31cf61b\" (UID: \"2b704537-db2a-453a-b977-b2a3c31cf61b\") " Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.704903 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b704537-db2a-453a-b977-b2a3c31cf61b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2b704537-db2a-453a-b977-b2a3c31cf61b" (UID: "2b704537-db2a-453a-b977-b2a3c31cf61b"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.705061 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b704537-db2a-453a-b977-b2a3c31cf61b-kube-api-access-tc47l" (OuterVolumeSpecName: "kube-api-access-tc47l") pod "2b704537-db2a-453a-b977-b2a3c31cf61b" (UID: "2b704537-db2a-453a-b977-b2a3c31cf61b"). InnerVolumeSpecName "kube-api-access-tc47l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.718894 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6bcb7f45-v58x5" event={"ID":"3d1c661f-8246-4ec2-96d0-13feaaaaf35a","Type":"ContainerStarted","Data":"5ef729473668163fbdd368f4a9d732218c4805f33255eeb2ceb41d07354ce889"} Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.718947 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6bcb7f45-v58x5" event={"ID":"3d1c661f-8246-4ec2-96d0-13feaaaaf35a","Type":"ContainerStarted","Data":"5773180c80658601463424594cbaf86575ccec031b2814fb48633200f910e1fd"} Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.720541 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6bcb7f45-v58x5" Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.722526 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-67fb9f5ff-86s7p" event={"ID":"8a4c1b6d-0527-4254-819b-ce068ddb20d8","Type":"ContainerStarted","Data":"f33376e1c1e9327f8fd667be8faa6f1666b224e32f6426459c3eacb213772e60"} Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.722868 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-67fb9f5ff-86s7p" Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.725960 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-565648db59-vzdgj" 
event={"ID":"5a930422-6c40-4a1a-963d-126d16f160a2","Type":"ContainerStarted","Data":"37c3a8e67daeb3949e4b7732b04882ea1254aa2d6ffb541288d8c72f55688c01"} Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.726004 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-565648db59-vzdgj" event={"ID":"5a930422-6c40-4a1a-963d-126d16f160a2","Type":"ContainerStarted","Data":"566adfd29a7a30293fec697f9bb92516b1a5c5e882b0b0468b2d20800692bb19"} Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.726227 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-565648db59-vzdgj" Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.731870 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"569960ad-ab7f-454b-b142-dae24e00340f","Type":"ContainerStarted","Data":"54d689b5c4a8a4ba1c171dd892162e93f0d06097d24ec5730b0bedca6d37472f"} Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.740091 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-66ffb89474-rscfw" event={"ID":"323b7b10-963c-41ff-b3ce-69dd3374f746","Type":"ContainerDied","Data":"3c1eb86ee26e4e0fa785d977345d8a1e43035b8d09d105743f4b06c4b6200429"} Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.740156 4795 scope.go:117] "RemoveContainer" containerID="f07aeea153f44d37e9b18e8cdb19b1eaa7c3a360d55f3f809c034f5d4152048f" Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.740355 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-66ffb89474-rscfw" Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.741806 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6bcb7f45-v58x5" podStartSLOduration=4.741785141 podStartE2EDuration="4.741785141s" podCreationTimestamp="2025-11-29 08:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:06:52.739485036 +0000 UTC m=+1658.715060826" watchObservedRunningTime="2025-11-29 08:06:52.741785141 +0000 UTC m=+1658.717360921" Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.752372 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b704537-db2a-453a-b977-b2a3c31cf61b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b704537-db2a-453a-b977-b2a3c31cf61b" (UID: "2b704537-db2a-453a-b977-b2a3c31cf61b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.758151 4795 generic.go:334] "Generic (PLEG): container finished" podID="2b704537-db2a-453a-b977-b2a3c31cf61b" containerID="0b9ef359890bc07c5790ac9d74b08523669c84c50652699e8ef3e25f2a792965" exitCode=0 Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.758293 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-55879c458d-ms4f2" event={"ID":"2b704537-db2a-453a-b977-b2a3c31cf61b","Type":"ContainerDied","Data":"0b9ef359890bc07c5790ac9d74b08523669c84c50652699e8ef3e25f2a792965"} Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.758345 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-55879c458d-ms4f2" event={"ID":"2b704537-db2a-453a-b977-b2a3c31cf61b","Type":"ContainerDied","Data":"825815a76bf539ea3755a77f8bfb5bb233ff18265583b05cee48fe3eb7f1bf38"} Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.758447 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-55879c458d-ms4f2" Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.783523 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-b49cd6b59-h4cg5" event={"ID":"3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5","Type":"ContainerStarted","Data":"abe61bedf1ec9c606b02562c7d6cc6685d793dc113e02a4333b31f8db4dc6c40"} Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.783567 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-b49cd6b59-h4cg5" Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.783578 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-b49cd6b59-h4cg5" event={"ID":"3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5","Type":"ContainerStarted","Data":"8a83e0c389b24600720243a375bce209c86afe70a471b447c354e7aafa7b83b5"} Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.794173 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-565648db59-vzdgj" podStartSLOduration=4.794145565 podStartE2EDuration="4.794145565s" podCreationTimestamp="2025-11-29 08:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:06:52.775678041 +0000 UTC m=+1658.751253831" watchObservedRunningTime="2025-11-29 08:06:52.794145565 +0000 UTC m=+1658.769721355" Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.799382 4795 scope.go:117] "RemoveContainer" containerID="0b9ef359890bc07c5790ac9d74b08523669c84c50652699e8ef3e25f2a792965" Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.811918 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b704537-db2a-453a-b977-b2a3c31cf61b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.811955 4795 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-tc47l\" (UniqueName: \"kubernetes.io/projected/2b704537-db2a-453a-b977-b2a3c31cf61b-kube-api-access-tc47l\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.811969 4795 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b704537-db2a-453a-b977-b2a3c31cf61b-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.841192 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-67fb9f5ff-86s7p" podStartSLOduration=3.841172047 podStartE2EDuration="3.841172047s" podCreationTimestamp="2025-11-29 08:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:06:52.808704707 +0000 UTC m=+1658.784280507" watchObservedRunningTime="2025-11-29 08:06:52.841172047 +0000 UTC m=+1658.816747837" Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.841884 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b704537-db2a-453a-b977-b2a3c31cf61b-config-data" (OuterVolumeSpecName: "config-data") pod "2b704537-db2a-453a-b977-b2a3c31cf61b" (UID: "2b704537-db2a-453a-b977-b2a3c31cf61b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.846012 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-b49cd6b59-h4cg5" podStartSLOduration=3.845994724 podStartE2EDuration="3.845994724s" podCreationTimestamp="2025-11-29 08:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:06:52.832211363 +0000 UTC m=+1658.807787153" watchObservedRunningTime="2025-11-29 08:06:52.845994724 +0000 UTC m=+1658.821570514" Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.868633 4795 scope.go:117] "RemoveContainer" containerID="0b9ef359890bc07c5790ac9d74b08523669c84c50652699e8ef3e25f2a792965" Nov 29 08:06:52 crc kubenswrapper[4795]: E1129 08:06:52.869201 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b9ef359890bc07c5790ac9d74b08523669c84c50652699e8ef3e25f2a792965\": container with ID starting with 0b9ef359890bc07c5790ac9d74b08523669c84c50652699e8ef3e25f2a792965 not found: ID does not exist" containerID="0b9ef359890bc07c5790ac9d74b08523669c84c50652699e8ef3e25f2a792965" Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.869249 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b9ef359890bc07c5790ac9d74b08523669c84c50652699e8ef3e25f2a792965"} err="failed to get container status \"0b9ef359890bc07c5790ac9d74b08523669c84c50652699e8ef3e25f2a792965\": rpc error: code = NotFound desc = could not find container \"0b9ef359890bc07c5790ac9d74b08523669c84c50652699e8ef3e25f2a792965\": container with ID starting with 0b9ef359890bc07c5790ac9d74b08523669c84c50652699e8ef3e25f2a792965 not found: ID does not exist" Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.874492 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/heat-api-66ffb89474-rscfw"] Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.890134 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-66ffb89474-rscfw"] Nov 29 08:06:52 crc kubenswrapper[4795]: I1129 08:06:52.914109 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b704537-db2a-453a-b977-b2a3c31cf61b-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:53 crc kubenswrapper[4795]: I1129 08:06:53.107762 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-55879c458d-ms4f2"] Nov 29 08:06:53 crc kubenswrapper[4795]: I1129 08:06:53.126687 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-55879c458d-ms4f2"] Nov 29 08:06:53 crc kubenswrapper[4795]: I1129 08:06:53.650052 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:06:53 crc kubenswrapper[4795]: I1129 08:06:53.807026 4795 generic.go:334] "Generic (PLEG): container finished" podID="3d1c661f-8246-4ec2-96d0-13feaaaaf35a" containerID="5ef729473668163fbdd368f4a9d732218c4805f33255eeb2ceb41d07354ce889" exitCode=1 Nov 29 08:06:53 crc kubenswrapper[4795]: I1129 08:06:53.807229 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6bcb7f45-v58x5" event={"ID":"3d1c661f-8246-4ec2-96d0-13feaaaaf35a","Type":"ContainerDied","Data":"5ef729473668163fbdd368f4a9d732218c4805f33255eeb2ceb41d07354ce889"} Nov 29 08:06:53 crc kubenswrapper[4795]: I1129 08:06:53.809803 4795 scope.go:117] "RemoveContainer" containerID="5ef729473668163fbdd368f4a9d732218c4805f33255eeb2ceb41d07354ce889" Nov 29 08:06:53 crc kubenswrapper[4795]: I1129 08:06:53.819651 4795 generic.go:334] "Generic (PLEG): container finished" podID="5a930422-6c40-4a1a-963d-126d16f160a2" containerID="37c3a8e67daeb3949e4b7732b04882ea1254aa2d6ffb541288d8c72f55688c01" exitCode=1 Nov 29 08:06:53 crc kubenswrapper[4795]: I1129 
08:06:53.819741 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-565648db59-vzdgj" event={"ID":"5a930422-6c40-4a1a-963d-126d16f160a2","Type":"ContainerDied","Data":"37c3a8e67daeb3949e4b7732b04882ea1254aa2d6ffb541288d8c72f55688c01"} Nov 29 08:06:53 crc kubenswrapper[4795]: I1129 08:06:53.820149 4795 scope.go:117] "RemoveContainer" containerID="37c3a8e67daeb3949e4b7732b04882ea1254aa2d6ffb541288d8c72f55688c01" Nov 29 08:06:53 crc kubenswrapper[4795]: I1129 08:06:53.831772 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"569960ad-ab7f-454b-b142-dae24e00340f","Type":"ContainerStarted","Data":"93f7ecd76efba30a5b144876a832efb59a9fe30245e7dd0500e1efcce1cf2ce4"} Nov 29 08:06:54 crc kubenswrapper[4795]: I1129 08:06:54.301850 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b704537-db2a-453a-b977-b2a3c31cf61b" path="/var/lib/kubelet/pods/2b704537-db2a-453a-b977-b2a3c31cf61b/volumes" Nov 29 08:06:54 crc kubenswrapper[4795]: I1129 08:06:54.302857 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="323b7b10-963c-41ff-b3ce-69dd3374f746" path="/var/lib/kubelet/pods/323b7b10-963c-41ff-b3ce-69dd3374f746/volumes" Nov 29 08:06:54 crc kubenswrapper[4795]: I1129 08:06:54.872140 4795 generic.go:334] "Generic (PLEG): container finished" podID="5a930422-6c40-4a1a-963d-126d16f160a2" containerID="c63523c1cc4a3dba730dddd1444495e48e68efdc4c913f5a9c59b650c67c6c78" exitCode=1 Nov 29 08:06:54 crc kubenswrapper[4795]: I1129 08:06:54.872205 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-565648db59-vzdgj" event={"ID":"5a930422-6c40-4a1a-963d-126d16f160a2","Type":"ContainerDied","Data":"c63523c1cc4a3dba730dddd1444495e48e68efdc4c913f5a9c59b650c67c6c78"} Nov 29 08:06:54 crc kubenswrapper[4795]: I1129 08:06:54.872278 4795 scope.go:117] "RemoveContainer" containerID="37c3a8e67daeb3949e4b7732b04882ea1254aa2d6ffb541288d8c72f55688c01" Nov 29 08:06:54 
crc kubenswrapper[4795]: I1129 08:06:54.873498 4795 scope.go:117] "RemoveContainer" containerID="c63523c1cc4a3dba730dddd1444495e48e68efdc4c913f5a9c59b650c67c6c78" Nov 29 08:06:54 crc kubenswrapper[4795]: E1129 08:06:54.874142 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-565648db59-vzdgj_openstack(5a930422-6c40-4a1a-963d-126d16f160a2)\"" pod="openstack/heat-api-565648db59-vzdgj" podUID="5a930422-6c40-4a1a-963d-126d16f160a2" Nov 29 08:06:54 crc kubenswrapper[4795]: I1129 08:06:54.876937 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"569960ad-ab7f-454b-b142-dae24e00340f","Type":"ContainerStarted","Data":"d391db1ee9e1b18be80d972b0c3b6ac2b4b1f01cef28302248aa1682e5d5d0df"} Nov 29 08:06:54 crc kubenswrapper[4795]: I1129 08:06:54.879726 4795 generic.go:334] "Generic (PLEG): container finished" podID="3d1c661f-8246-4ec2-96d0-13feaaaaf35a" containerID="3ac2eef8825565d3462339e092bf148f6de3112fdcc91330655a21607cbc1a85" exitCode=1 Nov 29 08:06:54 crc kubenswrapper[4795]: I1129 08:06:54.879759 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6bcb7f45-v58x5" event={"ID":"3d1c661f-8246-4ec2-96d0-13feaaaaf35a","Type":"ContainerDied","Data":"3ac2eef8825565d3462339e092bf148f6de3112fdcc91330655a21607cbc1a85"} Nov 29 08:06:54 crc kubenswrapper[4795]: I1129 08:06:54.880741 4795 scope.go:117] "RemoveContainer" containerID="3ac2eef8825565d3462339e092bf148f6de3112fdcc91330655a21607cbc1a85" Nov 29 08:06:54 crc kubenswrapper[4795]: E1129 08:06:54.881192 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6bcb7f45-v58x5_openstack(3d1c661f-8246-4ec2-96d0-13feaaaaf35a)\"" pod="openstack/heat-cfnapi-6bcb7f45-v58x5" 
podUID="3d1c661f-8246-4ec2-96d0-13feaaaaf35a" Nov 29 08:06:54 crc kubenswrapper[4795]: I1129 08:06:54.974399 4795 scope.go:117] "RemoveContainer" containerID="5ef729473668163fbdd368f4a9d732218c4805f33255eeb2ceb41d07354ce889" Nov 29 08:06:55 crc kubenswrapper[4795]: I1129 08:06:55.893537 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"569960ad-ab7f-454b-b142-dae24e00340f","Type":"ContainerStarted","Data":"e157dbb58ff15a1ab1105d74992b5fcb067df0425fca9c798c7f42c6439b79c2"} Nov 29 08:06:55 crc kubenswrapper[4795]: I1129 08:06:55.894072 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="569960ad-ab7f-454b-b142-dae24e00340f" containerName="ceilometer-central-agent" containerID="cri-o://54d689b5c4a8a4ba1c171dd892162e93f0d06097d24ec5730b0bedca6d37472f" gracePeriod=30 Nov 29 08:06:55 crc kubenswrapper[4795]: I1129 08:06:55.894330 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 29 08:06:55 crc kubenswrapper[4795]: I1129 08:06:55.894758 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="569960ad-ab7f-454b-b142-dae24e00340f" containerName="proxy-httpd" containerID="cri-o://e157dbb58ff15a1ab1105d74992b5fcb067df0425fca9c798c7f42c6439b79c2" gracePeriod=30 Nov 29 08:06:55 crc kubenswrapper[4795]: I1129 08:06:55.894809 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="569960ad-ab7f-454b-b142-dae24e00340f" containerName="sg-core" containerID="cri-o://d391db1ee9e1b18be80d972b0c3b6ac2b4b1f01cef28302248aa1682e5d5d0df" gracePeriod=30 Nov 29 08:06:55 crc kubenswrapper[4795]: I1129 08:06:55.894839 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="569960ad-ab7f-454b-b142-dae24e00340f" containerName="ceilometer-notification-agent" 
containerID="cri-o://93f7ecd76efba30a5b144876a832efb59a9fe30245e7dd0500e1efcce1cf2ce4" gracePeriod=30 Nov 29 08:06:55 crc kubenswrapper[4795]: I1129 08:06:55.901200 4795 scope.go:117] "RemoveContainer" containerID="3ac2eef8825565d3462339e092bf148f6de3112fdcc91330655a21607cbc1a85" Nov 29 08:06:55 crc kubenswrapper[4795]: E1129 08:06:55.901698 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6bcb7f45-v58x5_openstack(3d1c661f-8246-4ec2-96d0-13feaaaaf35a)\"" pod="openstack/heat-cfnapi-6bcb7f45-v58x5" podUID="3d1c661f-8246-4ec2-96d0-13feaaaaf35a" Nov 29 08:06:55 crc kubenswrapper[4795]: I1129 08:06:55.902771 4795 scope.go:117] "RemoveContainer" containerID="c63523c1cc4a3dba730dddd1444495e48e68efdc4c913f5a9c59b650c67c6c78" Nov 29 08:06:55 crc kubenswrapper[4795]: E1129 08:06:55.903041 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-565648db59-vzdgj_openstack(5a930422-6c40-4a1a-963d-126d16f160a2)\"" pod="openstack/heat-api-565648db59-vzdgj" podUID="5a930422-6c40-4a1a-963d-126d16f160a2" Nov 29 08:06:55 crc kubenswrapper[4795]: I1129 08:06:55.929074 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.30329567 podStartE2EDuration="9.929052833s" podCreationTimestamp="2025-11-29 08:06:46 +0000 UTC" firstStartedPulling="2025-11-29 08:06:47.836471051 +0000 UTC m=+1653.812046841" lastFinishedPulling="2025-11-29 08:06:55.462228214 +0000 UTC m=+1661.437804004" observedRunningTime="2025-11-29 08:06:55.925137583 +0000 UTC m=+1661.900713373" watchObservedRunningTime="2025-11-29 08:06:55.929052833 +0000 UTC m=+1661.904628623" Nov 29 08:06:56 crc kubenswrapper[4795]: I1129 08:06:56.359836 4795 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d978555f9-krdd7" Nov 29 08:06:56 crc kubenswrapper[4795]: I1129 08:06:56.440289 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-hxtm7"] Nov 29 08:06:56 crc kubenswrapper[4795]: I1129 08:06:56.440879 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bb4fc677f-hxtm7" podUID="1aa67cd3-59ee-4a03-bc97-fc1fa5729e40" containerName="dnsmasq-dns" containerID="cri-o://6df4be89eb8944158fdfcea161de794cf380ad51baaf08cca57f9debff70642c" gracePeriod=10 Nov 29 08:06:56 crc kubenswrapper[4795]: I1129 08:06:56.916172 4795 generic.go:334] "Generic (PLEG): container finished" podID="1aa67cd3-59ee-4a03-bc97-fc1fa5729e40" containerID="6df4be89eb8944158fdfcea161de794cf380ad51baaf08cca57f9debff70642c" exitCode=0 Nov 29 08:06:56 crc kubenswrapper[4795]: I1129 08:06:56.916548 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-hxtm7" event={"ID":"1aa67cd3-59ee-4a03-bc97-fc1fa5729e40","Type":"ContainerDied","Data":"6df4be89eb8944158fdfcea161de794cf380ad51baaf08cca57f9debff70642c"} Nov 29 08:06:56 crc kubenswrapper[4795]: I1129 08:06:56.919978 4795 generic.go:334] "Generic (PLEG): container finished" podID="569960ad-ab7f-454b-b142-dae24e00340f" containerID="d391db1ee9e1b18be80d972b0c3b6ac2b4b1f01cef28302248aa1682e5d5d0df" exitCode=2 Nov 29 08:06:56 crc kubenswrapper[4795]: I1129 08:06:56.920003 4795 generic.go:334] "Generic (PLEG): container finished" podID="569960ad-ab7f-454b-b142-dae24e00340f" containerID="93f7ecd76efba30a5b144876a832efb59a9fe30245e7dd0500e1efcce1cf2ce4" exitCode=0 Nov 29 08:06:56 crc kubenswrapper[4795]: I1129 08:06:56.920019 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"569960ad-ab7f-454b-b142-dae24e00340f","Type":"ContainerDied","Data":"d391db1ee9e1b18be80d972b0c3b6ac2b4b1f01cef28302248aa1682e5d5d0df"} Nov 29 
08:06:56 crc kubenswrapper[4795]: I1129 08:06:56.920039 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"569960ad-ab7f-454b-b142-dae24e00340f","Type":"ContainerDied","Data":"93f7ecd76efba30a5b144876a832efb59a9fe30245e7dd0500e1efcce1cf2ce4"} Nov 29 08:06:57 crc kubenswrapper[4795]: I1129 08:06:57.100003 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-hxtm7" Nov 29 08:06:57 crc kubenswrapper[4795]: I1129 08:06:57.287754 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1aa67cd3-59ee-4a03-bc97-fc1fa5729e40-dns-swift-storage-0\") pod \"1aa67cd3-59ee-4a03-bc97-fc1fa5729e40\" (UID: \"1aa67cd3-59ee-4a03-bc97-fc1fa5729e40\") " Nov 29 08:06:57 crc kubenswrapper[4795]: I1129 08:06:57.287983 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1aa67cd3-59ee-4a03-bc97-fc1fa5729e40-dns-svc\") pod \"1aa67cd3-59ee-4a03-bc97-fc1fa5729e40\" (UID: \"1aa67cd3-59ee-4a03-bc97-fc1fa5729e40\") " Nov 29 08:06:57 crc kubenswrapper[4795]: I1129 08:06:57.288086 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1aa67cd3-59ee-4a03-bc97-fc1fa5729e40-ovsdbserver-sb\") pod \"1aa67cd3-59ee-4a03-bc97-fc1fa5729e40\" (UID: \"1aa67cd3-59ee-4a03-bc97-fc1fa5729e40\") " Nov 29 08:06:57 crc kubenswrapper[4795]: I1129 08:06:57.288152 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1aa67cd3-59ee-4a03-bc97-fc1fa5729e40-ovsdbserver-nb\") pod \"1aa67cd3-59ee-4a03-bc97-fc1fa5729e40\" (UID: \"1aa67cd3-59ee-4a03-bc97-fc1fa5729e40\") " Nov 29 08:06:57 crc kubenswrapper[4795]: I1129 08:06:57.288215 4795 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-8ghsx\" (UniqueName: \"kubernetes.io/projected/1aa67cd3-59ee-4a03-bc97-fc1fa5729e40-kube-api-access-8ghsx\") pod \"1aa67cd3-59ee-4a03-bc97-fc1fa5729e40\" (UID: \"1aa67cd3-59ee-4a03-bc97-fc1fa5729e40\") " Nov 29 08:06:57 crc kubenswrapper[4795]: I1129 08:06:57.288293 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aa67cd3-59ee-4a03-bc97-fc1fa5729e40-config\") pod \"1aa67cd3-59ee-4a03-bc97-fc1fa5729e40\" (UID: \"1aa67cd3-59ee-4a03-bc97-fc1fa5729e40\") " Nov 29 08:06:57 crc kubenswrapper[4795]: I1129 08:06:57.293897 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1aa67cd3-59ee-4a03-bc97-fc1fa5729e40-kube-api-access-8ghsx" (OuterVolumeSpecName: "kube-api-access-8ghsx") pod "1aa67cd3-59ee-4a03-bc97-fc1fa5729e40" (UID: "1aa67cd3-59ee-4a03-bc97-fc1fa5729e40"). InnerVolumeSpecName "kube-api-access-8ghsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:06:57 crc kubenswrapper[4795]: I1129 08:06:57.362387 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1aa67cd3-59ee-4a03-bc97-fc1fa5729e40-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1aa67cd3-59ee-4a03-bc97-fc1fa5729e40" (UID: "1aa67cd3-59ee-4a03-bc97-fc1fa5729e40"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:06:57 crc kubenswrapper[4795]: I1129 08:06:57.362556 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1aa67cd3-59ee-4a03-bc97-fc1fa5729e40-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1aa67cd3-59ee-4a03-bc97-fc1fa5729e40" (UID: "1aa67cd3-59ee-4a03-bc97-fc1fa5729e40"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:06:57 crc kubenswrapper[4795]: I1129 08:06:57.372681 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1aa67cd3-59ee-4a03-bc97-fc1fa5729e40-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1aa67cd3-59ee-4a03-bc97-fc1fa5729e40" (UID: "1aa67cd3-59ee-4a03-bc97-fc1fa5729e40"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:06:57 crc kubenswrapper[4795]: I1129 08:06:57.390393 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1aa67cd3-59ee-4a03-bc97-fc1fa5729e40-config" (OuterVolumeSpecName: "config") pod "1aa67cd3-59ee-4a03-bc97-fc1fa5729e40" (UID: "1aa67cd3-59ee-4a03-bc97-fc1fa5729e40"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:06:57 crc kubenswrapper[4795]: I1129 08:06:57.390677 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aa67cd3-59ee-4a03-bc97-fc1fa5729e40-config\") pod \"1aa67cd3-59ee-4a03-bc97-fc1fa5729e40\" (UID: \"1aa67cd3-59ee-4a03-bc97-fc1fa5729e40\") " Nov 29 08:06:57 crc kubenswrapper[4795]: W1129 08:06:57.390758 4795 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/1aa67cd3-59ee-4a03-bc97-fc1fa5729e40/volumes/kubernetes.io~configmap/config Nov 29 08:06:57 crc kubenswrapper[4795]: I1129 08:06:57.390772 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1aa67cd3-59ee-4a03-bc97-fc1fa5729e40-config" (OuterVolumeSpecName: "config") pod "1aa67cd3-59ee-4a03-bc97-fc1fa5729e40" (UID: "1aa67cd3-59ee-4a03-bc97-fc1fa5729e40"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:06:57 crc kubenswrapper[4795]: I1129 08:06:57.391720 4795 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aa67cd3-59ee-4a03-bc97-fc1fa5729e40-config\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:57 crc kubenswrapper[4795]: I1129 08:06:57.391747 4795 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1aa67cd3-59ee-4a03-bc97-fc1fa5729e40-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:57 crc kubenswrapper[4795]: I1129 08:06:57.391760 4795 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1aa67cd3-59ee-4a03-bc97-fc1fa5729e40-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:57 crc kubenswrapper[4795]: I1129 08:06:57.391774 4795 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1aa67cd3-59ee-4a03-bc97-fc1fa5729e40-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:57 crc kubenswrapper[4795]: I1129 08:06:57.391787 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ghsx\" (UniqueName: \"kubernetes.io/projected/1aa67cd3-59ee-4a03-bc97-fc1fa5729e40-kube-api-access-8ghsx\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:57 crc kubenswrapper[4795]: I1129 08:06:57.399584 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1aa67cd3-59ee-4a03-bc97-fc1fa5729e40-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1aa67cd3-59ee-4a03-bc97-fc1fa5729e40" (UID: "1aa67cd3-59ee-4a03-bc97-fc1fa5729e40"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:06:57 crc kubenswrapper[4795]: I1129 08:06:57.494156 4795 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1aa67cd3-59ee-4a03-bc97-fc1fa5729e40-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:57 crc kubenswrapper[4795]: I1129 08:06:57.954749 4795 generic.go:334] "Generic (PLEG): container finished" podID="569960ad-ab7f-454b-b142-dae24e00340f" containerID="54d689b5c4a8a4ba1c171dd892162e93f0d06097d24ec5730b0bedca6d37472f" exitCode=0 Nov 29 08:06:57 crc kubenswrapper[4795]: I1129 08:06:57.955011 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"569960ad-ab7f-454b-b142-dae24e00340f","Type":"ContainerDied","Data":"54d689b5c4a8a4ba1c171dd892162e93f0d06097d24ec5730b0bedca6d37472f"} Nov 29 08:06:57 crc kubenswrapper[4795]: I1129 08:06:57.958400 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-hxtm7" event={"ID":"1aa67cd3-59ee-4a03-bc97-fc1fa5729e40","Type":"ContainerDied","Data":"a0f480cc8e03d96e3904ff9f7bf19f8b00a0c28fb2c842fe45fac61d7aa037d0"} Nov 29 08:06:57 crc kubenswrapper[4795]: I1129 08:06:57.958555 4795 scope.go:117] "RemoveContainer" containerID="6df4be89eb8944158fdfcea161de794cf380ad51baaf08cca57f9debff70642c" Nov 29 08:06:57 crc kubenswrapper[4795]: I1129 08:06:57.959105 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-hxtm7" Nov 29 08:06:58 crc kubenswrapper[4795]: I1129 08:06:58.031154 4795 scope.go:117] "RemoveContainer" containerID="a4af938d4794130d91d0e6907f3fea2a831bc12ff4ce6ef2136ba533df8b3365" Nov 29 08:06:58 crc kubenswrapper[4795]: I1129 08:06:58.034444 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-hxtm7"] Nov 29 08:06:58 crc kubenswrapper[4795]: I1129 08:06:58.072544 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-hxtm7"] Nov 29 08:06:58 crc kubenswrapper[4795]: E1129 08:06:58.210427 4795 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1aa67cd3_59ee_4a03_bc97_fc1fa5729e40.slice/crio-a0f480cc8e03d96e3904ff9f7bf19f8b00a0c28fb2c842fe45fac61d7aa037d0\": RecentStats: unable to find data in memory cache]" Nov 29 08:06:58 crc kubenswrapper[4795]: I1129 08:06:58.291450 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1aa67cd3-59ee-4a03-bc97-fc1fa5729e40" path="/var/lib/kubelet/pods/1aa67cd3-59ee-4a03-bc97-fc1fa5729e40/volumes" Nov 29 08:06:58 crc kubenswrapper[4795]: I1129 08:06:58.484922 4795 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-6bcb7f45-v58x5" Nov 29 08:06:58 crc kubenswrapper[4795]: I1129 08:06:58.485733 4795 scope.go:117] "RemoveContainer" containerID="3ac2eef8825565d3462339e092bf148f6de3112fdcc91330655a21607cbc1a85" Nov 29 08:06:58 crc kubenswrapper[4795]: E1129 08:06:58.485965 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6bcb7f45-v58x5_openstack(3d1c661f-8246-4ec2-96d0-13feaaaaf35a)\"" pod="openstack/heat-cfnapi-6bcb7f45-v58x5" 
podUID="3d1c661f-8246-4ec2-96d0-13feaaaaf35a" Nov 29 08:06:58 crc kubenswrapper[4795]: I1129 08:06:58.486345 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6bcb7f45-v58x5" Nov 29 08:06:58 crc kubenswrapper[4795]: I1129 08:06:58.808842 4795 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-565648db59-vzdgj" Nov 29 08:06:58 crc kubenswrapper[4795]: I1129 08:06:58.809260 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-565648db59-vzdgj" Nov 29 08:06:58 crc kubenswrapper[4795]: I1129 08:06:58.810146 4795 scope.go:117] "RemoveContainer" containerID="c63523c1cc4a3dba730dddd1444495e48e68efdc4c913f5a9c59b650c67c6c78" Nov 29 08:06:58 crc kubenswrapper[4795]: E1129 08:06:58.810389 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-565648db59-vzdgj_openstack(5a930422-6c40-4a1a-963d-126d16f160a2)\"" pod="openstack/heat-api-565648db59-vzdgj" podUID="5a930422-6c40-4a1a-963d-126d16f160a2" Nov 29 08:06:58 crc kubenswrapper[4795]: I1129 08:06:58.970109 4795 scope.go:117] "RemoveContainer" containerID="3ac2eef8825565d3462339e092bf148f6de3112fdcc91330655a21607cbc1a85" Nov 29 08:06:58 crc kubenswrapper[4795]: E1129 08:06:58.970363 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6bcb7f45-v58x5_openstack(3d1c661f-8246-4ec2-96d0-13feaaaaf35a)\"" pod="openstack/heat-cfnapi-6bcb7f45-v58x5" podUID="3d1c661f-8246-4ec2-96d0-13feaaaaf35a" Nov 29 08:06:59 crc kubenswrapper[4795]: I1129 08:06:59.031213 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 08:06:59 crc kubenswrapper[4795]: I1129 08:06:59.031539 
4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4cc813d1-4929-48c8-9268-49a4b96562ac" containerName="glance-log" containerID="cri-o://63e8511f39602027652081d731fc79d74ebb125214e08f7fb390d8a7d03be9a6" gracePeriod=30 Nov 29 08:06:59 crc kubenswrapper[4795]: I1129 08:06:59.031682 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4cc813d1-4929-48c8-9268-49a4b96562ac" containerName="glance-httpd" containerID="cri-o://6baeb754dc6c6709913872a997c4f4e688976a0e25ef7759f36ad0f0f3284123" gracePeriod=30 Nov 29 08:06:59 crc kubenswrapper[4795]: I1129 08:06:59.982179 4795 generic.go:334] "Generic (PLEG): container finished" podID="4cc813d1-4929-48c8-9268-49a4b96562ac" containerID="63e8511f39602027652081d731fc79d74ebb125214e08f7fb390d8a7d03be9a6" exitCode=143 Nov 29 08:06:59 crc kubenswrapper[4795]: I1129 08:06:59.982253 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4cc813d1-4929-48c8-9268-49a4b96562ac","Type":"ContainerDied","Data":"63e8511f39602027652081d731fc79d74ebb125214e08f7fb390d8a7d03be9a6"} Nov 29 08:07:01 crc kubenswrapper[4795]: I1129 08:07:01.068642 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 08:07:01 crc kubenswrapper[4795]: I1129 08:07:01.069386 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314" containerName="glance-log" containerID="cri-o://3801e5cabb1fda13b76b19bc936d3b2b47d646d50dfb30a2b45e6f75f0068456" gracePeriod=30 Nov 29 08:07:01 crc kubenswrapper[4795]: I1129 08:07:01.069514 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314" 
containerName="glance-httpd" containerID="cri-o://cd7ac6c1c1e2a29845feefcb848633a58a7d9e2ccf00e46fad7dc38e0e2bd944" gracePeriod=30 Nov 29 08:07:01 crc kubenswrapper[4795]: I1129 08:07:01.270007 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-fd5dc85fb-fss4q" Nov 29 08:07:01 crc kubenswrapper[4795]: I1129 08:07:01.879071 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-s4hqz"] Nov 29 08:07:01 crc kubenswrapper[4795]: E1129 08:07:01.880060 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b704537-db2a-453a-b977-b2a3c31cf61b" containerName="heat-cfnapi" Nov 29 08:07:01 crc kubenswrapper[4795]: I1129 08:07:01.880088 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b704537-db2a-453a-b977-b2a3c31cf61b" containerName="heat-cfnapi" Nov 29 08:07:01 crc kubenswrapper[4795]: E1129 08:07:01.883529 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1aa67cd3-59ee-4a03-bc97-fc1fa5729e40" containerName="init" Nov 29 08:07:01 crc kubenswrapper[4795]: I1129 08:07:01.883582 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="1aa67cd3-59ee-4a03-bc97-fc1fa5729e40" containerName="init" Nov 29 08:07:01 crc kubenswrapper[4795]: E1129 08:07:01.883634 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1aa67cd3-59ee-4a03-bc97-fc1fa5729e40" containerName="dnsmasq-dns" Nov 29 08:07:01 crc kubenswrapper[4795]: I1129 08:07:01.883647 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="1aa67cd3-59ee-4a03-bc97-fc1fa5729e40" containerName="dnsmasq-dns" Nov 29 08:07:01 crc kubenswrapper[4795]: E1129 08:07:01.883666 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="323b7b10-963c-41ff-b3ce-69dd3374f746" containerName="heat-api" Nov 29 08:07:01 crc kubenswrapper[4795]: I1129 08:07:01.883675 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="323b7b10-963c-41ff-b3ce-69dd3374f746" 
containerName="heat-api" Nov 29 08:07:01 crc kubenswrapper[4795]: I1129 08:07:01.884133 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="1aa67cd3-59ee-4a03-bc97-fc1fa5729e40" containerName="dnsmasq-dns" Nov 29 08:07:01 crc kubenswrapper[4795]: I1129 08:07:01.884169 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b704537-db2a-453a-b977-b2a3c31cf61b" containerName="heat-cfnapi" Nov 29 08:07:01 crc kubenswrapper[4795]: I1129 08:07:01.884188 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="323b7b10-963c-41ff-b3ce-69dd3374f746" containerName="heat-api" Nov 29 08:07:01 crc kubenswrapper[4795]: I1129 08:07:01.885262 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-s4hqz" Nov 29 08:07:01 crc kubenswrapper[4795]: I1129 08:07:01.901747 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-s4hqz"] Nov 29 08:07:01 crc kubenswrapper[4795]: I1129 08:07:01.970679 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-k678c"] Nov 29 08:07:01 crc kubenswrapper[4795]: I1129 08:07:01.972638 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-k678c" Nov 29 08:07:01 crc kubenswrapper[4795]: I1129 08:07:01.988074 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-k678c"] Nov 29 08:07:01 crc kubenswrapper[4795]: I1129 08:07:01.993716 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-67fb9f5ff-86s7p" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.012546 4795 generic.go:334] "Generic (PLEG): container finished" podID="eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314" containerID="3801e5cabb1fda13b76b19bc936d3b2b47d646d50dfb30a2b45e6f75f0068456" exitCode=143 Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.012633 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314","Type":"ContainerDied","Data":"3801e5cabb1fda13b76b19bc936d3b2b47d646d50dfb30a2b45e6f75f0068456"} Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.041877 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxns2\" (UniqueName: \"kubernetes.io/projected/2ec0c903-aae0-4c34-959f-15ec09782b09-kube-api-access-vxns2\") pod \"nova-api-db-create-s4hqz\" (UID: \"2ec0c903-aae0-4c34-959f-15ec09782b09\") " pod="openstack/nova-api-db-create-s4hqz" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.042012 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ec0c903-aae0-4c34-959f-15ec09782b09-operator-scripts\") pod \"nova-api-db-create-s4hqz\" (UID: \"2ec0c903-aae0-4c34-959f-15ec09782b09\") " pod="openstack/nova-api-db-create-s4hqz" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.051236 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-b49cd6b59-h4cg5" Nov 29 08:07:02 crc 
kubenswrapper[4795]: I1129 08:07:02.100561 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6bcb7f45-v58x5"] Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.153211 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e56ba08f-a9c8-49ae-b4e8-d9cd70d63fb5-operator-scripts\") pod \"nova-cell0-db-create-k678c\" (UID: \"e56ba08f-a9c8-49ae-b4e8-d9cd70d63fb5\") " pod="openstack/nova-cell0-db-create-k678c" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.153300 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ec0c903-aae0-4c34-959f-15ec09782b09-operator-scripts\") pod \"nova-api-db-create-s4hqz\" (UID: \"2ec0c903-aae0-4c34-959f-15ec09782b09\") " pod="openstack/nova-api-db-create-s4hqz" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.153461 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxns2\" (UniqueName: \"kubernetes.io/projected/2ec0c903-aae0-4c34-959f-15ec09782b09-kube-api-access-vxns2\") pod \"nova-api-db-create-s4hqz\" (UID: \"2ec0c903-aae0-4c34-959f-15ec09782b09\") " pod="openstack/nova-api-db-create-s4hqz" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.153503 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8bhj\" (UniqueName: \"kubernetes.io/projected/e56ba08f-a9c8-49ae-b4e8-d9cd70d63fb5-kube-api-access-k8bhj\") pod \"nova-cell0-db-create-k678c\" (UID: \"e56ba08f-a9c8-49ae-b4e8-d9cd70d63fb5\") " pod="openstack/nova-cell0-db-create-k678c" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.155221 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2ec0c903-aae0-4c34-959f-15ec09782b09-operator-scripts\") pod \"nova-api-db-create-s4hqz\" (UID: \"2ec0c903-aae0-4c34-959f-15ec09782b09\") " pod="openstack/nova-api-db-create-s4hqz" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.187984 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-9cfa-account-create-update-dm475"] Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.189761 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9cfa-account-create-update-dm475" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.194305 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.217713 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxns2\" (UniqueName: \"kubernetes.io/projected/2ec0c903-aae0-4c34-959f-15ec09782b09-kube-api-access-vxns2\") pod \"nova-api-db-create-s4hqz\" (UID: \"2ec0c903-aae0-4c34-959f-15ec09782b09\") " pod="openstack/nova-api-db-create-s4hqz" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.237064 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9cfa-account-create-update-dm475"] Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.257348 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8bhj\" (UniqueName: \"kubernetes.io/projected/e56ba08f-a9c8-49ae-b4e8-d9cd70d63fb5-kube-api-access-k8bhj\") pod \"nova-cell0-db-create-k678c\" (UID: \"e56ba08f-a9c8-49ae-b4e8-d9cd70d63fb5\") " pod="openstack/nova-cell0-db-create-k678c" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.257585 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e56ba08f-a9c8-49ae-b4e8-d9cd70d63fb5-operator-scripts\") pod \"nova-cell0-db-create-k678c\" 
(UID: \"e56ba08f-a9c8-49ae-b4e8-d9cd70d63fb5\") " pod="openstack/nova-cell0-db-create-k678c" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.258320 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e56ba08f-a9c8-49ae-b4e8-d9cd70d63fb5-operator-scripts\") pod \"nova-cell0-db-create-k678c\" (UID: \"e56ba08f-a9c8-49ae-b4e8-d9cd70d63fb5\") " pod="openstack/nova-cell0-db-create-k678c" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.334454 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8bhj\" (UniqueName: \"kubernetes.io/projected/e56ba08f-a9c8-49ae-b4e8-d9cd70d63fb5-kube-api-access-k8bhj\") pod \"nova-cell0-db-create-k678c\" (UID: \"e56ba08f-a9c8-49ae-b4e8-d9cd70d63fb5\") " pod="openstack/nova-cell0-db-create-k678c" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.361855 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdjtj\" (UniqueName: \"kubernetes.io/projected/9f24d558-d6ea-42f4-9147-1eca4481dcff-kube-api-access-kdjtj\") pod \"nova-api-9cfa-account-create-update-dm475\" (UID: \"9f24d558-d6ea-42f4-9147-1eca4481dcff\") " pod="openstack/nova-api-9cfa-account-create-update-dm475" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.361953 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f24d558-d6ea-42f4-9147-1eca4481dcff-operator-scripts\") pod \"nova-api-9cfa-account-create-update-dm475\" (UID: \"9f24d558-d6ea-42f4-9147-1eca4481dcff\") " pod="openstack/nova-api-9cfa-account-create-update-dm475" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.362516 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-565648db59-vzdgj"] Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.441885 4795 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-rgwdg"] Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.462888 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-rgwdg"] Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.463062 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-rgwdg" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.468472 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f24d558-d6ea-42f4-9147-1eca4481dcff-operator-scripts\") pod \"nova-api-9cfa-account-create-update-dm475\" (UID: \"9f24d558-d6ea-42f4-9147-1eca4481dcff\") " pod="openstack/nova-api-9cfa-account-create-update-dm475" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.468601 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkv67\" (UniqueName: \"kubernetes.io/projected/b93bb162-1cd1-4a20-888d-dd92a1affbd2-kube-api-access-fkv67\") pod \"nova-cell1-db-create-rgwdg\" (UID: \"b93bb162-1cd1-4a20-888d-dd92a1affbd2\") " pod="openstack/nova-cell1-db-create-rgwdg" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.468722 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdjtj\" (UniqueName: \"kubernetes.io/projected/9f24d558-d6ea-42f4-9147-1eca4481dcff-kube-api-access-kdjtj\") pod \"nova-api-9cfa-account-create-update-dm475\" (UID: \"9f24d558-d6ea-42f4-9147-1eca4481dcff\") " pod="openstack/nova-api-9cfa-account-create-update-dm475" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.468760 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b93bb162-1cd1-4a20-888d-dd92a1affbd2-operator-scripts\") pod \"nova-cell1-db-create-rgwdg\" 
(UID: \"b93bb162-1cd1-4a20-888d-dd92a1affbd2\") " pod="openstack/nova-cell1-db-create-rgwdg" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.469640 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f24d558-d6ea-42f4-9147-1eca4481dcff-operator-scripts\") pod \"nova-api-9cfa-account-create-update-dm475\" (UID: \"9f24d558-d6ea-42f4-9147-1eca4481dcff\") " pod="openstack/nova-api-9cfa-account-create-update-dm475" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.483064 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-8ada-account-create-update-4lg24"] Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.485267 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8ada-account-create-update-4lg24" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.487971 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.506289 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-s4hqz" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.551266 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdjtj\" (UniqueName: \"kubernetes.io/projected/9f24d558-d6ea-42f4-9147-1eca4481dcff-kube-api-access-kdjtj\") pod \"nova-api-9cfa-account-create-update-dm475\" (UID: \"9f24d558-d6ea-42f4-9147-1eca4481dcff\") " pod="openstack/nova-api-9cfa-account-create-update-dm475" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.579769 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-8ada-account-create-update-4lg24"] Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.582301 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/430ba7a8-1572-4074-8597-8a94c3c2c8a0-operator-scripts\") pod \"nova-cell0-8ada-account-create-update-4lg24\" (UID: \"430ba7a8-1572-4074-8597-8a94c3c2c8a0\") " pod="openstack/nova-cell0-8ada-account-create-update-4lg24" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.582515 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b93bb162-1cd1-4a20-888d-dd92a1affbd2-operator-scripts\") pod \"nova-cell1-db-create-rgwdg\" (UID: \"b93bb162-1cd1-4a20-888d-dd92a1affbd2\") " pod="openstack/nova-cell1-db-create-rgwdg" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.582636 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhcnm\" (UniqueName: \"kubernetes.io/projected/430ba7a8-1572-4074-8597-8a94c3c2c8a0-kube-api-access-lhcnm\") pod \"nova-cell0-8ada-account-create-update-4lg24\" (UID: \"430ba7a8-1572-4074-8597-8a94c3c2c8a0\") " pod="openstack/nova-cell0-8ada-account-create-update-4lg24" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 
08:07:02.582771 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkv67\" (UniqueName: \"kubernetes.io/projected/b93bb162-1cd1-4a20-888d-dd92a1affbd2-kube-api-access-fkv67\") pod \"nova-cell1-db-create-rgwdg\" (UID: \"b93bb162-1cd1-4a20-888d-dd92a1affbd2\") " pod="openstack/nova-cell1-db-create-rgwdg" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.583820 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b93bb162-1cd1-4a20-888d-dd92a1affbd2-operator-scripts\") pod \"nova-cell1-db-create-rgwdg\" (UID: \"b93bb162-1cd1-4a20-888d-dd92a1affbd2\") " pod="openstack/nova-cell1-db-create-rgwdg" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.603241 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-k678c" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.614794 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-15a1-account-create-update-7f9hk"] Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.616410 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-15a1-account-create-update-7f9hk" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.630948 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkv67\" (UniqueName: \"kubernetes.io/projected/b93bb162-1cd1-4a20-888d-dd92a1affbd2-kube-api-access-fkv67\") pod \"nova-cell1-db-create-rgwdg\" (UID: \"b93bb162-1cd1-4a20-888d-dd92a1affbd2\") " pod="openstack/nova-cell1-db-create-rgwdg" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.648924 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.660840 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-15a1-account-create-update-7f9hk"] Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.661584 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9cfa-account-create-update-dm475" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.780132 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhcnm\" (UniqueName: \"kubernetes.io/projected/430ba7a8-1572-4074-8597-8a94c3c2c8a0-kube-api-access-lhcnm\") pod \"nova-cell0-8ada-account-create-update-4lg24\" (UID: \"430ba7a8-1572-4074-8597-8a94c3c2c8a0\") " pod="openstack/nova-cell0-8ada-account-create-update-4lg24" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.780469 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/430ba7a8-1572-4074-8597-8a94c3c2c8a0-operator-scripts\") pod \"nova-cell0-8ada-account-create-update-4lg24\" (UID: \"430ba7a8-1572-4074-8597-8a94c3c2c8a0\") " pod="openstack/nova-cell0-8ada-account-create-update-4lg24" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.792486 4795 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/430ba7a8-1572-4074-8597-8a94c3c2c8a0-operator-scripts\") pod \"nova-cell0-8ada-account-create-update-4lg24\" (UID: \"430ba7a8-1572-4074-8597-8a94c3c2c8a0\") " pod="openstack/nova-cell0-8ada-account-create-update-4lg24" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.807511 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-rgwdg" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.825359 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhcnm\" (UniqueName: \"kubernetes.io/projected/430ba7a8-1572-4074-8597-8a94c3c2c8a0-kube-api-access-lhcnm\") pod \"nova-cell0-8ada-account-create-update-4lg24\" (UID: \"430ba7a8-1572-4074-8597-8a94c3c2c8a0\") " pod="openstack/nova-cell0-8ada-account-create-update-4lg24" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.861747 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-8ada-account-create-update-4lg24" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.893464 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgnn9\" (UniqueName: \"kubernetes.io/projected/43275a86-fba2-41f6-b98c-c57c65e9c0c0-kube-api-access-sgnn9\") pod \"nova-cell1-15a1-account-create-update-7f9hk\" (UID: \"43275a86-fba2-41f6-b98c-c57c65e9c0c0\") " pod="openstack/nova-cell1-15a1-account-create-update-7f9hk" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.893616 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43275a86-fba2-41f6-b98c-c57c65e9c0c0-operator-scripts\") pod \"nova-cell1-15a1-account-create-update-7f9hk\" (UID: \"43275a86-fba2-41f6-b98c-c57c65e9c0c0\") " pod="openstack/nova-cell1-15a1-account-create-update-7f9hk" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.995216 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgnn9\" (UniqueName: \"kubernetes.io/projected/43275a86-fba2-41f6-b98c-c57c65e9c0c0-kube-api-access-sgnn9\") pod \"nova-cell1-15a1-account-create-update-7f9hk\" (UID: \"43275a86-fba2-41f6-b98c-c57c65e9c0c0\") " pod="openstack/nova-cell1-15a1-account-create-update-7f9hk" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.995905 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43275a86-fba2-41f6-b98c-c57c65e9c0c0-operator-scripts\") pod \"nova-cell1-15a1-account-create-update-7f9hk\" (UID: \"43275a86-fba2-41f6-b98c-c57c65e9c0c0\") " pod="openstack/nova-cell1-15a1-account-create-update-7f9hk" Nov 29 08:07:02 crc kubenswrapper[4795]: I1129 08:07:02.996885 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/43275a86-fba2-41f6-b98c-c57c65e9c0c0-operator-scripts\") pod \"nova-cell1-15a1-account-create-update-7f9hk\" (UID: \"43275a86-fba2-41f6-b98c-c57c65e9c0c0\") " pod="openstack/nova-cell1-15a1-account-create-update-7f9hk" Nov 29 08:07:03 crc kubenswrapper[4795]: I1129 08:07:03.017075 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6bcb7f45-v58x5" Nov 29 08:07:03 crc kubenswrapper[4795]: I1129 08:07:03.019073 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgnn9\" (UniqueName: \"kubernetes.io/projected/43275a86-fba2-41f6-b98c-c57c65e9c0c0-kube-api-access-sgnn9\") pod \"nova-cell1-15a1-account-create-update-7f9hk\" (UID: \"43275a86-fba2-41f6-b98c-c57c65e9c0c0\") " pod="openstack/nova-cell1-15a1-account-create-update-7f9hk" Nov 29 08:07:03 crc kubenswrapper[4795]: I1129 08:07:03.059651 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6bcb7f45-v58x5" event={"ID":"3d1c661f-8246-4ec2-96d0-13feaaaaf35a","Type":"ContainerDied","Data":"5773180c80658601463424594cbaf86575ccec031b2814fb48633200f910e1fd"} Nov 29 08:07:03 crc kubenswrapper[4795]: I1129 08:07:03.059719 4795 scope.go:117] "RemoveContainer" containerID="3ac2eef8825565d3462339e092bf148f6de3112fdcc91330655a21607cbc1a85" Nov 29 08:07:03 crc kubenswrapper[4795]: I1129 08:07:03.059828 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6bcb7f45-v58x5" Nov 29 08:07:03 crc kubenswrapper[4795]: I1129 08:07:03.079424 4795 generic.go:334] "Generic (PLEG): container finished" podID="4cc813d1-4929-48c8-9268-49a4b96562ac" containerID="6baeb754dc6c6709913872a997c4f4e688976a0e25ef7759f36ad0f0f3284123" exitCode=0 Nov 29 08:07:03 crc kubenswrapper[4795]: I1129 08:07:03.079746 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4cc813d1-4929-48c8-9268-49a4b96562ac","Type":"ContainerDied","Data":"6baeb754dc6c6709913872a997c4f4e688976a0e25ef7759f36ad0f0f3284123"} Nov 29 08:07:03 crc kubenswrapper[4795]: I1129 08:07:03.114992 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d1c661f-8246-4ec2-96d0-13feaaaaf35a-combined-ca-bundle\") pod \"3d1c661f-8246-4ec2-96d0-13feaaaaf35a\" (UID: \"3d1c661f-8246-4ec2-96d0-13feaaaaf35a\") " Nov 29 08:07:03 crc kubenswrapper[4795]: I1129 08:07:03.115139 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k64sf\" (UniqueName: \"kubernetes.io/projected/3d1c661f-8246-4ec2-96d0-13feaaaaf35a-kube-api-access-k64sf\") pod \"3d1c661f-8246-4ec2-96d0-13feaaaaf35a\" (UID: \"3d1c661f-8246-4ec2-96d0-13feaaaaf35a\") " Nov 29 08:07:03 crc kubenswrapper[4795]: I1129 08:07:03.115186 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d1c661f-8246-4ec2-96d0-13feaaaaf35a-config-data-custom\") pod \"3d1c661f-8246-4ec2-96d0-13feaaaaf35a\" (UID: \"3d1c661f-8246-4ec2-96d0-13feaaaaf35a\") " Nov 29 08:07:03 crc kubenswrapper[4795]: I1129 08:07:03.115251 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d1c661f-8246-4ec2-96d0-13feaaaaf35a-config-data\") pod 
\"3d1c661f-8246-4ec2-96d0-13feaaaaf35a\" (UID: \"3d1c661f-8246-4ec2-96d0-13feaaaaf35a\") " Nov 29 08:07:03 crc kubenswrapper[4795]: I1129 08:07:03.122501 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d1c661f-8246-4ec2-96d0-13feaaaaf35a-kube-api-access-k64sf" (OuterVolumeSpecName: "kube-api-access-k64sf") pod "3d1c661f-8246-4ec2-96d0-13feaaaaf35a" (UID: "3d1c661f-8246-4ec2-96d0-13feaaaaf35a"). InnerVolumeSpecName "kube-api-access-k64sf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:07:03 crc kubenswrapper[4795]: I1129 08:07:03.123047 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d1c661f-8246-4ec2-96d0-13feaaaaf35a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3d1c661f-8246-4ec2-96d0-13feaaaaf35a" (UID: "3d1c661f-8246-4ec2-96d0-13feaaaaf35a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:07:03 crc kubenswrapper[4795]: I1129 08:07:03.195163 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-15a1-account-create-update-7f9hk" Nov 29 08:07:03 crc kubenswrapper[4795]: I1129 08:07:03.204375 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d1c661f-8246-4ec2-96d0-13feaaaaf35a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d1c661f-8246-4ec2-96d0-13feaaaaf35a" (UID: "3d1c661f-8246-4ec2-96d0-13feaaaaf35a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:07:03 crc kubenswrapper[4795]: I1129 08:07:03.237211 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d1c661f-8246-4ec2-96d0-13feaaaaf35a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:03 crc kubenswrapper[4795]: I1129 08:07:03.237259 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k64sf\" (UniqueName: \"kubernetes.io/projected/3d1c661f-8246-4ec2-96d0-13feaaaaf35a-kube-api-access-k64sf\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:03 crc kubenswrapper[4795]: I1129 08:07:03.237271 4795 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d1c661f-8246-4ec2-96d0-13feaaaaf35a-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:03 crc kubenswrapper[4795]: I1129 08:07:03.284038 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d1c661f-8246-4ec2-96d0-13feaaaaf35a-config-data" (OuterVolumeSpecName: "config-data") pod "3d1c661f-8246-4ec2-96d0-13feaaaaf35a" (UID: "3d1c661f-8246-4ec2-96d0-13feaaaaf35a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:07:03 crc kubenswrapper[4795]: I1129 08:07:03.338993 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d1c661f-8246-4ec2-96d0-13feaaaaf35a-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:03 crc kubenswrapper[4795]: I1129 08:07:03.584509 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-565648db59-vzdgj" Nov 29 08:07:03 crc kubenswrapper[4795]: I1129 08:07:03.593498 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6bcb7f45-v58x5"] Nov 29 08:07:03 crc kubenswrapper[4795]: I1129 08:07:03.627655 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-6bcb7f45-v58x5"] Nov 29 08:07:03 crc kubenswrapper[4795]: I1129 08:07:03.750499 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a930422-6c40-4a1a-963d-126d16f160a2-config-data-custom\") pod \"5a930422-6c40-4a1a-963d-126d16f160a2\" (UID: \"5a930422-6c40-4a1a-963d-126d16f160a2\") " Nov 29 08:07:03 crc kubenswrapper[4795]: I1129 08:07:03.751168 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a930422-6c40-4a1a-963d-126d16f160a2-config-data\") pod \"5a930422-6c40-4a1a-963d-126d16f160a2\" (UID: \"5a930422-6c40-4a1a-963d-126d16f160a2\") " Nov 29 08:07:03 crc kubenswrapper[4795]: I1129 08:07:03.751262 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xmxs\" (UniqueName: \"kubernetes.io/projected/5a930422-6c40-4a1a-963d-126d16f160a2-kube-api-access-7xmxs\") pod \"5a930422-6c40-4a1a-963d-126d16f160a2\" (UID: \"5a930422-6c40-4a1a-963d-126d16f160a2\") " Nov 29 08:07:03 crc kubenswrapper[4795]: I1129 08:07:03.751415 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a930422-6c40-4a1a-963d-126d16f160a2-combined-ca-bundle\") pod \"5a930422-6c40-4a1a-963d-126d16f160a2\" (UID: \"5a930422-6c40-4a1a-963d-126d16f160a2\") " Nov 29 08:07:03 crc kubenswrapper[4795]: I1129 08:07:03.763755 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/5a930422-6c40-4a1a-963d-126d16f160a2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5a930422-6c40-4a1a-963d-126d16f160a2" (UID: "5a930422-6c40-4a1a-963d-126d16f160a2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:07:03 crc kubenswrapper[4795]: I1129 08:07:03.774724 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a930422-6c40-4a1a-963d-126d16f160a2-kube-api-access-7xmxs" (OuterVolumeSpecName: "kube-api-access-7xmxs") pod "5a930422-6c40-4a1a-963d-126d16f160a2" (UID: "5a930422-6c40-4a1a-963d-126d16f160a2"). InnerVolumeSpecName "kube-api-access-7xmxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:07:03 crc kubenswrapper[4795]: I1129 08:07:03.838475 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a930422-6c40-4a1a-963d-126d16f160a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a930422-6c40-4a1a-963d-126d16f160a2" (UID: "5a930422-6c40-4a1a-963d-126d16f160a2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:07:03 crc kubenswrapper[4795]: I1129 08:07:03.865277 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a930422-6c40-4a1a-963d-126d16f160a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:03 crc kubenswrapper[4795]: I1129 08:07:03.865320 4795 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a930422-6c40-4a1a-963d-126d16f160a2-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:03 crc kubenswrapper[4795]: I1129 08:07:03.865335 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xmxs\" (UniqueName: \"kubernetes.io/projected/5a930422-6c40-4a1a-963d-126d16f160a2-kube-api-access-7xmxs\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:03 crc kubenswrapper[4795]: I1129 08:07:03.893930 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a930422-6c40-4a1a-963d-126d16f160a2-config-data" (OuterVolumeSpecName: "config-data") pod "5a930422-6c40-4a1a-963d-126d16f160a2" (UID: "5a930422-6c40-4a1a-963d-126d16f160a2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:07:03 crc kubenswrapper[4795]: I1129 08:07:03.915934 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 08:07:03 crc kubenswrapper[4795]: I1129 08:07:03.971779 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a930422-6c40-4a1a-963d-126d16f160a2-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.036507 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-k678c"] Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.056700 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9cfa-account-create-update-dm475"] Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.227001 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cc813d1-4929-48c8-9268-49a4b96562ac-public-tls-certs\") pod \"4cc813d1-4929-48c8-9268-49a4b96562ac\" (UID: \"4cc813d1-4929-48c8-9268-49a4b96562ac\") " Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.229036 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cc813d1-4929-48c8-9268-49a4b96562ac-logs\") pod \"4cc813d1-4929-48c8-9268-49a4b96562ac\" (UID: \"4cc813d1-4929-48c8-9268-49a4b96562ac\") " Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.229229 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9b7k\" (UniqueName: \"kubernetes.io/projected/4cc813d1-4929-48c8-9268-49a4b96562ac-kube-api-access-x9b7k\") pod \"4cc813d1-4929-48c8-9268-49a4b96562ac\" (UID: \"4cc813d1-4929-48c8-9268-49a4b96562ac\") " Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.229299 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"4cc813d1-4929-48c8-9268-49a4b96562ac\" (UID: 
\"4cc813d1-4929-48c8-9268-49a4b96562ac\") " Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.229520 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4cc813d1-4929-48c8-9268-49a4b96562ac-httpd-run\") pod \"4cc813d1-4929-48c8-9268-49a4b96562ac\" (UID: \"4cc813d1-4929-48c8-9268-49a4b96562ac\") " Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.229663 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cc813d1-4929-48c8-9268-49a4b96562ac-scripts\") pod \"4cc813d1-4929-48c8-9268-49a4b96562ac\" (UID: \"4cc813d1-4929-48c8-9268-49a4b96562ac\") " Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.229697 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cc813d1-4929-48c8-9268-49a4b96562ac-combined-ca-bundle\") pod \"4cc813d1-4929-48c8-9268-49a4b96562ac\" (UID: \"4cc813d1-4929-48c8-9268-49a4b96562ac\") " Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.229828 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cc813d1-4929-48c8-9268-49a4b96562ac-config-data\") pod \"4cc813d1-4929-48c8-9268-49a4b96562ac\" (UID: \"4cc813d1-4929-48c8-9268-49a4b96562ac\") " Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.232524 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cc813d1-4929-48c8-9268-49a4b96562ac-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4cc813d1-4929-48c8-9268-49a4b96562ac" (UID: "4cc813d1-4929-48c8-9268-49a4b96562ac"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.233908 4795 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4cc813d1-4929-48c8-9268-49a4b96562ac-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.241474 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cc813d1-4929-48c8-9268-49a4b96562ac-logs" (OuterVolumeSpecName: "logs") pod "4cc813d1-4929-48c8-9268-49a4b96562ac" (UID: "4cc813d1-4929-48c8-9268-49a4b96562ac"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.243803 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "4cc813d1-4929-48c8-9268-49a4b96562ac" (UID: "4cc813d1-4929-48c8-9268-49a4b96562ac"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.245455 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cc813d1-4929-48c8-9268-49a4b96562ac-scripts" (OuterVolumeSpecName: "scripts") pod "4cc813d1-4929-48c8-9268-49a4b96562ac" (UID: "4cc813d1-4929-48c8-9268-49a4b96562ac"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.263951 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-s4hqz"] Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.267886 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cc813d1-4929-48c8-9268-49a4b96562ac-kube-api-access-x9b7k" (OuterVolumeSpecName: "kube-api-access-x9b7k") pod "4cc813d1-4929-48c8-9268-49a4b96562ac" (UID: "4cc813d1-4929-48c8-9268-49a4b96562ac"). InnerVolumeSpecName "kube-api-access-x9b7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.336526 4795 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cc813d1-4929-48c8-9268-49a4b96562ac-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.336558 4795 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cc813d1-4929-48c8-9268-49a4b96562ac-logs\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.336567 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9b7k\" (UniqueName: \"kubernetes.io/projected/4cc813d1-4929-48c8-9268-49a4b96562ac-kube-api-access-x9b7k\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.336612 4795 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.352441 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d1c661f-8246-4ec2-96d0-13feaaaaf35a" path="/var/lib/kubelet/pods/3d1c661f-8246-4ec2-96d0-13feaaaaf35a/volumes" Nov 29 08:07:04 crc 
kubenswrapper[4795]: I1129 08:07:04.360318 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cc813d1-4929-48c8-9268-49a4b96562ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4cc813d1-4929-48c8-9268-49a4b96562ac" (UID: "4cc813d1-4929-48c8-9268-49a4b96562ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.369006 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.375628 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cc813d1-4929-48c8-9268-49a4b96562ac-config-data" (OuterVolumeSpecName: "config-data") pod "4cc813d1-4929-48c8-9268-49a4b96562ac" (UID: "4cc813d1-4929-48c8-9268-49a4b96562ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.375988 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-565648db59-vzdgj"
Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.393739 4795 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc"
Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.416747 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cc813d1-4929-48c8-9268-49a4b96562ac-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4cc813d1-4929-48c8-9268-49a4b96562ac" (UID: "4cc813d1-4929-48c8-9268-49a4b96562ac"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.440002 4795 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\""
Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.440087 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cc813d1-4929-48c8-9268-49a4b96562ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.440099 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cc813d1-4929-48c8-9268-49a4b96562ac-config-data\") on node \"crc\" DevicePath \"\""
Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.440108 4795 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cc813d1-4929-48c8-9268-49a4b96562ac-public-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.561072 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-k678c" event={"ID":"e56ba08f-a9c8-49ae-b4e8-d9cd70d63fb5","Type":"ContainerStarted","Data":"1026fad69d52376441b5832a7140b5e59196ecfb740ea773e5e5c909d17621c9"}
Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.561399 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4cc813d1-4929-48c8-9268-49a4b96562ac","Type":"ContainerDied","Data":"abcde8702966f4b71b0dab16893cd2483fabce33ecffd8277e2992621cef7a9e"}
Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.561422 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-565648db59-vzdgj" event={"ID":"5a930422-6c40-4a1a-963d-126d16f160a2","Type":"ContainerDied","Data":"566adfd29a7a30293fec697f9bb92516b1a5c5e882b0b0468b2d20800692bb19"}
Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.561435 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9cfa-account-create-update-dm475" event={"ID":"9f24d558-d6ea-42f4-9147-1eca4481dcff","Type":"ContainerStarted","Data":"b8db3af159c87d65d0257dd81177beb47d6217fbc9dbc64335aa703fb8a9adfc"}
Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.561451 4795 scope.go:117] "RemoveContainer" containerID="6baeb754dc6c6709913872a997c4f4e688976a0e25ef7759f36ad0f0f3284123"
Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.616815 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-565648db59-vzdgj"]
Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.629022 4795 scope.go:117] "RemoveContainer" containerID="63e8511f39602027652081d731fc79d74ebb125214e08f7fb390d8a7d03be9a6"
Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.661651 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-565648db59-vzdgj"]
Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.669974 4795 scope.go:117] "RemoveContainer" containerID="c63523c1cc4a3dba730dddd1444495e48e68efdc4c913f5a9c59b650c67c6c78"
Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.727846 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-8ada-account-create-update-4lg24"]
Nov 29 08:07:04 crc kubenswrapper[4795]: W1129 08:07:04.745217 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb93bb162_1cd1_4a20_888d_dd92a1affbd2.slice/crio-2227d50a993b1190aa918ebbf2f7513aaa6aa8ef195d7f093201b534ccfc4b52 WatchSource:0}: Error finding container 2227d50a993b1190aa918ebbf2f7513aaa6aa8ef195d7f093201b534ccfc4b52: Status 404 returned error can't find the container with id 2227d50a993b1190aa918ebbf2f7513aaa6aa8ef195d7f093201b534ccfc4b52
Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.757236 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-15a1-account-create-update-7f9hk"]
Nov 29 08:07:04 crc kubenswrapper[4795]: W1129 08:07:04.769778 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43275a86_fba2_41f6_b98c_c57c65e9c0c0.slice/crio-c8b6ed860088046467d99769e10b5a9492f578fc194abf2209cc32ad98fc6635 WatchSource:0}: Error finding container c8b6ed860088046467d99769e10b5a9492f578fc194abf2209cc32ad98fc6635: Status 404 returned error can't find the container with id c8b6ed860088046467d99769e10b5a9492f578fc194abf2209cc32ad98fc6635
Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.780907 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-rgwdg"]
Nov 29 08:07:04 crc kubenswrapper[4795]: I1129 08:07:04.972423 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.020483 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.089059 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 29 08:07:05 crc kubenswrapper[4795]: E1129 08:07:05.089666 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cc813d1-4929-48c8-9268-49a4b96562ac" containerName="glance-log"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.089684 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cc813d1-4929-48c8-9268-49a4b96562ac" containerName="glance-log"
Nov 29 08:07:05 crc kubenswrapper[4795]: E1129 08:07:05.089712 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a930422-6c40-4a1a-963d-126d16f160a2" containerName="heat-api"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.089719 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a930422-6c40-4a1a-963d-126d16f160a2" containerName="heat-api"
Nov 29 08:07:05 crc kubenswrapper[4795]: E1129 08:07:05.089737 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a930422-6c40-4a1a-963d-126d16f160a2" containerName="heat-api"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.089743 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a930422-6c40-4a1a-963d-126d16f160a2" containerName="heat-api"
Nov 29 08:07:05 crc kubenswrapper[4795]: E1129 08:07:05.089760 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d1c661f-8246-4ec2-96d0-13feaaaaf35a" containerName="heat-cfnapi"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.089767 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d1c661f-8246-4ec2-96d0-13feaaaaf35a" containerName="heat-cfnapi"
Nov 29 08:07:05 crc kubenswrapper[4795]: E1129 08:07:05.089794 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cc813d1-4929-48c8-9268-49a4b96562ac" containerName="glance-httpd"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.089804 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cc813d1-4929-48c8-9268-49a4b96562ac" containerName="glance-httpd"
Nov 29 08:07:05 crc kubenswrapper[4795]: E1129 08:07:05.089817 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d1c661f-8246-4ec2-96d0-13feaaaaf35a" containerName="heat-cfnapi"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.089823 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d1c661f-8246-4ec2-96d0-13feaaaaf35a" containerName="heat-cfnapi"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.090076 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a930422-6c40-4a1a-963d-126d16f160a2" containerName="heat-api"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.090092 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d1c661f-8246-4ec2-96d0-13feaaaaf35a" containerName="heat-cfnapi"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.090118 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cc813d1-4929-48c8-9268-49a4b96562ac" containerName="glance-log"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.090134 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a930422-6c40-4a1a-963d-126d16f160a2" containerName="heat-api"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.090144 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cc813d1-4929-48c8-9268-49a4b96562ac" containerName="glance-httpd"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.090620 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d1c661f-8246-4ec2-96d0-13feaaaaf35a" containerName="heat-cfnapi"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.091764 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.094742 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.094933 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.102780 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.164476 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.201471 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3625e087-5469-4cc2-b580-13d7201ff475-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3625e087-5469-4cc2-b580-13d7201ff475\") " pod="openstack/glance-default-external-api-0"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.201547 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3625e087-5469-4cc2-b580-13d7201ff475-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3625e087-5469-4cc2-b580-13d7201ff475\") " pod="openstack/glance-default-external-api-0"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.206071 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3625e087-5469-4cc2-b580-13d7201ff475-scripts\") pod \"glance-default-external-api-0\" (UID: \"3625e087-5469-4cc2-b580-13d7201ff475\") " pod="openstack/glance-default-external-api-0"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.206242 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsn4j\" (UniqueName: \"kubernetes.io/projected/3625e087-5469-4cc2-b580-13d7201ff475-kube-api-access-dsn4j\") pod \"glance-default-external-api-0\" (UID: \"3625e087-5469-4cc2-b580-13d7201ff475\") " pod="openstack/glance-default-external-api-0"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.206307 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3625e087-5469-4cc2-b580-13d7201ff475-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3625e087-5469-4cc2-b580-13d7201ff475\") " pod="openstack/glance-default-external-api-0"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.206451 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"3625e087-5469-4cc2-b580-13d7201ff475\") " pod="openstack/glance-default-external-api-0"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.206916 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3625e087-5469-4cc2-b580-13d7201ff475-config-data\") pod \"glance-default-external-api-0\" (UID: \"3625e087-5469-4cc2-b580-13d7201ff475\") " pod="openstack/glance-default-external-api-0"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.206950 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3625e087-5469-4cc2-b580-13d7201ff475-logs\") pod \"glance-default-external-api-0\" (UID: \"3625e087-5469-4cc2-b580-13d7201ff475\") " pod="openstack/glance-default-external-api-0"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.347236 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-logs\") pod \"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314\" (UID: \"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314\") "
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.347407 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314\" (UID: \"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314\") "
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.347519 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68tkt\" (UniqueName: \"kubernetes.io/projected/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-kube-api-access-68tkt\") pod \"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314\" (UID: \"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314\") "
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.352704 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-logs" (OuterVolumeSpecName: "logs") pod "eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314" (UID: "eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.367412 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-config-data\") pod \"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314\" (UID: \"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314\") "
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.367507 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-scripts\") pod \"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314\" (UID: \"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314\") "
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.367660 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-combined-ca-bundle\") pod \"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314\" (UID: \"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314\") "
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.367734 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-internal-tls-certs\") pod \"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314\" (UID: \"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314\") "
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.367771 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-httpd-run\") pod \"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314\" (UID: \"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314\") "
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.368502 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3625e087-5469-4cc2-b580-13d7201ff475-config-data\") pod \"glance-default-external-api-0\" (UID: \"3625e087-5469-4cc2-b580-13d7201ff475\") " pod="openstack/glance-default-external-api-0"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.368546 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3625e087-5469-4cc2-b580-13d7201ff475-logs\") pod \"glance-default-external-api-0\" (UID: \"3625e087-5469-4cc2-b580-13d7201ff475\") " pod="openstack/glance-default-external-api-0"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.368636 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3625e087-5469-4cc2-b580-13d7201ff475-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3625e087-5469-4cc2-b580-13d7201ff475\") " pod="openstack/glance-default-external-api-0"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.368668 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3625e087-5469-4cc2-b580-13d7201ff475-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3625e087-5469-4cc2-b580-13d7201ff475\") " pod="openstack/glance-default-external-api-0"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.368733 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3625e087-5469-4cc2-b580-13d7201ff475-scripts\") pod \"glance-default-external-api-0\" (UID: \"3625e087-5469-4cc2-b580-13d7201ff475\") " pod="openstack/glance-default-external-api-0"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.368833 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsn4j\" (UniqueName: \"kubernetes.io/projected/3625e087-5469-4cc2-b580-13d7201ff475-kube-api-access-dsn4j\") pod \"glance-default-external-api-0\" (UID: \"3625e087-5469-4cc2-b580-13d7201ff475\") " pod="openstack/glance-default-external-api-0"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.368890 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3625e087-5469-4cc2-b580-13d7201ff475-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3625e087-5469-4cc2-b580-13d7201ff475\") " pod="openstack/glance-default-external-api-0"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.369019 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"3625e087-5469-4cc2-b580-13d7201ff475\") " pod="openstack/glance-default-external-api-0"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.369262 4795 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-logs\") on node \"crc\" DevicePath \"\""
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.369863 4795 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"3625e087-5469-4cc2-b580-13d7201ff475\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.408833 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3625e087-5469-4cc2-b580-13d7201ff475-logs\") pod \"glance-default-external-api-0\" (UID: \"3625e087-5469-4cc2-b580-13d7201ff475\") " pod="openstack/glance-default-external-api-0"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.410844 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3625e087-5469-4cc2-b580-13d7201ff475-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3625e087-5469-4cc2-b580-13d7201ff475\") " pod="openstack/glance-default-external-api-0"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.413095 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314" (UID: "eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.416944 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3625e087-5469-4cc2-b580-13d7201ff475-config-data\") pod \"glance-default-external-api-0\" (UID: \"3625e087-5469-4cc2-b580-13d7201ff475\") " pod="openstack/glance-default-external-api-0"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.432528 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3625e087-5469-4cc2-b580-13d7201ff475-scripts\") pod \"glance-default-external-api-0\" (UID: \"3625e087-5469-4cc2-b580-13d7201ff475\") " pod="openstack/glance-default-external-api-0"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.438544 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3625e087-5469-4cc2-b580-13d7201ff475-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3625e087-5469-4cc2-b580-13d7201ff475\") " pod="openstack/glance-default-external-api-0"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.443342 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3625e087-5469-4cc2-b580-13d7201ff475-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3625e087-5469-4cc2-b580-13d7201ff475\") " pod="openstack/glance-default-external-api-0"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.483886 4795 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-httpd-run\") on node \"crc\" DevicePath \"\""
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.517114 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsn4j\" (UniqueName: \"kubernetes.io/projected/3625e087-5469-4cc2-b580-13d7201ff475-kube-api-access-dsn4j\") pod \"glance-default-external-api-0\" (UID: \"3625e087-5469-4cc2-b580-13d7201ff475\") " pod="openstack/glance-default-external-api-0"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.562109 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8ada-account-create-update-4lg24" event={"ID":"430ba7a8-1572-4074-8597-8a94c3c2c8a0","Type":"ContainerStarted","Data":"91300cec0059a9300b7ab6314c7a208ee2ab33615dd97d812bc7fb29a0f0a9a5"}
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.563750 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-kube-api-access-68tkt" (OuterVolumeSpecName: "kube-api-access-68tkt") pod "eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314" (UID: "eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314"). InnerVolumeSpecName "kube-api-access-68tkt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.570343 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314" (UID: "eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.574654 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-k678c" event={"ID":"e56ba08f-a9c8-49ae-b4e8-d9cd70d63fb5","Type":"ContainerStarted","Data":"2f314c2f00f704130d6bec48c02535f0225bca6182dab65d3c925da58f5e06fe"}
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.594368 4795 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" "
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.595380 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68tkt\" (UniqueName: \"kubernetes.io/projected/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-kube-api-access-68tkt\") on node \"crc\" DevicePath \"\""
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.601122 4795 generic.go:334] "Generic (PLEG): container finished" podID="eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314" containerID="cd7ac6c1c1e2a29845feefcb848633a58a7d9e2ccf00e46fad7dc38e0e2bd944" exitCode=0
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.601287 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.601334 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314","Type":"ContainerDied","Data":"cd7ac6c1c1e2a29845feefcb848633a58a7d9e2ccf00e46fad7dc38e0e2bd944"}
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.601375 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314","Type":"ContainerDied","Data":"ce244b42c43bea236b423e7f317fa01d1994de63659dd8db3208992542473a29"}
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.601393 4795 scope.go:117] "RemoveContainer" containerID="cd7ac6c1c1e2a29845feefcb848633a58a7d9e2ccf00e46fad7dc38e0e2bd944"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.624469 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-scripts" (OuterVolumeSpecName: "scripts") pod "eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314" (UID: "eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.634517 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-k678c" podStartSLOduration=4.634496593 podStartE2EDuration="4.634496593s" podCreationTimestamp="2025-11-29 08:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:07:05.616462922 +0000 UTC m=+1671.592038712" watchObservedRunningTime="2025-11-29 08:07:05.634496593 +0000 UTC m=+1671.610072383"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.637438 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"3625e087-5469-4cc2-b580-13d7201ff475\") " pod="openstack/glance-default-external-api-0"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.652173 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-rgwdg" event={"ID":"b93bb162-1cd1-4a20-888d-dd92a1affbd2","Type":"ContainerStarted","Data":"f6c7f03832cc17c96a9456f66abc4c28bf56ecb19380cfc5f5c1fa8e9525dd60"}
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.652414 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-rgwdg" event={"ID":"b93bb162-1cd1-4a20-888d-dd92a1affbd2","Type":"ContainerStarted","Data":"2227d50a993b1190aa918ebbf2f7513aaa6aa8ef195d7f093201b534ccfc4b52"}
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.655978 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314" (UID: "eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.694124 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9cfa-account-create-update-dm475" event={"ID":"9f24d558-d6ea-42f4-9147-1eca4481dcff","Type":"ContainerStarted","Data":"169ebce03f7347894f6db834751a217a2e3089035fb7e5d2927d468767571f5c"}
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.704240 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-rgwdg" podStartSLOduration=3.704219568 podStartE2EDuration="3.704219568s" podCreationTimestamp="2025-11-29 08:07:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:07:05.677887552 +0000 UTC m=+1671.653463342" watchObservedRunningTime="2025-11-29 08:07:05.704219568 +0000 UTC m=+1671.679795358"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.711602 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-s4hqz" event={"ID":"2ec0c903-aae0-4c34-959f-15ec09782b09","Type":"ContainerStarted","Data":"7a65042d0d8579c32131625ebcc12743d85c17a62858a3d7110c74d795004b19"}
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.711646 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-s4hqz" event={"ID":"2ec0c903-aae0-4c34-959f-15ec09782b09","Type":"ContainerStarted","Data":"687683aa6086158231ada9a5066b3ab03e00094715b51959bdb24b00ba783bd6"}
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.725391 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-15a1-account-create-update-7f9hk" event={"ID":"43275a86-fba2-41f6-b98c-c57c65e9c0c0","Type":"ContainerStarted","Data":"3ab63eb8c05f443ae46b6e41f773dce0baf7de1a1a65d394697375a9b9e082d2"}
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.725449 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-15a1-account-create-update-7f9hk" event={"ID":"43275a86-fba2-41f6-b98c-c57c65e9c0c0","Type":"ContainerStarted","Data":"c8b6ed860088046467d99769e10b5a9492f578fc194abf2209cc32ad98fc6635"}
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.764539 4795 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.771390 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.772213 4795 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\""
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.772231 4795 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-scripts\") on node \"crc\" DevicePath \"\""
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.772243 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.805555 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314" (UID: "eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.817917 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-config-data" (OuterVolumeSpecName: "config-data") pod "eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314" (UID: "eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.858111 4795 scope.go:117] "RemoveContainer" containerID="3801e5cabb1fda13b76b19bc936d3b2b47d646d50dfb30a2b45e6f75f0068456"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.875326 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-config-data\") on node \"crc\" DevicePath \"\""
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.875361 4795 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.910636 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-15a1-account-create-update-7f9hk" podStartSLOduration=3.910579416 podStartE2EDuration="3.910579416s" podCreationTimestamp="2025-11-29 08:07:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:07:05.782081095 +0000 UTC m=+1671.757656885" watchObservedRunningTime="2025-11-29 08:07:05.910579416 +0000 UTC m=+1671.886155196"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.939141 4795 scope.go:117] "RemoveContainer" containerID="cd7ac6c1c1e2a29845feefcb848633a58a7d9e2ccf00e46fad7dc38e0e2bd944"
Nov 29 08:07:05 crc kubenswrapper[4795]: E1129 08:07:05.939859 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd7ac6c1c1e2a29845feefcb848633a58a7d9e2ccf00e46fad7dc38e0e2bd944\": container with ID starting with cd7ac6c1c1e2a29845feefcb848633a58a7d9e2ccf00e46fad7dc38e0e2bd944 not found: ID does not exist" containerID="cd7ac6c1c1e2a29845feefcb848633a58a7d9e2ccf00e46fad7dc38e0e2bd944"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.939892 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd7ac6c1c1e2a29845feefcb848633a58a7d9e2ccf00e46fad7dc38e0e2bd944"} err="failed to get container status \"cd7ac6c1c1e2a29845feefcb848633a58a7d9e2ccf00e46fad7dc38e0e2bd944\": rpc error: code = NotFound desc = could not find container \"cd7ac6c1c1e2a29845feefcb848633a58a7d9e2ccf00e46fad7dc38e0e2bd944\": container with ID starting with cd7ac6c1c1e2a29845feefcb848633a58a7d9e2ccf00e46fad7dc38e0e2bd944 not found: ID does not exist"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.939915 4795 scope.go:117] "RemoveContainer" containerID="3801e5cabb1fda13b76b19bc936d3b2b47d646d50dfb30a2b45e6f75f0068456"
Nov 29 08:07:05 crc kubenswrapper[4795]: E1129 08:07:05.941205 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3801e5cabb1fda13b76b19bc936d3b2b47d646d50dfb30a2b45e6f75f0068456\": container with ID starting with 3801e5cabb1fda13b76b19bc936d3b2b47d646d50dfb30a2b45e6f75f0068456 not found: ID does not exist" containerID="3801e5cabb1fda13b76b19bc936d3b2b47d646d50dfb30a2b45e6f75f0068456"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.941230 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3801e5cabb1fda13b76b19bc936d3b2b47d646d50dfb30a2b45e6f75f0068456"} err="failed to get container status \"3801e5cabb1fda13b76b19bc936d3b2b47d646d50dfb30a2b45e6f75f0068456\": rpc error: code = NotFound desc = could not find container \"3801e5cabb1fda13b76b19bc936d3b2b47d646d50dfb30a2b45e6f75f0068456\": container with ID starting with 3801e5cabb1fda13b76b19bc936d3b2b47d646d50dfb30a2b45e6f75f0068456 not found: ID does not exist"
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.969435 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 29 08:07:05 crc kubenswrapper[4795]: I1129 08:07:05.985831 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.004210 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 29 08:07:06 crc kubenswrapper[4795]: E1129 08:07:06.004965 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314" containerName="glance-log"
Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.004986 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314" containerName="glance-log"
Nov 29 08:07:06 crc kubenswrapper[4795]: E1129 08:07:06.005026 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314" containerName="glance-httpd"
Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.005033 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314" containerName="glance-httpd"
Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.005308 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314" containerName="glance-log"
Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.005339 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314" containerName="glance-httpd"
Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.007137 4795 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.017316 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.018124 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.020568 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.185429 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6199696f-3f60-4893-8029-6e62879319f9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6199696f-3f60-4893-8029-6e62879319f9\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.186001 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6199696f-3f60-4893-8029-6e62879319f9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6199696f-3f60-4893-8029-6e62879319f9\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.186081 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6199696f-3f60-4893-8029-6e62879319f9-logs\") pod \"glance-default-internal-api-0\" (UID: \"6199696f-3f60-4893-8029-6e62879319f9\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.186275 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6199696f-3f60-4893-8029-6e62879319f9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6199696f-3f60-4893-8029-6e62879319f9\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.186415 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6199696f-3f60-4893-8029-6e62879319f9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6199696f-3f60-4893-8029-6e62879319f9\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.186550 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"6199696f-3f60-4893-8029-6e62879319f9\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.186720 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dgkc\" (UniqueName: \"kubernetes.io/projected/6199696f-3f60-4893-8029-6e62879319f9-kube-api-access-9dgkc\") pod \"glance-default-internal-api-0\" (UID: \"6199696f-3f60-4893-8029-6e62879319f9\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.186837 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6199696f-3f60-4893-8029-6e62879319f9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6199696f-3f60-4893-8029-6e62879319f9\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.289319 4795 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6199696f-3f60-4893-8029-6e62879319f9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6199696f-3f60-4893-8029-6e62879319f9\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.289392 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6199696f-3f60-4893-8029-6e62879319f9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6199696f-3f60-4893-8029-6e62879319f9\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.289438 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"6199696f-3f60-4893-8029-6e62879319f9\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.289465 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dgkc\" (UniqueName: \"kubernetes.io/projected/6199696f-3f60-4893-8029-6e62879319f9-kube-api-access-9dgkc\") pod \"glance-default-internal-api-0\" (UID: \"6199696f-3f60-4893-8029-6e62879319f9\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.289498 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6199696f-3f60-4893-8029-6e62879319f9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6199696f-3f60-4893-8029-6e62879319f9\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.289560 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/6199696f-3f60-4893-8029-6e62879319f9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6199696f-3f60-4893-8029-6e62879319f9\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.289607 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6199696f-3f60-4893-8029-6e62879319f9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6199696f-3f60-4893-8029-6e62879319f9\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.289625 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6199696f-3f60-4893-8029-6e62879319f9-logs\") pod \"glance-default-internal-api-0\" (UID: \"6199696f-3f60-4893-8029-6e62879319f9\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.290191 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6199696f-3f60-4893-8029-6e62879319f9-logs\") pod \"glance-default-internal-api-0\" (UID: \"6199696f-3f60-4893-8029-6e62879319f9\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.290461 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6199696f-3f60-4893-8029-6e62879319f9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6199696f-3f60-4893-8029-6e62879319f9\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.292693 4795 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"6199696f-3f60-4893-8029-6e62879319f9\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.298152 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6199696f-3f60-4893-8029-6e62879319f9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6199696f-3f60-4893-8029-6e62879319f9\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.298501 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6199696f-3f60-4893-8029-6e62879319f9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6199696f-3f60-4893-8029-6e62879319f9\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.316947 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6199696f-3f60-4893-8029-6e62879319f9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6199696f-3f60-4893-8029-6e62879319f9\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.327769 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cc813d1-4929-48c8-9268-49a4b96562ac" path="/var/lib/kubelet/pods/4cc813d1-4929-48c8-9268-49a4b96562ac/volumes" Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.330139 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a930422-6c40-4a1a-963d-126d16f160a2" path="/var/lib/kubelet/pods/5a930422-6c40-4a1a-963d-126d16f160a2/volumes" Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.330992 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314" 
path="/var/lib/kubelet/pods/eaf5ab3c-9ff3-4d5d-a3b9-ae2adb05f314/volumes" Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.335788 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dgkc\" (UniqueName: \"kubernetes.io/projected/6199696f-3f60-4893-8029-6e62879319f9-kube-api-access-9dgkc\") pod \"glance-default-internal-api-0\" (UID: \"6199696f-3f60-4893-8029-6e62879319f9\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.351395 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6199696f-3f60-4893-8029-6e62879319f9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6199696f-3f60-4893-8029-6e62879319f9\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.371565 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"6199696f-3f60-4893-8029-6e62879319f9\") " pod="openstack/glance-default-internal-api-0" Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.746842 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.798343 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.814104 4795 generic.go:334] "Generic (PLEG): container finished" podID="43275a86-fba2-41f6-b98c-c57c65e9c0c0" containerID="3ab63eb8c05f443ae46b6e41f773dce0baf7de1a1a65d394697375a9b9e082d2" exitCode=0 Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.814247 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-15a1-account-create-update-7f9hk" event={"ID":"43275a86-fba2-41f6-b98c-c57c65e9c0c0","Type":"ContainerDied","Data":"3ab63eb8c05f443ae46b6e41f773dce0baf7de1a1a65d394697375a9b9e082d2"} Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.823742 4795 generic.go:334] "Generic (PLEG): container finished" podID="430ba7a8-1572-4074-8597-8a94c3c2c8a0" containerID="3c0bbfc9ccc426e2185cf41fd5a1c1a35b19db0b45c46899f1e1e29d5a3bba5d" exitCode=0 Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.824176 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8ada-account-create-update-4lg24" event={"ID":"430ba7a8-1572-4074-8597-8a94c3c2c8a0","Type":"ContainerDied","Data":"3c0bbfc9ccc426e2185cf41fd5a1c1a35b19db0b45c46899f1e1e29d5a3bba5d"} Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.837036 4795 generic.go:334] "Generic (PLEG): container finished" podID="b93bb162-1cd1-4a20-888d-dd92a1affbd2" containerID="f6c7f03832cc17c96a9456f66abc4c28bf56ecb19380cfc5f5c1fa8e9525dd60" exitCode=0 Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.837458 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-rgwdg" event={"ID":"b93bb162-1cd1-4a20-888d-dd92a1affbd2","Type":"ContainerDied","Data":"f6c7f03832cc17c96a9456f66abc4c28bf56ecb19380cfc5f5c1fa8e9525dd60"} Nov 29 08:07:06 crc 
kubenswrapper[4795]: I1129 08:07:06.852512 4795 generic.go:334] "Generic (PLEG): container finished" podID="9f24d558-d6ea-42f4-9147-1eca4481dcff" containerID="169ebce03f7347894f6db834751a217a2e3089035fb7e5d2927d468767571f5c" exitCode=0 Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.852952 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9cfa-account-create-update-dm475" event={"ID":"9f24d558-d6ea-42f4-9147-1eca4481dcff","Type":"ContainerDied","Data":"169ebce03f7347894f6db834751a217a2e3089035fb7e5d2927d468767571f5c"} Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.889044 4795 generic.go:334] "Generic (PLEG): container finished" podID="e56ba08f-a9c8-49ae-b4e8-d9cd70d63fb5" containerID="2f314c2f00f704130d6bec48c02535f0225bca6182dab65d3c925da58f5e06fe" exitCode=0 Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.889257 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-k678c" event={"ID":"e56ba08f-a9c8-49ae-b4e8-d9cd70d63fb5","Type":"ContainerDied","Data":"2f314c2f00f704130d6bec48c02535f0225bca6182dab65d3c925da58f5e06fe"} Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.913887 4795 generic.go:334] "Generic (PLEG): container finished" podID="2ec0c903-aae0-4c34-959f-15ec09782b09" containerID="7a65042d0d8579c32131625ebcc12743d85c17a62858a3d7110c74d795004b19" exitCode=0 Nov 29 08:07:06 crc kubenswrapper[4795]: I1129 08:07:06.914105 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-s4hqz" event={"ID":"2ec0c903-aae0-4c34-959f-15ec09782b09","Type":"ContainerDied","Data":"7a65042d0d8579c32131625ebcc12743d85c17a62858a3d7110c74d795004b19"} Nov 29 08:07:07 crc kubenswrapper[4795]: I1129 08:07:07.900957 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-9cfa-account-create-update-dm475" Nov 29 08:07:07 crc kubenswrapper[4795]: I1129 08:07:07.904454 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-s4hqz" Nov 29 08:07:07 crc kubenswrapper[4795]: I1129 08:07:07.937477 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9cfa-account-create-update-dm475" event={"ID":"9f24d558-d6ea-42f4-9147-1eca4481dcff","Type":"ContainerDied","Data":"b8db3af159c87d65d0257dd81177beb47d6217fbc9dbc64335aa703fb8a9adfc"} Nov 29 08:07:07 crc kubenswrapper[4795]: I1129 08:07:07.937519 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8db3af159c87d65d0257dd81177beb47d6217fbc9dbc64335aa703fb8a9adfc" Nov 29 08:07:07 crc kubenswrapper[4795]: I1129 08:07:07.937606 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9cfa-account-create-update-dm475" Nov 29 08:07:07 crc kubenswrapper[4795]: I1129 08:07:07.943734 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3625e087-5469-4cc2-b580-13d7201ff475","Type":"ContainerStarted","Data":"7e3e957338b0f0e31d102b4e83bf29c75d9b8a2b1d082ee6370614d1a6d28962"} Nov 29 08:07:07 crc kubenswrapper[4795]: I1129 08:07:07.948219 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-s4hqz" Nov 29 08:07:07 crc kubenswrapper[4795]: I1129 08:07:07.948792 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-s4hqz" event={"ID":"2ec0c903-aae0-4c34-959f-15ec09782b09","Type":"ContainerDied","Data":"687683aa6086158231ada9a5066b3ab03e00094715b51959bdb24b00ba783bd6"} Nov 29 08:07:07 crc kubenswrapper[4795]: I1129 08:07:07.948835 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="687683aa6086158231ada9a5066b3ab03e00094715b51959bdb24b00ba783bd6" Nov 29 08:07:08 crc kubenswrapper[4795]: I1129 08:07:08.014219 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxns2\" (UniqueName: \"kubernetes.io/projected/2ec0c903-aae0-4c34-959f-15ec09782b09-kube-api-access-vxns2\") pod \"2ec0c903-aae0-4c34-959f-15ec09782b09\" (UID: \"2ec0c903-aae0-4c34-959f-15ec09782b09\") " Nov 29 08:07:08 crc kubenswrapper[4795]: I1129 08:07:08.014339 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ec0c903-aae0-4c34-959f-15ec09782b09-operator-scripts\") pod \"2ec0c903-aae0-4c34-959f-15ec09782b09\" (UID: \"2ec0c903-aae0-4c34-959f-15ec09782b09\") " Nov 29 08:07:08 crc kubenswrapper[4795]: I1129 08:07:08.014405 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdjtj\" (UniqueName: \"kubernetes.io/projected/9f24d558-d6ea-42f4-9147-1eca4481dcff-kube-api-access-kdjtj\") pod \"9f24d558-d6ea-42f4-9147-1eca4481dcff\" (UID: \"9f24d558-d6ea-42f4-9147-1eca4481dcff\") " Nov 29 08:07:08 crc kubenswrapper[4795]: I1129 08:07:08.014626 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f24d558-d6ea-42f4-9147-1eca4481dcff-operator-scripts\") pod \"9f24d558-d6ea-42f4-9147-1eca4481dcff\" (UID: 
\"9f24d558-d6ea-42f4-9147-1eca4481dcff\") " Nov 29 08:07:08 crc kubenswrapper[4795]: I1129 08:07:08.015758 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ec0c903-aae0-4c34-959f-15ec09782b09-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2ec0c903-aae0-4c34-959f-15ec09782b09" (UID: "2ec0c903-aae0-4c34-959f-15ec09782b09"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:07:08 crc kubenswrapper[4795]: I1129 08:07:08.015776 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f24d558-d6ea-42f4-9147-1eca4481dcff-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9f24d558-d6ea-42f4-9147-1eca4481dcff" (UID: "9f24d558-d6ea-42f4-9147-1eca4481dcff"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:07:08 crc kubenswrapper[4795]: I1129 08:07:08.027827 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ec0c903-aae0-4c34-959f-15ec09782b09-kube-api-access-vxns2" (OuterVolumeSpecName: "kube-api-access-vxns2") pod "2ec0c903-aae0-4c34-959f-15ec09782b09" (UID: "2ec0c903-aae0-4c34-959f-15ec09782b09"). InnerVolumeSpecName "kube-api-access-vxns2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:07:08 crc kubenswrapper[4795]: I1129 08:07:08.031885 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f24d558-d6ea-42f4-9147-1eca4481dcff-kube-api-access-kdjtj" (OuterVolumeSpecName: "kube-api-access-kdjtj") pod "9f24d558-d6ea-42f4-9147-1eca4481dcff" (UID: "9f24d558-d6ea-42f4-9147-1eca4481dcff"). InnerVolumeSpecName "kube-api-access-kdjtj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:07:08 crc kubenswrapper[4795]: I1129 08:07:08.044014 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 08:07:08 crc kubenswrapper[4795]: I1129 08:07:08.116825 4795 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f24d558-d6ea-42f4-9147-1eca4481dcff-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:08 crc kubenswrapper[4795]: I1129 08:07:08.116850 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxns2\" (UniqueName: \"kubernetes.io/projected/2ec0c903-aae0-4c34-959f-15ec09782b09-kube-api-access-vxns2\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:08 crc kubenswrapper[4795]: I1129 08:07:08.116860 4795 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ec0c903-aae0-4c34-959f-15ec09782b09-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:08 crc kubenswrapper[4795]: I1129 08:07:08.116869 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdjtj\" (UniqueName: \"kubernetes.io/projected/9f24d558-d6ea-42f4-9147-1eca4481dcff-kube-api-access-kdjtj\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:08 crc kubenswrapper[4795]: I1129 08:07:08.473815 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-746fb69fd5-n8596" Nov 29 08:07:08 crc kubenswrapper[4795]: I1129 08:07:08.555882 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-fd5dc85fb-fss4q"] Nov 29 08:07:08 crc kubenswrapper[4795]: I1129 08:07:08.556102 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-fd5dc85fb-fss4q" podUID="29304def-4cbe-4b1d-abb4-c7bd11587183" containerName="heat-engine" 
containerID="cri-o://52236dc856c9ff58b724e49ea15972d1a4376f5bffb0f668fd6fc0b638a4778d" gracePeriod=60 Nov 29 08:07:08 crc kubenswrapper[4795]: I1129 08:07:08.670835 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-k678c" Nov 29 08:07:08 crc kubenswrapper[4795]: E1129 08:07:08.780470 4795 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f24d558_d6ea_42f4_9147_1eca4481dcff.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ec0c903_aae0_4c34_959f_15ec09782b09.slice/crio-687683aa6086158231ada9a5066b3ab03e00094715b51959bdb24b00ba783bd6\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f24d558_d6ea_42f4_9147_1eca4481dcff.slice/crio-b8db3af159c87d65d0257dd81177beb47d6217fbc9dbc64335aa703fb8a9adfc\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ec0c903_aae0_4c34_959f_15ec09782b09.slice\": RecentStats: unable to find data in memory cache]" Nov 29 08:07:08 crc kubenswrapper[4795]: I1129 08:07:08.853945 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e56ba08f-a9c8-49ae-b4e8-d9cd70d63fb5-operator-scripts\") pod \"e56ba08f-a9c8-49ae-b4e8-d9cd70d63fb5\" (UID: \"e56ba08f-a9c8-49ae-b4e8-d9cd70d63fb5\") " Nov 29 08:07:08 crc kubenswrapper[4795]: I1129 08:07:08.854172 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8bhj\" (UniqueName: \"kubernetes.io/projected/e56ba08f-a9c8-49ae-b4e8-d9cd70d63fb5-kube-api-access-k8bhj\") pod \"e56ba08f-a9c8-49ae-b4e8-d9cd70d63fb5\" (UID: \"e56ba08f-a9c8-49ae-b4e8-d9cd70d63fb5\") " Nov 29 
08:07:08 crc kubenswrapper[4795]: I1129 08:07:08.855063 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e56ba08f-a9c8-49ae-b4e8-d9cd70d63fb5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e56ba08f-a9c8-49ae-b4e8-d9cd70d63fb5" (UID: "e56ba08f-a9c8-49ae-b4e8-d9cd70d63fb5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:07:08 crc kubenswrapper[4795]: I1129 08:07:08.856182 4795 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e56ba08f-a9c8-49ae-b4e8-d9cd70d63fb5-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:08 crc kubenswrapper[4795]: I1129 08:07:08.865790 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e56ba08f-a9c8-49ae-b4e8-d9cd70d63fb5-kube-api-access-k8bhj" (OuterVolumeSpecName: "kube-api-access-k8bhj") pod "e56ba08f-a9c8-49ae-b4e8-d9cd70d63fb5" (UID: "e56ba08f-a9c8-49ae-b4e8-d9cd70d63fb5"). InnerVolumeSpecName "kube-api-access-k8bhj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:07:08 crc kubenswrapper[4795]: I1129 08:07:08.974110 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8bhj\" (UniqueName: \"kubernetes.io/projected/e56ba08f-a9c8-49ae-b4e8-d9cd70d63fb5-kube-api-access-k8bhj\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:09 crc kubenswrapper[4795]: I1129 08:07:09.012455 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-k678c" event={"ID":"e56ba08f-a9c8-49ae-b4e8-d9cd70d63fb5","Type":"ContainerDied","Data":"1026fad69d52376441b5832a7140b5e59196ecfb740ea773e5e5c909d17621c9"} Nov 29 08:07:09 crc kubenswrapper[4795]: I1129 08:07:09.013330 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1026fad69d52376441b5832a7140b5e59196ecfb740ea773e5e5c909d17621c9" Nov 29 08:07:09 crc kubenswrapper[4795]: I1129 08:07:09.012806 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-k678c" Nov 29 08:07:09 crc kubenswrapper[4795]: I1129 08:07:09.019468 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-rgwdg" Nov 29 08:07:09 crc kubenswrapper[4795]: I1129 08:07:09.021197 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3625e087-5469-4cc2-b580-13d7201ff475","Type":"ContainerStarted","Data":"283ee0b576b1fa91fd8d30e1ad5f43b8e43f62312c5596cbfe0bf8a8a2116d00"} Nov 29 08:07:09 crc kubenswrapper[4795]: I1129 08:07:09.023624 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6199696f-3f60-4893-8029-6e62879319f9","Type":"ContainerStarted","Data":"cad92984946890e16a4204ab0b2ac1629a837d2a15c7f73b9c95935442bd1a34"} Nov 29 08:07:09 crc kubenswrapper[4795]: I1129 08:07:09.109400 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-15a1-account-create-update-7f9hk" Nov 29 08:07:09 crc kubenswrapper[4795]: I1129 08:07:09.118235 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8ada-account-create-update-4lg24" Nov 29 08:07:09 crc kubenswrapper[4795]: I1129 08:07:09.201427 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkv67\" (UniqueName: \"kubernetes.io/projected/b93bb162-1cd1-4a20-888d-dd92a1affbd2-kube-api-access-fkv67\") pod \"b93bb162-1cd1-4a20-888d-dd92a1affbd2\" (UID: \"b93bb162-1cd1-4a20-888d-dd92a1affbd2\") " Nov 29 08:07:09 crc kubenswrapper[4795]: I1129 08:07:09.201953 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b93bb162-1cd1-4a20-888d-dd92a1affbd2-operator-scripts\") pod \"b93bb162-1cd1-4a20-888d-dd92a1affbd2\" (UID: \"b93bb162-1cd1-4a20-888d-dd92a1affbd2\") " Nov 29 08:07:09 crc kubenswrapper[4795]: I1129 08:07:09.202564 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b93bb162-1cd1-4a20-888d-dd92a1affbd2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b93bb162-1cd1-4a20-888d-dd92a1affbd2" (UID: "b93bb162-1cd1-4a20-888d-dd92a1affbd2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:07:09 crc kubenswrapper[4795]: I1129 08:07:09.203143 4795 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b93bb162-1cd1-4a20-888d-dd92a1affbd2-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:09 crc kubenswrapper[4795]: I1129 08:07:09.207432 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b93bb162-1cd1-4a20-888d-dd92a1affbd2-kube-api-access-fkv67" (OuterVolumeSpecName: "kube-api-access-fkv67") pod "b93bb162-1cd1-4a20-888d-dd92a1affbd2" (UID: "b93bb162-1cd1-4a20-888d-dd92a1affbd2"). InnerVolumeSpecName "kube-api-access-fkv67". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:07:09 crc kubenswrapper[4795]: I1129 08:07:09.304824 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhcnm\" (UniqueName: \"kubernetes.io/projected/430ba7a8-1572-4074-8597-8a94c3c2c8a0-kube-api-access-lhcnm\") pod \"430ba7a8-1572-4074-8597-8a94c3c2c8a0\" (UID: \"430ba7a8-1572-4074-8597-8a94c3c2c8a0\") " Nov 29 08:07:09 crc kubenswrapper[4795]: I1129 08:07:09.305044 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/430ba7a8-1572-4074-8597-8a94c3c2c8a0-operator-scripts\") pod \"430ba7a8-1572-4074-8597-8a94c3c2c8a0\" (UID: \"430ba7a8-1572-4074-8597-8a94c3c2c8a0\") " Nov 29 08:07:09 crc kubenswrapper[4795]: I1129 08:07:09.305104 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgnn9\" (UniqueName: \"kubernetes.io/projected/43275a86-fba2-41f6-b98c-c57c65e9c0c0-kube-api-access-sgnn9\") pod \"43275a86-fba2-41f6-b98c-c57c65e9c0c0\" (UID: \"43275a86-fba2-41f6-b98c-c57c65e9c0c0\") " Nov 29 08:07:09 crc kubenswrapper[4795]: I1129 08:07:09.305459 4795 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/430ba7a8-1572-4074-8597-8a94c3c2c8a0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "430ba7a8-1572-4074-8597-8a94c3c2c8a0" (UID: "430ba7a8-1572-4074-8597-8a94c3c2c8a0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:07:09 crc kubenswrapper[4795]: I1129 08:07:09.305517 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43275a86-fba2-41f6-b98c-c57c65e9c0c0-operator-scripts\") pod \"43275a86-fba2-41f6-b98c-c57c65e9c0c0\" (UID: \"43275a86-fba2-41f6-b98c-c57c65e9c0c0\") " Nov 29 08:07:09 crc kubenswrapper[4795]: I1129 08:07:09.305809 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43275a86-fba2-41f6-b98c-c57c65e9c0c0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "43275a86-fba2-41f6-b98c-c57c65e9c0c0" (UID: "43275a86-fba2-41f6-b98c-c57c65e9c0c0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:07:09 crc kubenswrapper[4795]: I1129 08:07:09.306568 4795 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/430ba7a8-1572-4074-8597-8a94c3c2c8a0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:09 crc kubenswrapper[4795]: I1129 08:07:09.306587 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkv67\" (UniqueName: \"kubernetes.io/projected/b93bb162-1cd1-4a20-888d-dd92a1affbd2-kube-api-access-fkv67\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:09 crc kubenswrapper[4795]: I1129 08:07:09.306616 4795 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43275a86-fba2-41f6-b98c-c57c65e9c0c0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:09 crc kubenswrapper[4795]: I1129 08:07:09.309762 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/430ba7a8-1572-4074-8597-8a94c3c2c8a0-kube-api-access-lhcnm" (OuterVolumeSpecName: "kube-api-access-lhcnm") pod "430ba7a8-1572-4074-8597-8a94c3c2c8a0" (UID: "430ba7a8-1572-4074-8597-8a94c3c2c8a0"). InnerVolumeSpecName "kube-api-access-lhcnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:07:09 crc kubenswrapper[4795]: I1129 08:07:09.309952 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43275a86-fba2-41f6-b98c-c57c65e9c0c0-kube-api-access-sgnn9" (OuterVolumeSpecName: "kube-api-access-sgnn9") pod "43275a86-fba2-41f6-b98c-c57c65e9c0c0" (UID: "43275a86-fba2-41f6-b98c-c57c65e9c0c0"). InnerVolumeSpecName "kube-api-access-sgnn9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:07:09 crc kubenswrapper[4795]: I1129 08:07:09.408132 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhcnm\" (UniqueName: \"kubernetes.io/projected/430ba7a8-1572-4074-8597-8a94c3c2c8a0-kube-api-access-lhcnm\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:09 crc kubenswrapper[4795]: I1129 08:07:09.408166 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgnn9\" (UniqueName: \"kubernetes.io/projected/43275a86-fba2-41f6-b98c-c57c65e9c0c0-kube-api-access-sgnn9\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:10 crc kubenswrapper[4795]: I1129 08:07:10.049178 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6199696f-3f60-4893-8029-6e62879319f9","Type":"ContainerStarted","Data":"e802c1ce03e1998270c01a7153513211e441fef149518d67696f5e8dc85550cb"} Nov 29 08:07:10 crc kubenswrapper[4795]: I1129 08:07:10.056461 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3625e087-5469-4cc2-b580-13d7201ff475","Type":"ContainerStarted","Data":"61df3b5b4a30190ed336b17c59da34cd1263476e68b229ea90ff6135d715735c"} Nov 29 08:07:10 crc kubenswrapper[4795]: I1129 08:07:10.067176 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-15a1-account-create-update-7f9hk" event={"ID":"43275a86-fba2-41f6-b98c-c57c65e9c0c0","Type":"ContainerDied","Data":"c8b6ed860088046467d99769e10b5a9492f578fc194abf2209cc32ad98fc6635"} Nov 29 08:07:10 crc kubenswrapper[4795]: I1129 08:07:10.067246 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8b6ed860088046467d99769e10b5a9492f578fc194abf2209cc32ad98fc6635" Nov 29 08:07:10 crc kubenswrapper[4795]: I1129 08:07:10.067245 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-15a1-account-create-update-7f9hk" Nov 29 08:07:10 crc kubenswrapper[4795]: I1129 08:07:10.072974 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8ada-account-create-update-4lg24" event={"ID":"430ba7a8-1572-4074-8597-8a94c3c2c8a0","Type":"ContainerDied","Data":"91300cec0059a9300b7ab6314c7a208ee2ab33615dd97d812bc7fb29a0f0a9a5"} Nov 29 08:07:10 crc kubenswrapper[4795]: I1129 08:07:10.073013 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91300cec0059a9300b7ab6314c7a208ee2ab33615dd97d812bc7fb29a0f0a9a5" Nov 29 08:07:10 crc kubenswrapper[4795]: I1129 08:07:10.080884 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8ada-account-create-update-4lg24" Nov 29 08:07:10 crc kubenswrapper[4795]: I1129 08:07:10.083670 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.083650496 podStartE2EDuration="6.083650496s" podCreationTimestamp="2025-11-29 08:07:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:07:10.083387929 +0000 UTC m=+1676.058963709" watchObservedRunningTime="2025-11-29 08:07:10.083650496 +0000 UTC m=+1676.059226286" Nov 29 08:07:10 crc kubenswrapper[4795]: I1129 08:07:10.101467 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-rgwdg" event={"ID":"b93bb162-1cd1-4a20-888d-dd92a1affbd2","Type":"ContainerDied","Data":"2227d50a993b1190aa918ebbf2f7513aaa6aa8ef195d7f093201b534ccfc4b52"} Nov 29 08:07:10 crc kubenswrapper[4795]: I1129 08:07:10.101935 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2227d50a993b1190aa918ebbf2f7513aaa6aa8ef195d7f093201b534ccfc4b52" Nov 29 08:07:10 crc kubenswrapper[4795]: I1129 08:07:10.101618 
4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-rgwdg" Nov 29 08:07:11 crc kubenswrapper[4795]: I1129 08:07:11.116502 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6199696f-3f60-4893-8029-6e62879319f9","Type":"ContainerStarted","Data":"310418c3eafd5ff96dc1304f781781523f14a8a366c10336cb75ed6d8df052ff"} Nov 29 08:07:11 crc kubenswrapper[4795]: I1129 08:07:11.167642 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.167546793 podStartE2EDuration="6.167546793s" podCreationTimestamp="2025-11-29 08:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:07:11.163546819 +0000 UTC m=+1677.139122609" watchObservedRunningTime="2025-11-29 08:07:11.167546793 +0000 UTC m=+1677.143122583" Nov 29 08:07:11 crc kubenswrapper[4795]: E1129 08:07:11.226451 4795 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="52236dc856c9ff58b724e49ea15972d1a4376f5bffb0f668fd6fc0b638a4778d" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 29 08:07:11 crc kubenswrapper[4795]: E1129 08:07:11.230432 4795 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="52236dc856c9ff58b724e49ea15972d1a4376f5bffb0f668fd6fc0b638a4778d" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 29 08:07:11 crc kubenswrapper[4795]: E1129 08:07:11.236005 4795 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" containerID="52236dc856c9ff58b724e49ea15972d1a4376f5bffb0f668fd6fc0b638a4778d" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 29 08:07:11 crc kubenswrapper[4795]: E1129 08:07:11.236148 4795 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-fd5dc85fb-fss4q" podUID="29304def-4cbe-4b1d-abb4-c7bd11587183" containerName="heat-engine" Nov 29 08:07:12 crc kubenswrapper[4795]: I1129 08:07:12.703683 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-s4c9j"] Nov 29 08:07:12 crc kubenswrapper[4795]: E1129 08:07:12.704480 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e56ba08f-a9c8-49ae-b4e8-d9cd70d63fb5" containerName="mariadb-database-create" Nov 29 08:07:12 crc kubenswrapper[4795]: I1129 08:07:12.704493 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="e56ba08f-a9c8-49ae-b4e8-d9cd70d63fb5" containerName="mariadb-database-create" Nov 29 08:07:12 crc kubenswrapper[4795]: E1129 08:07:12.704530 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ec0c903-aae0-4c34-959f-15ec09782b09" containerName="mariadb-database-create" Nov 29 08:07:12 crc kubenswrapper[4795]: I1129 08:07:12.704538 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ec0c903-aae0-4c34-959f-15ec09782b09" containerName="mariadb-database-create" Nov 29 08:07:12 crc kubenswrapper[4795]: E1129 08:07:12.704553 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f24d558-d6ea-42f4-9147-1eca4481dcff" containerName="mariadb-account-create-update" Nov 29 08:07:12 crc kubenswrapper[4795]: I1129 08:07:12.704559 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f24d558-d6ea-42f4-9147-1eca4481dcff" containerName="mariadb-account-create-update" Nov 29 08:07:12 
crc kubenswrapper[4795]: E1129 08:07:12.704573 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="430ba7a8-1572-4074-8597-8a94c3c2c8a0" containerName="mariadb-account-create-update" Nov 29 08:07:12 crc kubenswrapper[4795]: I1129 08:07:12.704579 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="430ba7a8-1572-4074-8597-8a94c3c2c8a0" containerName="mariadb-account-create-update" Nov 29 08:07:12 crc kubenswrapper[4795]: E1129 08:07:12.704631 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43275a86-fba2-41f6-b98c-c57c65e9c0c0" containerName="mariadb-account-create-update" Nov 29 08:07:12 crc kubenswrapper[4795]: I1129 08:07:12.704638 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="43275a86-fba2-41f6-b98c-c57c65e9c0c0" containerName="mariadb-account-create-update" Nov 29 08:07:12 crc kubenswrapper[4795]: E1129 08:07:12.704654 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b93bb162-1cd1-4a20-888d-dd92a1affbd2" containerName="mariadb-database-create" Nov 29 08:07:12 crc kubenswrapper[4795]: I1129 08:07:12.704660 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="b93bb162-1cd1-4a20-888d-dd92a1affbd2" containerName="mariadb-database-create" Nov 29 08:07:12 crc kubenswrapper[4795]: I1129 08:07:12.704887 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="e56ba08f-a9c8-49ae-b4e8-d9cd70d63fb5" containerName="mariadb-database-create" Nov 29 08:07:12 crc kubenswrapper[4795]: I1129 08:07:12.704901 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="b93bb162-1cd1-4a20-888d-dd92a1affbd2" containerName="mariadb-database-create" Nov 29 08:07:12 crc kubenswrapper[4795]: I1129 08:07:12.704930 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="43275a86-fba2-41f6-b98c-c57c65e9c0c0" containerName="mariadb-account-create-update" Nov 29 08:07:12 crc kubenswrapper[4795]: I1129 08:07:12.704942 4795 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="2ec0c903-aae0-4c34-959f-15ec09782b09" containerName="mariadb-database-create" Nov 29 08:07:12 crc kubenswrapper[4795]: I1129 08:07:12.704955 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f24d558-d6ea-42f4-9147-1eca4481dcff" containerName="mariadb-account-create-update" Nov 29 08:07:12 crc kubenswrapper[4795]: I1129 08:07:12.704964 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="430ba7a8-1572-4074-8597-8a94c3c2c8a0" containerName="mariadb-account-create-update" Nov 29 08:07:12 crc kubenswrapper[4795]: I1129 08:07:12.706010 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-s4c9j" Nov 29 08:07:12 crc kubenswrapper[4795]: I1129 08:07:12.715731 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 29 08:07:12 crc kubenswrapper[4795]: I1129 08:07:12.715760 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 29 08:07:12 crc kubenswrapper[4795]: I1129 08:07:12.715914 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-hdgz4" Nov 29 08:07:12 crc kubenswrapper[4795]: I1129 08:07:12.723026 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-s4c9j"] Nov 29 08:07:12 crc kubenswrapper[4795]: I1129 08:07:12.830462 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13fa8118-f5bc-4c64-95dc-89cbfb601187-config-data\") pod \"nova-cell0-conductor-db-sync-s4c9j\" (UID: \"13fa8118-f5bc-4c64-95dc-89cbfb601187\") " pod="openstack/nova-cell0-conductor-db-sync-s4c9j" Nov 29 08:07:12 crc kubenswrapper[4795]: I1129 08:07:12.830738 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13fa8118-f5bc-4c64-95dc-89cbfb601187-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-s4c9j\" (UID: \"13fa8118-f5bc-4c64-95dc-89cbfb601187\") " pod="openstack/nova-cell0-conductor-db-sync-s4c9j" Nov 29 08:07:12 crc kubenswrapper[4795]: I1129 08:07:12.831207 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cskt\" (UniqueName: \"kubernetes.io/projected/13fa8118-f5bc-4c64-95dc-89cbfb601187-kube-api-access-4cskt\") pod \"nova-cell0-conductor-db-sync-s4c9j\" (UID: \"13fa8118-f5bc-4c64-95dc-89cbfb601187\") " pod="openstack/nova-cell0-conductor-db-sync-s4c9j" Nov 29 08:07:12 crc kubenswrapper[4795]: I1129 08:07:12.831387 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13fa8118-f5bc-4c64-95dc-89cbfb601187-scripts\") pod \"nova-cell0-conductor-db-sync-s4c9j\" (UID: \"13fa8118-f5bc-4c64-95dc-89cbfb601187\") " pod="openstack/nova-cell0-conductor-db-sync-s4c9j" Nov 29 08:07:12 crc kubenswrapper[4795]: I1129 08:07:12.933743 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cskt\" (UniqueName: \"kubernetes.io/projected/13fa8118-f5bc-4c64-95dc-89cbfb601187-kube-api-access-4cskt\") pod \"nova-cell0-conductor-db-sync-s4c9j\" (UID: \"13fa8118-f5bc-4c64-95dc-89cbfb601187\") " pod="openstack/nova-cell0-conductor-db-sync-s4c9j" Nov 29 08:07:12 crc kubenswrapper[4795]: I1129 08:07:12.933817 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13fa8118-f5bc-4c64-95dc-89cbfb601187-scripts\") pod \"nova-cell0-conductor-db-sync-s4c9j\" (UID: \"13fa8118-f5bc-4c64-95dc-89cbfb601187\") " pod="openstack/nova-cell0-conductor-db-sync-s4c9j" Nov 29 08:07:12 crc kubenswrapper[4795]: I1129 08:07:12.933867 4795 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13fa8118-f5bc-4c64-95dc-89cbfb601187-config-data\") pod \"nova-cell0-conductor-db-sync-s4c9j\" (UID: \"13fa8118-f5bc-4c64-95dc-89cbfb601187\") " pod="openstack/nova-cell0-conductor-db-sync-s4c9j" Nov 29 08:07:12 crc kubenswrapper[4795]: I1129 08:07:12.933930 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13fa8118-f5bc-4c64-95dc-89cbfb601187-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-s4c9j\" (UID: \"13fa8118-f5bc-4c64-95dc-89cbfb601187\") " pod="openstack/nova-cell0-conductor-db-sync-s4c9j" Nov 29 08:07:12 crc kubenswrapper[4795]: I1129 08:07:12.942366 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13fa8118-f5bc-4c64-95dc-89cbfb601187-scripts\") pod \"nova-cell0-conductor-db-sync-s4c9j\" (UID: \"13fa8118-f5bc-4c64-95dc-89cbfb601187\") " pod="openstack/nova-cell0-conductor-db-sync-s4c9j" Nov 29 08:07:12 crc kubenswrapper[4795]: I1129 08:07:12.942487 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13fa8118-f5bc-4c64-95dc-89cbfb601187-config-data\") pod \"nova-cell0-conductor-db-sync-s4c9j\" (UID: \"13fa8118-f5bc-4c64-95dc-89cbfb601187\") " pod="openstack/nova-cell0-conductor-db-sync-s4c9j" Nov 29 08:07:12 crc kubenswrapper[4795]: I1129 08:07:12.942546 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13fa8118-f5bc-4c64-95dc-89cbfb601187-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-s4c9j\" (UID: \"13fa8118-f5bc-4c64-95dc-89cbfb601187\") " pod="openstack/nova-cell0-conductor-db-sync-s4c9j" Nov 29 08:07:13 crc kubenswrapper[4795]: I1129 08:07:12.953184 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4cskt\" (UniqueName: \"kubernetes.io/projected/13fa8118-f5bc-4c64-95dc-89cbfb601187-kube-api-access-4cskt\") pod \"nova-cell0-conductor-db-sync-s4c9j\" (UID: \"13fa8118-f5bc-4c64-95dc-89cbfb601187\") " pod="openstack/nova-cell0-conductor-db-sync-s4c9j" Nov 29 08:07:13 crc kubenswrapper[4795]: I1129 08:07:13.047912 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-s4c9j" Nov 29 08:07:14 crc kubenswrapper[4795]: I1129 08:07:14.211225 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-s4c9j"] Nov 29 08:07:14 crc kubenswrapper[4795]: W1129 08:07:14.211494 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13fa8118_f5bc_4c64_95dc_89cbfb601187.slice/crio-2a7162ef7e1c02ea0b744a11db284f0ed2ee66fd1acfbadc781f7f4f2c042947 WatchSource:0}: Error finding container 2a7162ef7e1c02ea0b744a11db284f0ed2ee66fd1acfbadc781f7f4f2c042947: Status 404 returned error can't find the container with id 2a7162ef7e1c02ea0b744a11db284f0ed2ee66fd1acfbadc781f7f4f2c042947 Nov 29 08:07:15 crc kubenswrapper[4795]: I1129 08:07:15.195210 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-s4c9j" event={"ID":"13fa8118-f5bc-4c64-95dc-89cbfb601187","Type":"ContainerStarted","Data":"2a7162ef7e1c02ea0b744a11db284f0ed2ee66fd1acfbadc781f7f4f2c042947"} Nov 29 08:07:15 crc kubenswrapper[4795]: I1129 08:07:15.772450 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 29 08:07:15 crc kubenswrapper[4795]: I1129 08:07:15.772523 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 29 08:07:15 crc kubenswrapper[4795]: I1129 08:07:15.833396 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-default-external-api-0" Nov 29 08:07:15 crc kubenswrapper[4795]: I1129 08:07:15.870365 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 29 08:07:16 crc kubenswrapper[4795]: I1129 08:07:16.209486 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 29 08:07:16 crc kubenswrapper[4795]: I1129 08:07:16.209540 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 29 08:07:16 crc kubenswrapper[4795]: I1129 08:07:16.747948 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 29 08:07:16 crc kubenswrapper[4795]: I1129 08:07:16.748314 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 29 08:07:16 crc kubenswrapper[4795]: I1129 08:07:16.806614 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 29 08:07:16 crc kubenswrapper[4795]: I1129 08:07:16.826097 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 29 08:07:17 crc kubenswrapper[4795]: I1129 08:07:17.222471 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 29 08:07:17 crc kubenswrapper[4795]: I1129 08:07:17.223632 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 29 08:07:17 crc kubenswrapper[4795]: I1129 08:07:17.236817 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="569960ad-ab7f-454b-b142-dae24e00340f" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 29 08:07:19 crc 
kubenswrapper[4795]: I1129 08:07:19.245742 4795 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 08:07:19 crc kubenswrapper[4795]: I1129 08:07:19.246332 4795 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 08:07:19 crc kubenswrapper[4795]: I1129 08:07:19.440091 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 29 08:07:19 crc kubenswrapper[4795]: I1129 08:07:19.440244 4795 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 08:07:19 crc kubenswrapper[4795]: I1129 08:07:19.469932 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 29 08:07:19 crc kubenswrapper[4795]: I1129 08:07:19.470007 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 29 08:07:19 crc kubenswrapper[4795]: I1129 08:07:19.474188 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 29 08:07:21 crc kubenswrapper[4795]: E1129 08:07:21.224382 4795 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 52236dc856c9ff58b724e49ea15972d1a4376f5bffb0f668fd6fc0b638a4778d is running failed: container process not found" containerID="52236dc856c9ff58b724e49ea15972d1a4376f5bffb0f668fd6fc0b638a4778d" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 29 08:07:21 crc kubenswrapper[4795]: E1129 08:07:21.225257 4795 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 52236dc856c9ff58b724e49ea15972d1a4376f5bffb0f668fd6fc0b638a4778d is running failed: container process not found" 
containerID="52236dc856c9ff58b724e49ea15972d1a4376f5bffb0f668fd6fc0b638a4778d" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 29 08:07:21 crc kubenswrapper[4795]: E1129 08:07:21.225604 4795 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 52236dc856c9ff58b724e49ea15972d1a4376f5bffb0f668fd6fc0b638a4778d is running failed: container process not found" containerID="52236dc856c9ff58b724e49ea15972d1a4376f5bffb0f668fd6fc0b638a4778d" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 29 08:07:21 crc kubenswrapper[4795]: E1129 08:07:21.225656 4795 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 52236dc856c9ff58b724e49ea15972d1a4376f5bffb0f668fd6fc0b638a4778d is running failed: container process not found" probeType="Readiness" pod="openstack/heat-engine-fd5dc85fb-fss4q" podUID="29304def-4cbe-4b1d-abb4-c7bd11587183" containerName="heat-engine" Nov 29 08:07:21 crc kubenswrapper[4795]: I1129 08:07:21.276835 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-fd5dc85fb-fss4q" event={"ID":"29304def-4cbe-4b1d-abb4-c7bd11587183","Type":"ContainerDied","Data":"52236dc856c9ff58b724e49ea15972d1a4376f5bffb0f668fd6fc0b638a4778d"} Nov 29 08:07:21 crc kubenswrapper[4795]: I1129 08:07:21.276919 4795 generic.go:334] "Generic (PLEG): container finished" podID="29304def-4cbe-4b1d-abb4-c7bd11587183" containerID="52236dc856c9ff58b724e49ea15972d1a4376f5bffb0f668fd6fc0b638a4778d" exitCode=0 Nov 29 08:07:24 crc kubenswrapper[4795]: I1129 08:07:24.359063 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-fd5dc85fb-fss4q" event={"ID":"29304def-4cbe-4b1d-abb4-c7bd11587183","Type":"ContainerDied","Data":"4099c3d37028341066a096ffb68b3b1be1a54bc18cc4dd7ac258ec1f6900ebf9"} Nov 29 08:07:24 crc kubenswrapper[4795]: I1129 08:07:24.359750 4795 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4099c3d37028341066a096ffb68b3b1be1a54bc18cc4dd7ac258ec1f6900ebf9" Nov 29 08:07:24 crc kubenswrapper[4795]: I1129 08:07:24.410002 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-fd5dc85fb-fss4q" Nov 29 08:07:24 crc kubenswrapper[4795]: I1129 08:07:24.562506 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29304def-4cbe-4b1d-abb4-c7bd11587183-config-data-custom\") pod \"29304def-4cbe-4b1d-abb4-c7bd11587183\" (UID: \"29304def-4cbe-4b1d-abb4-c7bd11587183\") " Nov 29 08:07:24 crc kubenswrapper[4795]: I1129 08:07:24.562817 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29304def-4cbe-4b1d-abb4-c7bd11587183-config-data\") pod \"29304def-4cbe-4b1d-abb4-c7bd11587183\" (UID: \"29304def-4cbe-4b1d-abb4-c7bd11587183\") " Nov 29 08:07:24 crc kubenswrapper[4795]: I1129 08:07:24.562861 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29304def-4cbe-4b1d-abb4-c7bd11587183-combined-ca-bundle\") pod \"29304def-4cbe-4b1d-abb4-c7bd11587183\" (UID: \"29304def-4cbe-4b1d-abb4-c7bd11587183\") " Nov 29 08:07:24 crc kubenswrapper[4795]: I1129 08:07:24.562979 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48ddz\" (UniqueName: \"kubernetes.io/projected/29304def-4cbe-4b1d-abb4-c7bd11587183-kube-api-access-48ddz\") pod \"29304def-4cbe-4b1d-abb4-c7bd11587183\" (UID: \"29304def-4cbe-4b1d-abb4-c7bd11587183\") " Nov 29 08:07:24 crc kubenswrapper[4795]: I1129 08:07:24.569021 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29304def-4cbe-4b1d-abb4-c7bd11587183-config-data-custom" 
(OuterVolumeSpecName: "config-data-custom") pod "29304def-4cbe-4b1d-abb4-c7bd11587183" (UID: "29304def-4cbe-4b1d-abb4-c7bd11587183"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:07:24 crc kubenswrapper[4795]: I1129 08:07:24.579876 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29304def-4cbe-4b1d-abb4-c7bd11587183-kube-api-access-48ddz" (OuterVolumeSpecName: "kube-api-access-48ddz") pod "29304def-4cbe-4b1d-abb4-c7bd11587183" (UID: "29304def-4cbe-4b1d-abb4-c7bd11587183"). InnerVolumeSpecName "kube-api-access-48ddz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:07:24 crc kubenswrapper[4795]: I1129 08:07:24.598942 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29304def-4cbe-4b1d-abb4-c7bd11587183-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29304def-4cbe-4b1d-abb4-c7bd11587183" (UID: "29304def-4cbe-4b1d-abb4-c7bd11587183"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:07:24 crc kubenswrapper[4795]: I1129 08:07:24.684246 4795 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29304def-4cbe-4b1d-abb4-c7bd11587183-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:24 crc kubenswrapper[4795]: I1129 08:07:24.684276 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29304def-4cbe-4b1d-abb4-c7bd11587183-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:24 crc kubenswrapper[4795]: I1129 08:07:24.684286 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48ddz\" (UniqueName: \"kubernetes.io/projected/29304def-4cbe-4b1d-abb4-c7bd11587183-kube-api-access-48ddz\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:24 crc kubenswrapper[4795]: I1129 08:07:24.694352 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29304def-4cbe-4b1d-abb4-c7bd11587183-config-data" (OuterVolumeSpecName: "config-data") pod "29304def-4cbe-4b1d-abb4-c7bd11587183" (UID: "29304def-4cbe-4b1d-abb4-c7bd11587183"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:07:24 crc kubenswrapper[4795]: I1129 08:07:24.786496 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29304def-4cbe-4b1d-abb4-c7bd11587183-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:25 crc kubenswrapper[4795]: I1129 08:07:25.370443 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-fd5dc85fb-fss4q" Nov 29 08:07:25 crc kubenswrapper[4795]: I1129 08:07:25.370473 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-s4c9j" event={"ID":"13fa8118-f5bc-4c64-95dc-89cbfb601187","Type":"ContainerStarted","Data":"695eeba4b7ef1f6f2a60fc9b58be571eebb29d2667b03ecaf4001ceda62eaf88"} Nov 29 08:07:25 crc kubenswrapper[4795]: I1129 08:07:25.386941 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-s4c9j" podStartSLOduration=3.383199997 podStartE2EDuration="13.38692385s" podCreationTimestamp="2025-11-29 08:07:12 +0000 UTC" firstStartedPulling="2025-11-29 08:07:14.215141437 +0000 UTC m=+1680.190717227" lastFinishedPulling="2025-11-29 08:07:24.21886529 +0000 UTC m=+1690.194441080" observedRunningTime="2025-11-29 08:07:25.383480063 +0000 UTC m=+1691.359055863" watchObservedRunningTime="2025-11-29 08:07:25.38692385 +0000 UTC m=+1691.362499640" Nov 29 08:07:25 crc kubenswrapper[4795]: I1129 08:07:25.408536 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-fd5dc85fb-fss4q"] Nov 29 08:07:25 crc kubenswrapper[4795]: I1129 08:07:25.419731 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-fd5dc85fb-fss4q"] Nov 29 08:07:26 crc kubenswrapper[4795]: I1129 08:07:26.338280 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29304def-4cbe-4b1d-abb4-c7bd11587183" path="/var/lib/kubelet/pods/29304def-4cbe-4b1d-abb4-c7bd11587183/volumes" Nov 29 08:07:26 crc kubenswrapper[4795]: I1129 08:07:26.383390 4795 generic.go:334] "Generic (PLEG): container finished" podID="569960ad-ab7f-454b-b142-dae24e00340f" containerID="e157dbb58ff15a1ab1105d74992b5fcb067df0425fca9c798c7f42c6439b79c2" exitCode=137 Nov 29 08:07:26 crc kubenswrapper[4795]: I1129 08:07:26.384364 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"569960ad-ab7f-454b-b142-dae24e00340f","Type":"ContainerDied","Data":"e157dbb58ff15a1ab1105d74992b5fcb067df0425fca9c798c7f42c6439b79c2"} Nov 29 08:07:26 crc kubenswrapper[4795]: I1129 08:07:26.384390 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"569960ad-ab7f-454b-b142-dae24e00340f","Type":"ContainerDied","Data":"9571b828ec89c69dfce219be181cc57f71afaf9312711400fb2781639bf296ad"} Nov 29 08:07:26 crc kubenswrapper[4795]: I1129 08:07:26.384400 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9571b828ec89c69dfce219be181cc57f71afaf9312711400fb2781639bf296ad" Nov 29 08:07:26 crc kubenswrapper[4795]: I1129 08:07:26.447907 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:07:26 crc kubenswrapper[4795]: I1129 08:07:26.630308 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/569960ad-ab7f-454b-b142-dae24e00340f-run-httpd\") pod \"569960ad-ab7f-454b-b142-dae24e00340f\" (UID: \"569960ad-ab7f-454b-b142-dae24e00340f\") " Nov 29 08:07:26 crc kubenswrapper[4795]: I1129 08:07:26.630895 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/569960ad-ab7f-454b-b142-dae24e00340f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "569960ad-ab7f-454b-b142-dae24e00340f" (UID: "569960ad-ab7f-454b-b142-dae24e00340f"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:07:26 crc kubenswrapper[4795]: I1129 08:07:26.631007 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/569960ad-ab7f-454b-b142-dae24e00340f-combined-ca-bundle\") pod \"569960ad-ab7f-454b-b142-dae24e00340f\" (UID: \"569960ad-ab7f-454b-b142-dae24e00340f\") " Nov 29 08:07:26 crc kubenswrapper[4795]: I1129 08:07:26.631103 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/569960ad-ab7f-454b-b142-dae24e00340f-sg-core-conf-yaml\") pod \"569960ad-ab7f-454b-b142-dae24e00340f\" (UID: \"569960ad-ab7f-454b-b142-dae24e00340f\") " Nov 29 08:07:26 crc kubenswrapper[4795]: I1129 08:07:26.631186 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/569960ad-ab7f-454b-b142-dae24e00340f-scripts\") pod \"569960ad-ab7f-454b-b142-dae24e00340f\" (UID: \"569960ad-ab7f-454b-b142-dae24e00340f\") " Nov 29 08:07:26 crc kubenswrapper[4795]: I1129 08:07:26.631289 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sk8q\" (UniqueName: \"kubernetes.io/projected/569960ad-ab7f-454b-b142-dae24e00340f-kube-api-access-4sk8q\") pod \"569960ad-ab7f-454b-b142-dae24e00340f\" (UID: \"569960ad-ab7f-454b-b142-dae24e00340f\") " Nov 29 08:07:26 crc kubenswrapper[4795]: I1129 08:07:26.631346 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/569960ad-ab7f-454b-b142-dae24e00340f-config-data\") pod \"569960ad-ab7f-454b-b142-dae24e00340f\" (UID: \"569960ad-ab7f-454b-b142-dae24e00340f\") " Nov 29 08:07:26 crc kubenswrapper[4795]: I1129 08:07:26.631416 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/569960ad-ab7f-454b-b142-dae24e00340f-log-httpd\") pod \"569960ad-ab7f-454b-b142-dae24e00340f\" (UID: \"569960ad-ab7f-454b-b142-dae24e00340f\") " Nov 29 08:07:26 crc kubenswrapper[4795]: I1129 08:07:26.632152 4795 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/569960ad-ab7f-454b-b142-dae24e00340f-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:26 crc kubenswrapper[4795]: I1129 08:07:26.633081 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/569960ad-ab7f-454b-b142-dae24e00340f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "569960ad-ab7f-454b-b142-dae24e00340f" (UID: "569960ad-ab7f-454b-b142-dae24e00340f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:07:26 crc kubenswrapper[4795]: I1129 08:07:26.639280 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/569960ad-ab7f-454b-b142-dae24e00340f-kube-api-access-4sk8q" (OuterVolumeSpecName: "kube-api-access-4sk8q") pod "569960ad-ab7f-454b-b142-dae24e00340f" (UID: "569960ad-ab7f-454b-b142-dae24e00340f"). InnerVolumeSpecName "kube-api-access-4sk8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:07:26 crc kubenswrapper[4795]: I1129 08:07:26.641873 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/569960ad-ab7f-454b-b142-dae24e00340f-scripts" (OuterVolumeSpecName: "scripts") pod "569960ad-ab7f-454b-b142-dae24e00340f" (UID: "569960ad-ab7f-454b-b142-dae24e00340f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:07:26 crc kubenswrapper[4795]: I1129 08:07:26.678796 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/569960ad-ab7f-454b-b142-dae24e00340f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "569960ad-ab7f-454b-b142-dae24e00340f" (UID: "569960ad-ab7f-454b-b142-dae24e00340f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:07:26 crc kubenswrapper[4795]: I1129 08:07:26.733666 4795 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/569960ad-ab7f-454b-b142-dae24e00340f-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:26 crc kubenswrapper[4795]: I1129 08:07:26.733699 4795 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/569960ad-ab7f-454b-b142-dae24e00340f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:26 crc kubenswrapper[4795]: I1129 08:07:26.733711 4795 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/569960ad-ab7f-454b-b142-dae24e00340f-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:26 crc kubenswrapper[4795]: I1129 08:07:26.733722 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sk8q\" (UniqueName: \"kubernetes.io/projected/569960ad-ab7f-454b-b142-dae24e00340f-kube-api-access-4sk8q\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:26 crc kubenswrapper[4795]: I1129 08:07:26.735504 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/569960ad-ab7f-454b-b142-dae24e00340f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "569960ad-ab7f-454b-b142-dae24e00340f" (UID: "569960ad-ab7f-454b-b142-dae24e00340f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:07:26 crc kubenswrapper[4795]: I1129 08:07:26.748330 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/569960ad-ab7f-454b-b142-dae24e00340f-config-data" (OuterVolumeSpecName: "config-data") pod "569960ad-ab7f-454b-b142-dae24e00340f" (UID: "569960ad-ab7f-454b-b142-dae24e00340f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:07:26 crc kubenswrapper[4795]: I1129 08:07:26.836287 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/569960ad-ab7f-454b-b142-dae24e00340f-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:26 crc kubenswrapper[4795]: I1129 08:07:26.836616 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/569960ad-ab7f-454b-b142-dae24e00340f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:27 crc kubenswrapper[4795]: I1129 08:07:27.395644 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:07:27 crc kubenswrapper[4795]: I1129 08:07:27.501404 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:07:27 crc kubenswrapper[4795]: I1129 08:07:27.519372 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:07:27 crc kubenswrapper[4795]: I1129 08:07:27.533861 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:07:27 crc kubenswrapper[4795]: E1129 08:07:27.534557 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="569960ad-ab7f-454b-b142-dae24e00340f" containerName="sg-core" Nov 29 08:07:27 crc kubenswrapper[4795]: I1129 08:07:27.534680 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="569960ad-ab7f-454b-b142-dae24e00340f" containerName="sg-core" Nov 29 08:07:27 crc kubenswrapper[4795]: E1129 08:07:27.534720 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="569960ad-ab7f-454b-b142-dae24e00340f" containerName="proxy-httpd" Nov 29 08:07:27 crc kubenswrapper[4795]: I1129 08:07:27.534731 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="569960ad-ab7f-454b-b142-dae24e00340f" containerName="proxy-httpd" Nov 29 08:07:27 crc kubenswrapper[4795]: E1129 08:07:27.534750 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="569960ad-ab7f-454b-b142-dae24e00340f" containerName="ceilometer-central-agent" Nov 29 08:07:27 crc kubenswrapper[4795]: I1129 08:07:27.534759 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="569960ad-ab7f-454b-b142-dae24e00340f" containerName="ceilometer-central-agent" Nov 29 08:07:27 crc kubenswrapper[4795]: E1129 08:07:27.534810 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29304def-4cbe-4b1d-abb4-c7bd11587183" containerName="heat-engine" Nov 29 08:07:27 crc kubenswrapper[4795]: I1129 08:07:27.534819 4795 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="29304def-4cbe-4b1d-abb4-c7bd11587183" containerName="heat-engine" Nov 29 08:07:27 crc kubenswrapper[4795]: E1129 08:07:27.534833 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="569960ad-ab7f-454b-b142-dae24e00340f" containerName="ceilometer-notification-agent" Nov 29 08:07:27 crc kubenswrapper[4795]: I1129 08:07:27.534841 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="569960ad-ab7f-454b-b142-dae24e00340f" containerName="ceilometer-notification-agent" Nov 29 08:07:27 crc kubenswrapper[4795]: I1129 08:07:27.535095 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="569960ad-ab7f-454b-b142-dae24e00340f" containerName="proxy-httpd" Nov 29 08:07:27 crc kubenswrapper[4795]: I1129 08:07:27.535119 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="569960ad-ab7f-454b-b142-dae24e00340f" containerName="sg-core" Nov 29 08:07:27 crc kubenswrapper[4795]: I1129 08:07:27.535136 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="569960ad-ab7f-454b-b142-dae24e00340f" containerName="ceilometer-central-agent" Nov 29 08:07:27 crc kubenswrapper[4795]: I1129 08:07:27.535154 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="29304def-4cbe-4b1d-abb4-c7bd11587183" containerName="heat-engine" Nov 29 08:07:27 crc kubenswrapper[4795]: I1129 08:07:27.535170 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="569960ad-ab7f-454b-b142-dae24e00340f" containerName="ceilometer-notification-agent" Nov 29 08:07:27 crc kubenswrapper[4795]: I1129 08:07:27.538039 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:07:27 crc kubenswrapper[4795]: I1129 08:07:27.542601 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 29 08:07:27 crc kubenswrapper[4795]: I1129 08:07:27.544076 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 29 08:07:27 crc kubenswrapper[4795]: I1129 08:07:27.549364 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:07:27 crc kubenswrapper[4795]: I1129 08:07:27.662281 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4d34ca31-2fe0-41cc-b563-25e471286134-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4d34ca31-2fe0-41cc-b563-25e471286134\") " pod="openstack/ceilometer-0" Nov 29 08:07:27 crc kubenswrapper[4795]: I1129 08:07:27.662367 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d34ca31-2fe0-41cc-b563-25e471286134-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4d34ca31-2fe0-41cc-b563-25e471286134\") " pod="openstack/ceilometer-0" Nov 29 08:07:27 crc kubenswrapper[4795]: I1129 08:07:27.662418 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqxrv\" (UniqueName: \"kubernetes.io/projected/4d34ca31-2fe0-41cc-b563-25e471286134-kube-api-access-mqxrv\") pod \"ceilometer-0\" (UID: \"4d34ca31-2fe0-41cc-b563-25e471286134\") " pod="openstack/ceilometer-0" Nov 29 08:07:27 crc kubenswrapper[4795]: I1129 08:07:27.662473 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d34ca31-2fe0-41cc-b563-25e471286134-log-httpd\") pod \"ceilometer-0\" (UID: 
\"4d34ca31-2fe0-41cc-b563-25e471286134\") " pod="openstack/ceilometer-0" Nov 29 08:07:27 crc kubenswrapper[4795]: I1129 08:07:27.662502 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d34ca31-2fe0-41cc-b563-25e471286134-run-httpd\") pod \"ceilometer-0\" (UID: \"4d34ca31-2fe0-41cc-b563-25e471286134\") " pod="openstack/ceilometer-0" Nov 29 08:07:27 crc kubenswrapper[4795]: I1129 08:07:27.662535 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d34ca31-2fe0-41cc-b563-25e471286134-config-data\") pod \"ceilometer-0\" (UID: \"4d34ca31-2fe0-41cc-b563-25e471286134\") " pod="openstack/ceilometer-0" Nov 29 08:07:27 crc kubenswrapper[4795]: I1129 08:07:27.662573 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d34ca31-2fe0-41cc-b563-25e471286134-scripts\") pod \"ceilometer-0\" (UID: \"4d34ca31-2fe0-41cc-b563-25e471286134\") " pod="openstack/ceilometer-0" Nov 29 08:07:27 crc kubenswrapper[4795]: I1129 08:07:27.764429 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d34ca31-2fe0-41cc-b563-25e471286134-scripts\") pod \"ceilometer-0\" (UID: \"4d34ca31-2fe0-41cc-b563-25e471286134\") " pod="openstack/ceilometer-0" Nov 29 08:07:27 crc kubenswrapper[4795]: I1129 08:07:27.764576 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4d34ca31-2fe0-41cc-b563-25e471286134-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4d34ca31-2fe0-41cc-b563-25e471286134\") " pod="openstack/ceilometer-0" Nov 29 08:07:27 crc kubenswrapper[4795]: I1129 08:07:27.764653 4795 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d34ca31-2fe0-41cc-b563-25e471286134-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4d34ca31-2fe0-41cc-b563-25e471286134\") " pod="openstack/ceilometer-0" Nov 29 08:07:27 crc kubenswrapper[4795]: I1129 08:07:27.764687 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqxrv\" (UniqueName: \"kubernetes.io/projected/4d34ca31-2fe0-41cc-b563-25e471286134-kube-api-access-mqxrv\") pod \"ceilometer-0\" (UID: \"4d34ca31-2fe0-41cc-b563-25e471286134\") " pod="openstack/ceilometer-0" Nov 29 08:07:27 crc kubenswrapper[4795]: I1129 08:07:27.764752 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d34ca31-2fe0-41cc-b563-25e471286134-log-httpd\") pod \"ceilometer-0\" (UID: \"4d34ca31-2fe0-41cc-b563-25e471286134\") " pod="openstack/ceilometer-0" Nov 29 08:07:27 crc kubenswrapper[4795]: I1129 08:07:27.764795 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d34ca31-2fe0-41cc-b563-25e471286134-run-httpd\") pod \"ceilometer-0\" (UID: \"4d34ca31-2fe0-41cc-b563-25e471286134\") " pod="openstack/ceilometer-0" Nov 29 08:07:27 crc kubenswrapper[4795]: I1129 08:07:27.764830 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d34ca31-2fe0-41cc-b563-25e471286134-config-data\") pod \"ceilometer-0\" (UID: \"4d34ca31-2fe0-41cc-b563-25e471286134\") " pod="openstack/ceilometer-0" Nov 29 08:07:27 crc kubenswrapper[4795]: I1129 08:07:27.765508 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d34ca31-2fe0-41cc-b563-25e471286134-log-httpd\") pod \"ceilometer-0\" (UID: \"4d34ca31-2fe0-41cc-b563-25e471286134\") " pod="openstack/ceilometer-0" Nov 29 08:07:27 
crc kubenswrapper[4795]: I1129 08:07:27.765615 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d34ca31-2fe0-41cc-b563-25e471286134-run-httpd\") pod \"ceilometer-0\" (UID: \"4d34ca31-2fe0-41cc-b563-25e471286134\") " pod="openstack/ceilometer-0" Nov 29 08:07:27 crc kubenswrapper[4795]: I1129 08:07:27.769067 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d34ca31-2fe0-41cc-b563-25e471286134-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4d34ca31-2fe0-41cc-b563-25e471286134\") " pod="openstack/ceilometer-0" Nov 29 08:07:27 crc kubenswrapper[4795]: I1129 08:07:27.769470 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4d34ca31-2fe0-41cc-b563-25e471286134-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4d34ca31-2fe0-41cc-b563-25e471286134\") " pod="openstack/ceilometer-0" Nov 29 08:07:27 crc kubenswrapper[4795]: I1129 08:07:27.770562 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d34ca31-2fe0-41cc-b563-25e471286134-scripts\") pod \"ceilometer-0\" (UID: \"4d34ca31-2fe0-41cc-b563-25e471286134\") " pod="openstack/ceilometer-0" Nov 29 08:07:27 crc kubenswrapper[4795]: I1129 08:07:27.770928 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d34ca31-2fe0-41cc-b563-25e471286134-config-data\") pod \"ceilometer-0\" (UID: \"4d34ca31-2fe0-41cc-b563-25e471286134\") " pod="openstack/ceilometer-0" Nov 29 08:07:27 crc kubenswrapper[4795]: I1129 08:07:27.806494 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqxrv\" (UniqueName: \"kubernetes.io/projected/4d34ca31-2fe0-41cc-b563-25e471286134-kube-api-access-mqxrv\") pod \"ceilometer-0\" (UID: 
\"4d34ca31-2fe0-41cc-b563-25e471286134\") " pod="openstack/ceilometer-0" Nov 29 08:07:27 crc kubenswrapper[4795]: I1129 08:07:27.856387 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:07:28 crc kubenswrapper[4795]: I1129 08:07:28.291212 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="569960ad-ab7f-454b-b142-dae24e00340f" path="/var/lib/kubelet/pods/569960ad-ab7f-454b-b142-dae24e00340f/volumes" Nov 29 08:07:28 crc kubenswrapper[4795]: W1129 08:07:28.326988 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d34ca31_2fe0_41cc_b563_25e471286134.slice/crio-4f1dcd0e40538fbb5ac358672c1a8127681884dc5052b124460e97f11ebd60f4 WatchSource:0}: Error finding container 4f1dcd0e40538fbb5ac358672c1a8127681884dc5052b124460e97f11ebd60f4: Status 404 returned error can't find the container with id 4f1dcd0e40538fbb5ac358672c1a8127681884dc5052b124460e97f11ebd60f4 Nov 29 08:07:28 crc kubenswrapper[4795]: I1129 08:07:28.330504 4795 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 08:07:28 crc kubenswrapper[4795]: I1129 08:07:28.330734 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:07:28 crc kubenswrapper[4795]: I1129 08:07:28.409170 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d34ca31-2fe0-41cc-b563-25e471286134","Type":"ContainerStarted","Data":"4f1dcd0e40538fbb5ac358672c1a8127681884dc5052b124460e97f11ebd60f4"} Nov 29 08:07:29 crc kubenswrapper[4795]: I1129 08:07:29.419852 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d34ca31-2fe0-41cc-b563-25e471286134","Type":"ContainerStarted","Data":"a47fc11243059ba0c4162c939c0e8fa204dbd30deebbfc7d903a431d8bc13b59"} Nov 29 08:07:30 crc kubenswrapper[4795]: I1129 
08:07:30.433107 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d34ca31-2fe0-41cc-b563-25e471286134","Type":"ContainerStarted","Data":"bd299d9125f757d8dcf9d89dddc569e1df6d34fc3286b8bb738324d6361e32a6"} Nov 29 08:07:31 crc kubenswrapper[4795]: I1129 08:07:31.444546 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d34ca31-2fe0-41cc-b563-25e471286134","Type":"ContainerStarted","Data":"3147142b65b37b65b7f2c4338bb6558b3eb4c5ffb738893a3092da3e32b2377f"} Nov 29 08:07:32 crc kubenswrapper[4795]: I1129 08:07:32.619420 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d34ca31-2fe0-41cc-b563-25e471286134","Type":"ContainerStarted","Data":"93e013cce7f83a54feb3260e708bf3c1ee4ecda0abaed248d6d0192955816554"} Nov 29 08:07:32 crc kubenswrapper[4795]: I1129 08:07:32.620633 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 29 08:07:32 crc kubenswrapper[4795]: I1129 08:07:32.641061 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.164938204 podStartE2EDuration="5.641042352s" podCreationTimestamp="2025-11-29 08:07:27 +0000 UTC" firstStartedPulling="2025-11-29 08:07:28.33023168 +0000 UTC m=+1694.305807460" lastFinishedPulling="2025-11-29 08:07:31.806335828 +0000 UTC m=+1697.781911608" observedRunningTime="2025-11-29 08:07:32.640279161 +0000 UTC m=+1698.615854941" watchObservedRunningTime="2025-11-29 08:07:32.641042352 +0000 UTC m=+1698.616618142" Nov 29 08:07:33 crc kubenswrapper[4795]: I1129 08:07:33.211663 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:07:34 crc kubenswrapper[4795]: I1129 08:07:34.641176 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4d34ca31-2fe0-41cc-b563-25e471286134" 
containerName="ceilometer-central-agent" containerID="cri-o://a47fc11243059ba0c4162c939c0e8fa204dbd30deebbfc7d903a431d8bc13b59" gracePeriod=30 Nov 29 08:07:34 crc kubenswrapper[4795]: I1129 08:07:34.641231 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4d34ca31-2fe0-41cc-b563-25e471286134" containerName="sg-core" containerID="cri-o://3147142b65b37b65b7f2c4338bb6558b3eb4c5ffb738893a3092da3e32b2377f" gracePeriod=30 Nov 29 08:07:34 crc kubenswrapper[4795]: I1129 08:07:34.641263 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4d34ca31-2fe0-41cc-b563-25e471286134" containerName="proxy-httpd" containerID="cri-o://93e013cce7f83a54feb3260e708bf3c1ee4ecda0abaed248d6d0192955816554" gracePeriod=30 Nov 29 08:07:34 crc kubenswrapper[4795]: I1129 08:07:34.641389 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4d34ca31-2fe0-41cc-b563-25e471286134" containerName="ceilometer-notification-agent" containerID="cri-o://bd299d9125f757d8dcf9d89dddc569e1df6d34fc3286b8bb738324d6361e32a6" gracePeriod=30 Nov 29 08:07:35 crc kubenswrapper[4795]: I1129 08:07:35.658398 4795 generic.go:334] "Generic (PLEG): container finished" podID="4d34ca31-2fe0-41cc-b563-25e471286134" containerID="93e013cce7f83a54feb3260e708bf3c1ee4ecda0abaed248d6d0192955816554" exitCode=0 Nov 29 08:07:35 crc kubenswrapper[4795]: I1129 08:07:35.658793 4795 generic.go:334] "Generic (PLEG): container finished" podID="4d34ca31-2fe0-41cc-b563-25e471286134" containerID="3147142b65b37b65b7f2c4338bb6558b3eb4c5ffb738893a3092da3e32b2377f" exitCode=2 Nov 29 08:07:35 crc kubenswrapper[4795]: I1129 08:07:35.658806 4795 generic.go:334] "Generic (PLEG): container finished" podID="4d34ca31-2fe0-41cc-b563-25e471286134" containerID="bd299d9125f757d8dcf9d89dddc569e1df6d34fc3286b8bb738324d6361e32a6" exitCode=0 Nov 29 08:07:35 crc kubenswrapper[4795]: 
I1129 08:07:35.658827 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d34ca31-2fe0-41cc-b563-25e471286134","Type":"ContainerDied","Data":"93e013cce7f83a54feb3260e708bf3c1ee4ecda0abaed248d6d0192955816554"} Nov 29 08:07:35 crc kubenswrapper[4795]: I1129 08:07:35.658857 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d34ca31-2fe0-41cc-b563-25e471286134","Type":"ContainerDied","Data":"3147142b65b37b65b7f2c4338bb6558b3eb4c5ffb738893a3092da3e32b2377f"} Nov 29 08:07:35 crc kubenswrapper[4795]: I1129 08:07:35.658871 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d34ca31-2fe0-41cc-b563-25e471286134","Type":"ContainerDied","Data":"bd299d9125f757d8dcf9d89dddc569e1df6d34fc3286b8bb738324d6361e32a6"} Nov 29 08:07:36 crc kubenswrapper[4795]: I1129 08:07:36.676940 4795 generic.go:334] "Generic (PLEG): container finished" podID="13fa8118-f5bc-4c64-95dc-89cbfb601187" containerID="695eeba4b7ef1f6f2a60fc9b58be571eebb29d2667b03ecaf4001ceda62eaf88" exitCode=0 Nov 29 08:07:36 crc kubenswrapper[4795]: I1129 08:07:36.677046 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-s4c9j" event={"ID":"13fa8118-f5bc-4c64-95dc-89cbfb601187","Type":"ContainerDied","Data":"695eeba4b7ef1f6f2a60fc9b58be571eebb29d2667b03ecaf4001ceda62eaf88"} Nov 29 08:07:36 crc kubenswrapper[4795]: I1129 08:07:36.933343 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-mnks7"] Nov 29 08:07:36 crc kubenswrapper[4795]: I1129 08:07:36.935647 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-mnks7" Nov 29 08:07:36 crc kubenswrapper[4795]: I1129 08:07:36.947898 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-6a39-account-create-update-q9sbd"] Nov 29 08:07:36 crc kubenswrapper[4795]: I1129 08:07:36.949840 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-6a39-account-create-update-q9sbd" Nov 29 08:07:36 crc kubenswrapper[4795]: I1129 08:07:36.955104 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Nov 29 08:07:36 crc kubenswrapper[4795]: I1129 08:07:36.960507 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-6a39-account-create-update-q9sbd"] Nov 29 08:07:36 crc kubenswrapper[4795]: I1129 08:07:36.976796 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-mnks7"] Nov 29 08:07:37 crc kubenswrapper[4795]: I1129 08:07:37.130204 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lm2s\" (UniqueName: \"kubernetes.io/projected/96ca16b5-5532-4172-bfe8-9154391fa708-kube-api-access-6lm2s\") pod \"aodh-db-create-mnks7\" (UID: \"96ca16b5-5532-4172-bfe8-9154391fa708\") " pod="openstack/aodh-db-create-mnks7" Nov 29 08:07:37 crc kubenswrapper[4795]: I1129 08:07:37.130296 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0cb8029-b3e4-4dc3-ae9a-2b181f2c1ce7-operator-scripts\") pod \"aodh-6a39-account-create-update-q9sbd\" (UID: \"f0cb8029-b3e4-4dc3-ae9a-2b181f2c1ce7\") " pod="openstack/aodh-6a39-account-create-update-q9sbd" Nov 29 08:07:37 crc kubenswrapper[4795]: I1129 08:07:37.130323 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/96ca16b5-5532-4172-bfe8-9154391fa708-operator-scripts\") pod \"aodh-db-create-mnks7\" (UID: \"96ca16b5-5532-4172-bfe8-9154391fa708\") " pod="openstack/aodh-db-create-mnks7" Nov 29 08:07:37 crc kubenswrapper[4795]: I1129 08:07:37.130357 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4w42\" (UniqueName: \"kubernetes.io/projected/f0cb8029-b3e4-4dc3-ae9a-2b181f2c1ce7-kube-api-access-c4w42\") pod \"aodh-6a39-account-create-update-q9sbd\" (UID: \"f0cb8029-b3e4-4dc3-ae9a-2b181f2c1ce7\") " pod="openstack/aodh-6a39-account-create-update-q9sbd" Nov 29 08:07:37 crc kubenswrapper[4795]: I1129 08:07:37.232522 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lm2s\" (UniqueName: \"kubernetes.io/projected/96ca16b5-5532-4172-bfe8-9154391fa708-kube-api-access-6lm2s\") pod \"aodh-db-create-mnks7\" (UID: \"96ca16b5-5532-4172-bfe8-9154391fa708\") " pod="openstack/aodh-db-create-mnks7" Nov 29 08:07:37 crc kubenswrapper[4795]: I1129 08:07:37.232671 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0cb8029-b3e4-4dc3-ae9a-2b181f2c1ce7-operator-scripts\") pod \"aodh-6a39-account-create-update-q9sbd\" (UID: \"f0cb8029-b3e4-4dc3-ae9a-2b181f2c1ce7\") " pod="openstack/aodh-6a39-account-create-update-q9sbd" Nov 29 08:07:37 crc kubenswrapper[4795]: I1129 08:07:37.232705 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96ca16b5-5532-4172-bfe8-9154391fa708-operator-scripts\") pod \"aodh-db-create-mnks7\" (UID: \"96ca16b5-5532-4172-bfe8-9154391fa708\") " pod="openstack/aodh-db-create-mnks7" Nov 29 08:07:37 crc kubenswrapper[4795]: I1129 08:07:37.232739 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4w42\" (UniqueName: 
\"kubernetes.io/projected/f0cb8029-b3e4-4dc3-ae9a-2b181f2c1ce7-kube-api-access-c4w42\") pod \"aodh-6a39-account-create-update-q9sbd\" (UID: \"f0cb8029-b3e4-4dc3-ae9a-2b181f2c1ce7\") " pod="openstack/aodh-6a39-account-create-update-q9sbd" Nov 29 08:07:37 crc kubenswrapper[4795]: I1129 08:07:37.233672 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96ca16b5-5532-4172-bfe8-9154391fa708-operator-scripts\") pod \"aodh-db-create-mnks7\" (UID: \"96ca16b5-5532-4172-bfe8-9154391fa708\") " pod="openstack/aodh-db-create-mnks7" Nov 29 08:07:37 crc kubenswrapper[4795]: I1129 08:07:37.233679 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0cb8029-b3e4-4dc3-ae9a-2b181f2c1ce7-operator-scripts\") pod \"aodh-6a39-account-create-update-q9sbd\" (UID: \"f0cb8029-b3e4-4dc3-ae9a-2b181f2c1ce7\") " pod="openstack/aodh-6a39-account-create-update-q9sbd" Nov 29 08:07:37 crc kubenswrapper[4795]: I1129 08:07:37.257157 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lm2s\" (UniqueName: \"kubernetes.io/projected/96ca16b5-5532-4172-bfe8-9154391fa708-kube-api-access-6lm2s\") pod \"aodh-db-create-mnks7\" (UID: \"96ca16b5-5532-4172-bfe8-9154391fa708\") " pod="openstack/aodh-db-create-mnks7" Nov 29 08:07:37 crc kubenswrapper[4795]: I1129 08:07:37.257530 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4w42\" (UniqueName: \"kubernetes.io/projected/f0cb8029-b3e4-4dc3-ae9a-2b181f2c1ce7-kube-api-access-c4w42\") pod \"aodh-6a39-account-create-update-q9sbd\" (UID: \"f0cb8029-b3e4-4dc3-ae9a-2b181f2c1ce7\") " pod="openstack/aodh-6a39-account-create-update-q9sbd" Nov 29 08:07:37 crc kubenswrapper[4795]: I1129 08:07:37.260257 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-mnks7" Nov 29 08:07:37 crc kubenswrapper[4795]: I1129 08:07:37.275000 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-6a39-account-create-update-q9sbd" Nov 29 08:07:37 crc kubenswrapper[4795]: I1129 08:07:37.854904 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-mnks7"] Nov 29 08:07:37 crc kubenswrapper[4795]: I1129 08:07:37.874841 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-6a39-account-create-update-q9sbd"] Nov 29 08:07:38 crc kubenswrapper[4795]: I1129 08:07:38.210511 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-s4c9j" Nov 29 08:07:38 crc kubenswrapper[4795]: I1129 08:07:38.330537 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13fa8118-f5bc-4c64-95dc-89cbfb601187-config-data\") pod \"13fa8118-f5bc-4c64-95dc-89cbfb601187\" (UID: \"13fa8118-f5bc-4c64-95dc-89cbfb601187\") " Nov 29 08:07:38 crc kubenswrapper[4795]: I1129 08:07:38.330793 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cskt\" (UniqueName: \"kubernetes.io/projected/13fa8118-f5bc-4c64-95dc-89cbfb601187-kube-api-access-4cskt\") pod \"13fa8118-f5bc-4c64-95dc-89cbfb601187\" (UID: \"13fa8118-f5bc-4c64-95dc-89cbfb601187\") " Nov 29 08:07:38 crc kubenswrapper[4795]: I1129 08:07:38.330817 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13fa8118-f5bc-4c64-95dc-89cbfb601187-scripts\") pod \"13fa8118-f5bc-4c64-95dc-89cbfb601187\" (UID: \"13fa8118-f5bc-4c64-95dc-89cbfb601187\") " Nov 29 08:07:38 crc kubenswrapper[4795]: I1129 08:07:38.331091 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/13fa8118-f5bc-4c64-95dc-89cbfb601187-combined-ca-bundle\") pod \"13fa8118-f5bc-4c64-95dc-89cbfb601187\" (UID: \"13fa8118-f5bc-4c64-95dc-89cbfb601187\") " Nov 29 08:07:38 crc kubenswrapper[4795]: I1129 08:07:38.353983 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13fa8118-f5bc-4c64-95dc-89cbfb601187-kube-api-access-4cskt" (OuterVolumeSpecName: "kube-api-access-4cskt") pod "13fa8118-f5bc-4c64-95dc-89cbfb601187" (UID: "13fa8118-f5bc-4c64-95dc-89cbfb601187"). InnerVolumeSpecName "kube-api-access-4cskt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:07:38 crc kubenswrapper[4795]: I1129 08:07:38.356341 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13fa8118-f5bc-4c64-95dc-89cbfb601187-scripts" (OuterVolumeSpecName: "scripts") pod "13fa8118-f5bc-4c64-95dc-89cbfb601187" (UID: "13fa8118-f5bc-4c64-95dc-89cbfb601187"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:07:38 crc kubenswrapper[4795]: I1129 08:07:38.434472 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cskt\" (UniqueName: \"kubernetes.io/projected/13fa8118-f5bc-4c64-95dc-89cbfb601187-kube-api-access-4cskt\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:38 crc kubenswrapper[4795]: I1129 08:07:38.437507 4795 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13fa8118-f5bc-4c64-95dc-89cbfb601187-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:38 crc kubenswrapper[4795]: I1129 08:07:38.437705 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13fa8118-f5bc-4c64-95dc-89cbfb601187-config-data" (OuterVolumeSpecName: "config-data") pod "13fa8118-f5bc-4c64-95dc-89cbfb601187" (UID: "13fa8118-f5bc-4c64-95dc-89cbfb601187"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:07:38 crc kubenswrapper[4795]: I1129 08:07:38.451669 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13fa8118-f5bc-4c64-95dc-89cbfb601187-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "13fa8118-f5bc-4c64-95dc-89cbfb601187" (UID: "13fa8118-f5bc-4c64-95dc-89cbfb601187"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:07:38 crc kubenswrapper[4795]: I1129 08:07:38.541222 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13fa8118-f5bc-4c64-95dc-89cbfb601187-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:38 crc kubenswrapper[4795]: I1129 08:07:38.541258 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13fa8118-f5bc-4c64-95dc-89cbfb601187-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:38 crc kubenswrapper[4795]: I1129 08:07:38.705231 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-s4c9j" event={"ID":"13fa8118-f5bc-4c64-95dc-89cbfb601187","Type":"ContainerDied","Data":"2a7162ef7e1c02ea0b744a11db284f0ed2ee66fd1acfbadc781f7f4f2c042947"} Nov 29 08:07:38 crc kubenswrapper[4795]: I1129 08:07:38.705296 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a7162ef7e1c02ea0b744a11db284f0ed2ee66fd1acfbadc781f7f4f2c042947" Nov 29 08:07:38 crc kubenswrapper[4795]: I1129 08:07:38.705371 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-s4c9j" Nov 29 08:07:38 crc kubenswrapper[4795]: I1129 08:07:38.713347 4795 generic.go:334] "Generic (PLEG): container finished" podID="96ca16b5-5532-4172-bfe8-9154391fa708" containerID="00417553b6a1dd08bedd57806be9526f404052667614675a0a44b8b979c29ead" exitCode=0 Nov 29 08:07:38 crc kubenswrapper[4795]: I1129 08:07:38.713459 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-mnks7" event={"ID":"96ca16b5-5532-4172-bfe8-9154391fa708","Type":"ContainerDied","Data":"00417553b6a1dd08bedd57806be9526f404052667614675a0a44b8b979c29ead"} Nov 29 08:07:38 crc kubenswrapper[4795]: I1129 08:07:38.713502 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-mnks7" event={"ID":"96ca16b5-5532-4172-bfe8-9154391fa708","Type":"ContainerStarted","Data":"b6e36b182123a599488efab3a3ae057a36d1826eee5734458d5b550b2c40444a"} Nov 29 08:07:38 crc kubenswrapper[4795]: I1129 08:07:38.716923 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-6a39-account-create-update-q9sbd" event={"ID":"f0cb8029-b3e4-4dc3-ae9a-2b181f2c1ce7","Type":"ContainerStarted","Data":"a78eebbeff9ae167dfc2e49f3979bd87615fed98f86430cef8ec881e1a357c34"} Nov 29 08:07:38 crc kubenswrapper[4795]: I1129 08:07:38.716959 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-6a39-account-create-update-q9sbd" event={"ID":"f0cb8029-b3e4-4dc3-ae9a-2b181f2c1ce7","Type":"ContainerStarted","Data":"be082bf156c0e321337b310fcc3eb654a9a37e56ceebd41c25cb6a00e05cb635"} Nov 29 08:07:38 crc kubenswrapper[4795]: I1129 08:07:38.834720 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 29 08:07:38 crc kubenswrapper[4795]: E1129 08:07:38.836847 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13fa8118-f5bc-4c64-95dc-89cbfb601187" containerName="nova-cell0-conductor-db-sync" Nov 29 08:07:38 crc 
kubenswrapper[4795]: I1129 08:07:38.836877 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="13fa8118-f5bc-4c64-95dc-89cbfb601187" containerName="nova-cell0-conductor-db-sync" Nov 29 08:07:38 crc kubenswrapper[4795]: I1129 08:07:38.838140 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="13fa8118-f5bc-4c64-95dc-89cbfb601187" containerName="nova-cell0-conductor-db-sync" Nov 29 08:07:38 crc kubenswrapper[4795]: I1129 08:07:38.839339 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 29 08:07:38 crc kubenswrapper[4795]: I1129 08:07:38.850924 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 29 08:07:38 crc kubenswrapper[4795]: I1129 08:07:38.852828 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-hdgz4" Nov 29 08:07:38 crc kubenswrapper[4795]: I1129 08:07:38.885537 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 29 08:07:38 crc kubenswrapper[4795]: I1129 08:07:38.979447 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqn5p\" (UniqueName: \"kubernetes.io/projected/e29df3aa-49f9-4776-8b5d-6448d3032696-kube-api-access-sqn5p\") pod \"nova-cell0-conductor-0\" (UID: \"e29df3aa-49f9-4776-8b5d-6448d3032696\") " pod="openstack/nova-cell0-conductor-0" Nov 29 08:07:38 crc kubenswrapper[4795]: I1129 08:07:38.979857 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e29df3aa-49f9-4776-8b5d-6448d3032696-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e29df3aa-49f9-4776-8b5d-6448d3032696\") " pod="openstack/nova-cell0-conductor-0" Nov 29 08:07:38 crc kubenswrapper[4795]: I1129 08:07:38.980140 4795 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e29df3aa-49f9-4776-8b5d-6448d3032696-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e29df3aa-49f9-4776-8b5d-6448d3032696\") " pod="openstack/nova-cell0-conductor-0" Nov 29 08:07:39 crc kubenswrapper[4795]: I1129 08:07:39.082779 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqn5p\" (UniqueName: \"kubernetes.io/projected/e29df3aa-49f9-4776-8b5d-6448d3032696-kube-api-access-sqn5p\") pod \"nova-cell0-conductor-0\" (UID: \"e29df3aa-49f9-4776-8b5d-6448d3032696\") " pod="openstack/nova-cell0-conductor-0" Nov 29 08:07:39 crc kubenswrapper[4795]: I1129 08:07:39.083428 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e29df3aa-49f9-4776-8b5d-6448d3032696-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e29df3aa-49f9-4776-8b5d-6448d3032696\") " pod="openstack/nova-cell0-conductor-0" Nov 29 08:07:39 crc kubenswrapper[4795]: I1129 08:07:39.083519 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e29df3aa-49f9-4776-8b5d-6448d3032696-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e29df3aa-49f9-4776-8b5d-6448d3032696\") " pod="openstack/nova-cell0-conductor-0" Nov 29 08:07:39 crc kubenswrapper[4795]: I1129 08:07:39.091177 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e29df3aa-49f9-4776-8b5d-6448d3032696-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e29df3aa-49f9-4776-8b5d-6448d3032696\") " pod="openstack/nova-cell0-conductor-0" Nov 29 08:07:39 crc kubenswrapper[4795]: I1129 08:07:39.091204 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e29df3aa-49f9-4776-8b5d-6448d3032696-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e29df3aa-49f9-4776-8b5d-6448d3032696\") " pod="openstack/nova-cell0-conductor-0" Nov 29 08:07:39 crc kubenswrapper[4795]: I1129 08:07:39.104290 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqn5p\" (UniqueName: \"kubernetes.io/projected/e29df3aa-49f9-4776-8b5d-6448d3032696-kube-api-access-sqn5p\") pod \"nova-cell0-conductor-0\" (UID: \"e29df3aa-49f9-4776-8b5d-6448d3032696\") " pod="openstack/nova-cell0-conductor-0" Nov 29 08:07:39 crc kubenswrapper[4795]: I1129 08:07:39.194550 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 29 08:07:39 crc kubenswrapper[4795]: I1129 08:07:39.733445 4795 generic.go:334] "Generic (PLEG): container finished" podID="f0cb8029-b3e4-4dc3-ae9a-2b181f2c1ce7" containerID="a78eebbeff9ae167dfc2e49f3979bd87615fed98f86430cef8ec881e1a357c34" exitCode=0 Nov 29 08:07:39 crc kubenswrapper[4795]: I1129 08:07:39.733534 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-6a39-account-create-update-q9sbd" event={"ID":"f0cb8029-b3e4-4dc3-ae9a-2b181f2c1ce7","Type":"ContainerDied","Data":"a78eebbeff9ae167dfc2e49f3979bd87615fed98f86430cef8ec881e1a357c34"} Nov 29 08:07:39 crc kubenswrapper[4795]: I1129 08:07:39.894668 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 29 08:07:39 crc kubenswrapper[4795]: W1129 08:07:39.903253 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode29df3aa_49f9_4776_8b5d_6448d3032696.slice/crio-bf234b3d8f06d0a8f78374d8daec86874de029561825d24addc9f5af1e2958b0 WatchSource:0}: Error finding container bf234b3d8f06d0a8f78374d8daec86874de029561825d24addc9f5af1e2958b0: Status 404 returned error can't find the container with id 
bf234b3d8f06d0a8f78374d8daec86874de029561825d24addc9f5af1e2958b0 Nov 29 08:07:40 crc kubenswrapper[4795]: I1129 08:07:40.099651 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-6a39-account-create-update-q9sbd" Nov 29 08:07:40 crc kubenswrapper[4795]: I1129 08:07:40.182394 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-mnks7" Nov 29 08:07:40 crc kubenswrapper[4795]: I1129 08:07:40.212858 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0cb8029-b3e4-4dc3-ae9a-2b181f2c1ce7-operator-scripts\") pod \"f0cb8029-b3e4-4dc3-ae9a-2b181f2c1ce7\" (UID: \"f0cb8029-b3e4-4dc3-ae9a-2b181f2c1ce7\") " Nov 29 08:07:40 crc kubenswrapper[4795]: I1129 08:07:40.212978 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4w42\" (UniqueName: \"kubernetes.io/projected/f0cb8029-b3e4-4dc3-ae9a-2b181f2c1ce7-kube-api-access-c4w42\") pod \"f0cb8029-b3e4-4dc3-ae9a-2b181f2c1ce7\" (UID: \"f0cb8029-b3e4-4dc3-ae9a-2b181f2c1ce7\") " Nov 29 08:07:40 crc kubenswrapper[4795]: I1129 08:07:40.214801 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0cb8029-b3e4-4dc3-ae9a-2b181f2c1ce7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f0cb8029-b3e4-4dc3-ae9a-2b181f2c1ce7" (UID: "f0cb8029-b3e4-4dc3-ae9a-2b181f2c1ce7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:07:40 crc kubenswrapper[4795]: I1129 08:07:40.218521 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0cb8029-b3e4-4dc3-ae9a-2b181f2c1ce7-kube-api-access-c4w42" (OuterVolumeSpecName: "kube-api-access-c4w42") pod "f0cb8029-b3e4-4dc3-ae9a-2b181f2c1ce7" (UID: "f0cb8029-b3e4-4dc3-ae9a-2b181f2c1ce7"). 
InnerVolumeSpecName "kube-api-access-c4w42". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:07:40 crc kubenswrapper[4795]: I1129 08:07:40.314447 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96ca16b5-5532-4172-bfe8-9154391fa708-operator-scripts\") pod \"96ca16b5-5532-4172-bfe8-9154391fa708\" (UID: \"96ca16b5-5532-4172-bfe8-9154391fa708\") " Nov 29 08:07:40 crc kubenswrapper[4795]: I1129 08:07:40.314944 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96ca16b5-5532-4172-bfe8-9154391fa708-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "96ca16b5-5532-4172-bfe8-9154391fa708" (UID: "96ca16b5-5532-4172-bfe8-9154391fa708"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:07:40 crc kubenswrapper[4795]: I1129 08:07:40.314984 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lm2s\" (UniqueName: \"kubernetes.io/projected/96ca16b5-5532-4172-bfe8-9154391fa708-kube-api-access-6lm2s\") pod \"96ca16b5-5532-4172-bfe8-9154391fa708\" (UID: \"96ca16b5-5532-4172-bfe8-9154391fa708\") " Nov 29 08:07:40 crc kubenswrapper[4795]: I1129 08:07:40.315478 4795 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0cb8029-b3e4-4dc3-ae9a-2b181f2c1ce7-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:40 crc kubenswrapper[4795]: I1129 08:07:40.315495 4795 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96ca16b5-5532-4172-bfe8-9154391fa708-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:40 crc kubenswrapper[4795]: I1129 08:07:40.315506 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4w42\" (UniqueName: 
\"kubernetes.io/projected/f0cb8029-b3e4-4dc3-ae9a-2b181f2c1ce7-kube-api-access-c4w42\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:40 crc kubenswrapper[4795]: I1129 08:07:40.318712 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96ca16b5-5532-4172-bfe8-9154391fa708-kube-api-access-6lm2s" (OuterVolumeSpecName: "kube-api-access-6lm2s") pod "96ca16b5-5532-4172-bfe8-9154391fa708" (UID: "96ca16b5-5532-4172-bfe8-9154391fa708"). InnerVolumeSpecName "kube-api-access-6lm2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:07:40 crc kubenswrapper[4795]: I1129 08:07:40.418279 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lm2s\" (UniqueName: \"kubernetes.io/projected/96ca16b5-5532-4172-bfe8-9154391fa708-kube-api-access-6lm2s\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:40 crc kubenswrapper[4795]: I1129 08:07:40.746077 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-mnks7" event={"ID":"96ca16b5-5532-4172-bfe8-9154391fa708","Type":"ContainerDied","Data":"b6e36b182123a599488efab3a3ae057a36d1826eee5734458d5b550b2c40444a"} Nov 29 08:07:40 crc kubenswrapper[4795]: I1129 08:07:40.746114 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6e36b182123a599488efab3a3ae057a36d1826eee5734458d5b550b2c40444a" Nov 29 08:07:40 crc kubenswrapper[4795]: I1129 08:07:40.746184 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-mnks7" Nov 29 08:07:40 crc kubenswrapper[4795]: I1129 08:07:40.762124 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-6a39-account-create-update-q9sbd" Nov 29 08:07:40 crc kubenswrapper[4795]: I1129 08:07:40.762397 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-6a39-account-create-update-q9sbd" event={"ID":"f0cb8029-b3e4-4dc3-ae9a-2b181f2c1ce7","Type":"ContainerDied","Data":"be082bf156c0e321337b310fcc3eb654a9a37e56ceebd41c25cb6a00e05cb635"} Nov 29 08:07:40 crc kubenswrapper[4795]: I1129 08:07:40.762441 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be082bf156c0e321337b310fcc3eb654a9a37e56ceebd41c25cb6a00e05cb635" Nov 29 08:07:40 crc kubenswrapper[4795]: I1129 08:07:40.764073 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e29df3aa-49f9-4776-8b5d-6448d3032696","Type":"ContainerStarted","Data":"5e6f48dbf1544d57985fbfff4201daae7a7aeca759e73857fdec323ea92f3c7b"} Nov 29 08:07:40 crc kubenswrapper[4795]: I1129 08:07:40.764116 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e29df3aa-49f9-4776-8b5d-6448d3032696","Type":"ContainerStarted","Data":"bf234b3d8f06d0a8f78374d8daec86874de029561825d24addc9f5af1e2958b0"} Nov 29 08:07:40 crc kubenswrapper[4795]: I1129 08:07:40.764209 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 29 08:07:40 crc kubenswrapper[4795]: I1129 08:07:40.791005 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.790990601 podStartE2EDuration="2.790990601s" podCreationTimestamp="2025-11-29 08:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:07:40.776333985 +0000 UTC m=+1706.751909775" watchObservedRunningTime="2025-11-29 08:07:40.790990601 +0000 UTC m=+1706.766566391" Nov 29 08:07:41 crc 
kubenswrapper[4795]: I1129 08:07:41.941199 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:07:41 crc kubenswrapper[4795]: I1129 08:07:41.941629 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:07:42 crc kubenswrapper[4795]: I1129 08:07:42.274456 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-2r5cb"] Nov 29 08:07:42 crc kubenswrapper[4795]: E1129 08:07:42.275053 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96ca16b5-5532-4172-bfe8-9154391fa708" containerName="mariadb-database-create" Nov 29 08:07:42 crc kubenswrapper[4795]: I1129 08:07:42.275073 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="96ca16b5-5532-4172-bfe8-9154391fa708" containerName="mariadb-database-create" Nov 29 08:07:42 crc kubenswrapper[4795]: E1129 08:07:42.275120 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0cb8029-b3e4-4dc3-ae9a-2b181f2c1ce7" containerName="mariadb-account-create-update" Nov 29 08:07:42 crc kubenswrapper[4795]: I1129 08:07:42.275128 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0cb8029-b3e4-4dc3-ae9a-2b181f2c1ce7" containerName="mariadb-account-create-update" Nov 29 08:07:42 crc kubenswrapper[4795]: I1129 08:07:42.275342 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="96ca16b5-5532-4172-bfe8-9154391fa708" containerName="mariadb-database-create" Nov 29 08:07:42 crc kubenswrapper[4795]: I1129 08:07:42.275382 4795 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f0cb8029-b3e4-4dc3-ae9a-2b181f2c1ce7" containerName="mariadb-account-create-update" Nov 29 08:07:42 crc kubenswrapper[4795]: I1129 08:07:42.276208 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-2r5cb" Nov 29 08:07:42 crc kubenswrapper[4795]: I1129 08:07:42.285342 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 29 08:07:42 crc kubenswrapper[4795]: I1129 08:07:42.285376 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 29 08:07:42 crc kubenswrapper[4795]: I1129 08:07:42.285854 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-xg4wp" Nov 29 08:07:42 crc kubenswrapper[4795]: I1129 08:07:42.285881 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 29 08:07:42 crc kubenswrapper[4795]: I1129 08:07:42.292116 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-2r5cb"] Nov 29 08:07:42 crc kubenswrapper[4795]: I1129 08:07:42.367497 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee6363b5-2fea-42a6-94fd-748c7c4c3e66-combined-ca-bundle\") pod \"aodh-db-sync-2r5cb\" (UID: \"ee6363b5-2fea-42a6-94fd-748c7c4c3e66\") " pod="openstack/aodh-db-sync-2r5cb" Nov 29 08:07:42 crc kubenswrapper[4795]: I1129 08:07:42.368172 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee6363b5-2fea-42a6-94fd-748c7c4c3e66-scripts\") pod \"aodh-db-sync-2r5cb\" (UID: \"ee6363b5-2fea-42a6-94fd-748c7c4c3e66\") " pod="openstack/aodh-db-sync-2r5cb" Nov 29 08:07:42 crc kubenswrapper[4795]: I1129 08:07:42.368512 4795 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv6rq\" (UniqueName: \"kubernetes.io/projected/ee6363b5-2fea-42a6-94fd-748c7c4c3e66-kube-api-access-tv6rq\") pod \"aodh-db-sync-2r5cb\" (UID: \"ee6363b5-2fea-42a6-94fd-748c7c4c3e66\") " pod="openstack/aodh-db-sync-2r5cb" Nov 29 08:07:42 crc kubenswrapper[4795]: I1129 08:07:42.368797 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee6363b5-2fea-42a6-94fd-748c7c4c3e66-config-data\") pod \"aodh-db-sync-2r5cb\" (UID: \"ee6363b5-2fea-42a6-94fd-748c7c4c3e66\") " pod="openstack/aodh-db-sync-2r5cb" Nov 29 08:07:42 crc kubenswrapper[4795]: I1129 08:07:42.471417 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee6363b5-2fea-42a6-94fd-748c7c4c3e66-scripts\") pod \"aodh-db-sync-2r5cb\" (UID: \"ee6363b5-2fea-42a6-94fd-748c7c4c3e66\") " pod="openstack/aodh-db-sync-2r5cb" Nov 29 08:07:42 crc kubenswrapper[4795]: I1129 08:07:42.471515 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv6rq\" (UniqueName: \"kubernetes.io/projected/ee6363b5-2fea-42a6-94fd-748c7c4c3e66-kube-api-access-tv6rq\") pod \"aodh-db-sync-2r5cb\" (UID: \"ee6363b5-2fea-42a6-94fd-748c7c4c3e66\") " pod="openstack/aodh-db-sync-2r5cb" Nov 29 08:07:42 crc kubenswrapper[4795]: I1129 08:07:42.471580 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee6363b5-2fea-42a6-94fd-748c7c4c3e66-config-data\") pod \"aodh-db-sync-2r5cb\" (UID: \"ee6363b5-2fea-42a6-94fd-748c7c4c3e66\") " pod="openstack/aodh-db-sync-2r5cb" Nov 29 08:07:42 crc kubenswrapper[4795]: I1129 08:07:42.471735 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ee6363b5-2fea-42a6-94fd-748c7c4c3e66-combined-ca-bundle\") pod \"aodh-db-sync-2r5cb\" (UID: \"ee6363b5-2fea-42a6-94fd-748c7c4c3e66\") " pod="openstack/aodh-db-sync-2r5cb" Nov 29 08:07:42 crc kubenswrapper[4795]: I1129 08:07:42.478375 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee6363b5-2fea-42a6-94fd-748c7c4c3e66-scripts\") pod \"aodh-db-sync-2r5cb\" (UID: \"ee6363b5-2fea-42a6-94fd-748c7c4c3e66\") " pod="openstack/aodh-db-sync-2r5cb" Nov 29 08:07:42 crc kubenswrapper[4795]: I1129 08:07:42.478634 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee6363b5-2fea-42a6-94fd-748c7c4c3e66-combined-ca-bundle\") pod \"aodh-db-sync-2r5cb\" (UID: \"ee6363b5-2fea-42a6-94fd-748c7c4c3e66\") " pod="openstack/aodh-db-sync-2r5cb" Nov 29 08:07:42 crc kubenswrapper[4795]: I1129 08:07:42.478835 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee6363b5-2fea-42a6-94fd-748c7c4c3e66-config-data\") pod \"aodh-db-sync-2r5cb\" (UID: \"ee6363b5-2fea-42a6-94fd-748c7c4c3e66\") " pod="openstack/aodh-db-sync-2r5cb" Nov 29 08:07:42 crc kubenswrapper[4795]: I1129 08:07:42.494606 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv6rq\" (UniqueName: \"kubernetes.io/projected/ee6363b5-2fea-42a6-94fd-748c7c4c3e66-kube-api-access-tv6rq\") pod \"aodh-db-sync-2r5cb\" (UID: \"ee6363b5-2fea-42a6-94fd-748c7c4c3e66\") " pod="openstack/aodh-db-sync-2r5cb" Nov 29 08:07:42 crc kubenswrapper[4795]: I1129 08:07:42.613480 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-2r5cb" Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.114879 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-2r5cb"] Nov 29 08:07:43 crc kubenswrapper[4795]: W1129 08:07:43.137832 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee6363b5_2fea_42a6_94fd_748c7c4c3e66.slice/crio-7e0ff3cc8fe1a9c5d2327797259ba08f18cafdbb645faaadec6c7054f3ff7dc6 WatchSource:0}: Error finding container 7e0ff3cc8fe1a9c5d2327797259ba08f18cafdbb645faaadec6c7054f3ff7dc6: Status 404 returned error can't find the container with id 7e0ff3cc8fe1a9c5d2327797259ba08f18cafdbb645faaadec6c7054f3ff7dc6 Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.404640 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.494901 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d34ca31-2fe0-41cc-b563-25e471286134-config-data\") pod \"4d34ca31-2fe0-41cc-b563-25e471286134\" (UID: \"4d34ca31-2fe0-41cc-b563-25e471286134\") " Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.494985 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d34ca31-2fe0-41cc-b563-25e471286134-scripts\") pod \"4d34ca31-2fe0-41cc-b563-25e471286134\" (UID: \"4d34ca31-2fe0-41cc-b563-25e471286134\") " Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.495092 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4d34ca31-2fe0-41cc-b563-25e471286134-sg-core-conf-yaml\") pod \"4d34ca31-2fe0-41cc-b563-25e471286134\" (UID: \"4d34ca31-2fe0-41cc-b563-25e471286134\") " Nov 29 08:07:43 crc 
kubenswrapper[4795]: I1129 08:07:43.495131 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d34ca31-2fe0-41cc-b563-25e471286134-combined-ca-bundle\") pod \"4d34ca31-2fe0-41cc-b563-25e471286134\" (UID: \"4d34ca31-2fe0-41cc-b563-25e471286134\") " Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.495216 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d34ca31-2fe0-41cc-b563-25e471286134-log-httpd\") pod \"4d34ca31-2fe0-41cc-b563-25e471286134\" (UID: \"4d34ca31-2fe0-41cc-b563-25e471286134\") " Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.495264 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqxrv\" (UniqueName: \"kubernetes.io/projected/4d34ca31-2fe0-41cc-b563-25e471286134-kube-api-access-mqxrv\") pod \"4d34ca31-2fe0-41cc-b563-25e471286134\" (UID: \"4d34ca31-2fe0-41cc-b563-25e471286134\") " Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.495281 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d34ca31-2fe0-41cc-b563-25e471286134-run-httpd\") pod \"4d34ca31-2fe0-41cc-b563-25e471286134\" (UID: \"4d34ca31-2fe0-41cc-b563-25e471286134\") " Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.496284 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d34ca31-2fe0-41cc-b563-25e471286134-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4d34ca31-2fe0-41cc-b563-25e471286134" (UID: "4d34ca31-2fe0-41cc-b563-25e471286134"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.498076 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d34ca31-2fe0-41cc-b563-25e471286134-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4d34ca31-2fe0-41cc-b563-25e471286134" (UID: "4d34ca31-2fe0-41cc-b563-25e471286134"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.506492 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d34ca31-2fe0-41cc-b563-25e471286134-scripts" (OuterVolumeSpecName: "scripts") pod "4d34ca31-2fe0-41cc-b563-25e471286134" (UID: "4d34ca31-2fe0-41cc-b563-25e471286134"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.506551 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d34ca31-2fe0-41cc-b563-25e471286134-kube-api-access-mqxrv" (OuterVolumeSpecName: "kube-api-access-mqxrv") pod "4d34ca31-2fe0-41cc-b563-25e471286134" (UID: "4d34ca31-2fe0-41cc-b563-25e471286134"). InnerVolumeSpecName "kube-api-access-mqxrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.549820 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d34ca31-2fe0-41cc-b563-25e471286134-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4d34ca31-2fe0-41cc-b563-25e471286134" (UID: "4d34ca31-2fe0-41cc-b563-25e471286134"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.598113 4795 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d34ca31-2fe0-41cc-b563-25e471286134-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.598147 4795 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4d34ca31-2fe0-41cc-b563-25e471286134-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.598162 4795 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d34ca31-2fe0-41cc-b563-25e471286134-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.598196 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqxrv\" (UniqueName: \"kubernetes.io/projected/4d34ca31-2fe0-41cc-b563-25e471286134-kube-api-access-mqxrv\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.598212 4795 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d34ca31-2fe0-41cc-b563-25e471286134-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.631556 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d34ca31-2fe0-41cc-b563-25e471286134-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d34ca31-2fe0-41cc-b563-25e471286134" (UID: "4d34ca31-2fe0-41cc-b563-25e471286134"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.654300 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d34ca31-2fe0-41cc-b563-25e471286134-config-data" (OuterVolumeSpecName: "config-data") pod "4d34ca31-2fe0-41cc-b563-25e471286134" (UID: "4d34ca31-2fe0-41cc-b563-25e471286134"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.701372 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d34ca31-2fe0-41cc-b563-25e471286134-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.701430 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d34ca31-2fe0-41cc-b563-25e471286134-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.813288 4795 generic.go:334] "Generic (PLEG): container finished" podID="4d34ca31-2fe0-41cc-b563-25e471286134" containerID="a47fc11243059ba0c4162c939c0e8fa204dbd30deebbfc7d903a431d8bc13b59" exitCode=0 Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.813357 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d34ca31-2fe0-41cc-b563-25e471286134","Type":"ContainerDied","Data":"a47fc11243059ba0c4162c939c0e8fa204dbd30deebbfc7d903a431d8bc13b59"} Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.813405 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.813425 4795 scope.go:117] "RemoveContainer" containerID="93e013cce7f83a54feb3260e708bf3c1ee4ecda0abaed248d6d0192955816554" Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.813413 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d34ca31-2fe0-41cc-b563-25e471286134","Type":"ContainerDied","Data":"4f1dcd0e40538fbb5ac358672c1a8127681884dc5052b124460e97f11ebd60f4"} Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.819665 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-2r5cb" event={"ID":"ee6363b5-2fea-42a6-94fd-748c7c4c3e66","Type":"ContainerStarted","Data":"7e0ff3cc8fe1a9c5d2327797259ba08f18cafdbb645faaadec6c7054f3ff7dc6"} Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.844717 4795 scope.go:117] "RemoveContainer" containerID="3147142b65b37b65b7f2c4338bb6558b3eb4c5ffb738893a3092da3e32b2377f" Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.864158 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.885317 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.898505 4795 scope.go:117] "RemoveContainer" containerID="bd299d9125f757d8dcf9d89dddc569e1df6d34fc3286b8bb738324d6361e32a6" Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.908092 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:07:43 crc kubenswrapper[4795]: E1129 08:07:43.908810 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d34ca31-2fe0-41cc-b563-25e471286134" containerName="proxy-httpd" Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.908833 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d34ca31-2fe0-41cc-b563-25e471286134" 
containerName="proxy-httpd" Nov 29 08:07:43 crc kubenswrapper[4795]: E1129 08:07:43.908856 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d34ca31-2fe0-41cc-b563-25e471286134" containerName="sg-core" Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.908862 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d34ca31-2fe0-41cc-b563-25e471286134" containerName="sg-core" Nov 29 08:07:43 crc kubenswrapper[4795]: E1129 08:07:43.908903 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d34ca31-2fe0-41cc-b563-25e471286134" containerName="ceilometer-central-agent" Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.908909 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d34ca31-2fe0-41cc-b563-25e471286134" containerName="ceilometer-central-agent" Nov 29 08:07:43 crc kubenswrapper[4795]: E1129 08:07:43.908919 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d34ca31-2fe0-41cc-b563-25e471286134" containerName="ceilometer-notification-agent" Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.908926 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d34ca31-2fe0-41cc-b563-25e471286134" containerName="ceilometer-notification-agent" Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.909161 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d34ca31-2fe0-41cc-b563-25e471286134" containerName="ceilometer-notification-agent" Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.909185 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d34ca31-2fe0-41cc-b563-25e471286134" containerName="sg-core" Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.909207 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d34ca31-2fe0-41cc-b563-25e471286134" containerName="proxy-httpd" Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.909216 4795 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4d34ca31-2fe0-41cc-b563-25e471286134" containerName="ceilometer-central-agent" Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.911898 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.918003 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.918449 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.921124 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.969341 4795 scope.go:117] "RemoveContainer" containerID="a47fc11243059ba0c4162c939c0e8fa204dbd30deebbfc7d903a431d8bc13b59" Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.997931 4795 scope.go:117] "RemoveContainer" containerID="93e013cce7f83a54feb3260e708bf3c1ee4ecda0abaed248d6d0192955816554" Nov 29 08:07:43 crc kubenswrapper[4795]: E1129 08:07:43.998357 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93e013cce7f83a54feb3260e708bf3c1ee4ecda0abaed248d6d0192955816554\": container with ID starting with 93e013cce7f83a54feb3260e708bf3c1ee4ecda0abaed248d6d0192955816554 not found: ID does not exist" containerID="93e013cce7f83a54feb3260e708bf3c1ee4ecda0abaed248d6d0192955816554" Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.998416 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93e013cce7f83a54feb3260e708bf3c1ee4ecda0abaed248d6d0192955816554"} err="failed to get container status \"93e013cce7f83a54feb3260e708bf3c1ee4ecda0abaed248d6d0192955816554\": rpc error: code = NotFound desc = could not find container 
\"93e013cce7f83a54feb3260e708bf3c1ee4ecda0abaed248d6d0192955816554\": container with ID starting with 93e013cce7f83a54feb3260e708bf3c1ee4ecda0abaed248d6d0192955816554 not found: ID does not exist" Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.998439 4795 scope.go:117] "RemoveContainer" containerID="3147142b65b37b65b7f2c4338bb6558b3eb4c5ffb738893a3092da3e32b2377f" Nov 29 08:07:43 crc kubenswrapper[4795]: E1129 08:07:43.998814 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3147142b65b37b65b7f2c4338bb6558b3eb4c5ffb738893a3092da3e32b2377f\": container with ID starting with 3147142b65b37b65b7f2c4338bb6558b3eb4c5ffb738893a3092da3e32b2377f not found: ID does not exist" containerID="3147142b65b37b65b7f2c4338bb6558b3eb4c5ffb738893a3092da3e32b2377f" Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.998834 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3147142b65b37b65b7f2c4338bb6558b3eb4c5ffb738893a3092da3e32b2377f"} err="failed to get container status \"3147142b65b37b65b7f2c4338bb6558b3eb4c5ffb738893a3092da3e32b2377f\": rpc error: code = NotFound desc = could not find container \"3147142b65b37b65b7f2c4338bb6558b3eb4c5ffb738893a3092da3e32b2377f\": container with ID starting with 3147142b65b37b65b7f2c4338bb6558b3eb4c5ffb738893a3092da3e32b2377f not found: ID does not exist" Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.998850 4795 scope.go:117] "RemoveContainer" containerID="bd299d9125f757d8dcf9d89dddc569e1df6d34fc3286b8bb738324d6361e32a6" Nov 29 08:07:43 crc kubenswrapper[4795]: E1129 08:07:43.999095 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd299d9125f757d8dcf9d89dddc569e1df6d34fc3286b8bb738324d6361e32a6\": container with ID starting with bd299d9125f757d8dcf9d89dddc569e1df6d34fc3286b8bb738324d6361e32a6 not found: ID does not exist" 
containerID="bd299d9125f757d8dcf9d89dddc569e1df6d34fc3286b8bb738324d6361e32a6" Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.999113 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd299d9125f757d8dcf9d89dddc569e1df6d34fc3286b8bb738324d6361e32a6"} err="failed to get container status \"bd299d9125f757d8dcf9d89dddc569e1df6d34fc3286b8bb738324d6361e32a6\": rpc error: code = NotFound desc = could not find container \"bd299d9125f757d8dcf9d89dddc569e1df6d34fc3286b8bb738324d6361e32a6\": container with ID starting with bd299d9125f757d8dcf9d89dddc569e1df6d34fc3286b8bb738324d6361e32a6 not found: ID does not exist" Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.999126 4795 scope.go:117] "RemoveContainer" containerID="a47fc11243059ba0c4162c939c0e8fa204dbd30deebbfc7d903a431d8bc13b59" Nov 29 08:07:43 crc kubenswrapper[4795]: E1129 08:07:43.999348 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a47fc11243059ba0c4162c939c0e8fa204dbd30deebbfc7d903a431d8bc13b59\": container with ID starting with a47fc11243059ba0c4162c939c0e8fa204dbd30deebbfc7d903a431d8bc13b59 not found: ID does not exist" containerID="a47fc11243059ba0c4162c939c0e8fa204dbd30deebbfc7d903a431d8bc13b59" Nov 29 08:07:43 crc kubenswrapper[4795]: I1129 08:07:43.999368 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a47fc11243059ba0c4162c939c0e8fa204dbd30deebbfc7d903a431d8bc13b59"} err="failed to get container status \"a47fc11243059ba0c4162c939c0e8fa204dbd30deebbfc7d903a431d8bc13b59\": rpc error: code = NotFound desc = could not find container \"a47fc11243059ba0c4162c939c0e8fa204dbd30deebbfc7d903a431d8bc13b59\": container with ID starting with a47fc11243059ba0c4162c939c0e8fa204dbd30deebbfc7d903a431d8bc13b59 not found: ID does not exist" Nov 29 08:07:44 crc kubenswrapper[4795]: I1129 08:07:44.012152 4795 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18b04c88-379f-448d-8443-dbfa7994e228-config-data\") pod \"ceilometer-0\" (UID: \"18b04c88-379f-448d-8443-dbfa7994e228\") " pod="openstack/ceilometer-0" Nov 29 08:07:44 crc kubenswrapper[4795]: I1129 08:07:44.012337 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18b04c88-379f-448d-8443-dbfa7994e228-log-httpd\") pod \"ceilometer-0\" (UID: \"18b04c88-379f-448d-8443-dbfa7994e228\") " pod="openstack/ceilometer-0" Nov 29 08:07:44 crc kubenswrapper[4795]: I1129 08:07:44.012472 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18b04c88-379f-448d-8443-dbfa7994e228-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"18b04c88-379f-448d-8443-dbfa7994e228\") " pod="openstack/ceilometer-0" Nov 29 08:07:44 crc kubenswrapper[4795]: I1129 08:07:44.012522 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/18b04c88-379f-448d-8443-dbfa7994e228-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"18b04c88-379f-448d-8443-dbfa7994e228\") " pod="openstack/ceilometer-0" Nov 29 08:07:44 crc kubenswrapper[4795]: I1129 08:07:44.012610 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18b04c88-379f-448d-8443-dbfa7994e228-scripts\") pod \"ceilometer-0\" (UID: \"18b04c88-379f-448d-8443-dbfa7994e228\") " pod="openstack/ceilometer-0" Nov 29 08:07:44 crc kubenswrapper[4795]: I1129 08:07:44.012630 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/18b04c88-379f-448d-8443-dbfa7994e228-run-httpd\") pod \"ceilometer-0\" (UID: \"18b04c88-379f-448d-8443-dbfa7994e228\") " pod="openstack/ceilometer-0" Nov 29 08:07:44 crc kubenswrapper[4795]: I1129 08:07:44.012736 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t27b7\" (UniqueName: \"kubernetes.io/projected/18b04c88-379f-448d-8443-dbfa7994e228-kube-api-access-t27b7\") pod \"ceilometer-0\" (UID: \"18b04c88-379f-448d-8443-dbfa7994e228\") " pod="openstack/ceilometer-0" Nov 29 08:07:44 crc kubenswrapper[4795]: I1129 08:07:44.114992 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18b04c88-379f-448d-8443-dbfa7994e228-config-data\") pod \"ceilometer-0\" (UID: \"18b04c88-379f-448d-8443-dbfa7994e228\") " pod="openstack/ceilometer-0" Nov 29 08:07:44 crc kubenswrapper[4795]: I1129 08:07:44.115087 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18b04c88-379f-448d-8443-dbfa7994e228-log-httpd\") pod \"ceilometer-0\" (UID: \"18b04c88-379f-448d-8443-dbfa7994e228\") " pod="openstack/ceilometer-0" Nov 29 08:07:44 crc kubenswrapper[4795]: I1129 08:07:44.115131 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18b04c88-379f-448d-8443-dbfa7994e228-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"18b04c88-379f-448d-8443-dbfa7994e228\") " pod="openstack/ceilometer-0" Nov 29 08:07:44 crc kubenswrapper[4795]: I1129 08:07:44.115156 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/18b04c88-379f-448d-8443-dbfa7994e228-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"18b04c88-379f-448d-8443-dbfa7994e228\") " pod="openstack/ceilometer-0" Nov 29 08:07:44 
crc kubenswrapper[4795]: I1129 08:07:44.115193 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18b04c88-379f-448d-8443-dbfa7994e228-run-httpd\") pod \"ceilometer-0\" (UID: \"18b04c88-379f-448d-8443-dbfa7994e228\") " pod="openstack/ceilometer-0" Nov 29 08:07:44 crc kubenswrapper[4795]: I1129 08:07:44.115209 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18b04c88-379f-448d-8443-dbfa7994e228-scripts\") pod \"ceilometer-0\" (UID: \"18b04c88-379f-448d-8443-dbfa7994e228\") " pod="openstack/ceilometer-0" Nov 29 08:07:44 crc kubenswrapper[4795]: I1129 08:07:44.115250 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t27b7\" (UniqueName: \"kubernetes.io/projected/18b04c88-379f-448d-8443-dbfa7994e228-kube-api-access-t27b7\") pod \"ceilometer-0\" (UID: \"18b04c88-379f-448d-8443-dbfa7994e228\") " pod="openstack/ceilometer-0" Nov 29 08:07:44 crc kubenswrapper[4795]: I1129 08:07:44.117075 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18b04c88-379f-448d-8443-dbfa7994e228-log-httpd\") pod \"ceilometer-0\" (UID: \"18b04c88-379f-448d-8443-dbfa7994e228\") " pod="openstack/ceilometer-0" Nov 29 08:07:44 crc kubenswrapper[4795]: I1129 08:07:44.118362 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18b04c88-379f-448d-8443-dbfa7994e228-run-httpd\") pod \"ceilometer-0\" (UID: \"18b04c88-379f-448d-8443-dbfa7994e228\") " pod="openstack/ceilometer-0" Nov 29 08:07:44 crc kubenswrapper[4795]: I1129 08:07:44.121322 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18b04c88-379f-448d-8443-dbfa7994e228-combined-ca-bundle\") pod \"ceilometer-0\" 
(UID: \"18b04c88-379f-448d-8443-dbfa7994e228\") " pod="openstack/ceilometer-0" Nov 29 08:07:44 crc kubenswrapper[4795]: I1129 08:07:44.135368 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t27b7\" (UniqueName: \"kubernetes.io/projected/18b04c88-379f-448d-8443-dbfa7994e228-kube-api-access-t27b7\") pod \"ceilometer-0\" (UID: \"18b04c88-379f-448d-8443-dbfa7994e228\") " pod="openstack/ceilometer-0" Nov 29 08:07:44 crc kubenswrapper[4795]: I1129 08:07:44.138770 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18b04c88-379f-448d-8443-dbfa7994e228-config-data\") pod \"ceilometer-0\" (UID: \"18b04c88-379f-448d-8443-dbfa7994e228\") " pod="openstack/ceilometer-0" Nov 29 08:07:44 crc kubenswrapper[4795]: I1129 08:07:44.139083 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/18b04c88-379f-448d-8443-dbfa7994e228-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"18b04c88-379f-448d-8443-dbfa7994e228\") " pod="openstack/ceilometer-0" Nov 29 08:07:44 crc kubenswrapper[4795]: I1129 08:07:44.139774 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18b04c88-379f-448d-8443-dbfa7994e228-scripts\") pod \"ceilometer-0\" (UID: \"18b04c88-379f-448d-8443-dbfa7994e228\") " pod="openstack/ceilometer-0" Nov 29 08:07:44 crc kubenswrapper[4795]: I1129 08:07:44.244281 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:07:44 crc kubenswrapper[4795]: I1129 08:07:44.301238 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d34ca31-2fe0-41cc-b563-25e471286134" path="/var/lib/kubelet/pods/4d34ca31-2fe0-41cc-b563-25e471286134/volumes" Nov 29 08:07:44 crc kubenswrapper[4795]: I1129 08:07:44.768057 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:07:44 crc kubenswrapper[4795]: I1129 08:07:44.835619 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18b04c88-379f-448d-8443-dbfa7994e228","Type":"ContainerStarted","Data":"eeba9281b4c59f57287b1161f11a9655c54251b4c3cc16ebfb35e656923b66c2"} Nov 29 08:07:48 crc kubenswrapper[4795]: I1129 08:07:48.892169 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18b04c88-379f-448d-8443-dbfa7994e228","Type":"ContainerStarted","Data":"acadca80a057d6477b9cb3e2ba412b9341aac67013024dd5dc88de8a257b99dd"} Nov 29 08:07:48 crc kubenswrapper[4795]: I1129 08:07:48.892752 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18b04c88-379f-448d-8443-dbfa7994e228","Type":"ContainerStarted","Data":"5d8b40155304018bf6ca1ecd5005f2e907e8913c4e5b3f2ac8afc23ad087375b"} Nov 29 08:07:48 crc kubenswrapper[4795]: I1129 08:07:48.893460 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-2r5cb" event={"ID":"ee6363b5-2fea-42a6-94fd-748c7c4c3e66","Type":"ContainerStarted","Data":"9e4c6d901bad313d8ed61f0a899303ff524b3f193979e2ded8a07c8b2a277a31"} Nov 29 08:07:48 crc kubenswrapper[4795]: I1129 08:07:48.914870 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-2r5cb" podStartSLOduration=2.418988434 podStartE2EDuration="6.91484677s" podCreationTimestamp="2025-11-29 08:07:42 +0000 UTC" firstStartedPulling="2025-11-29 08:07:43.143629232 
+0000 UTC m=+1709.119205022" lastFinishedPulling="2025-11-29 08:07:47.639487568 +0000 UTC m=+1713.615063358" observedRunningTime="2025-11-29 08:07:48.906200325 +0000 UTC m=+1714.881776115" watchObservedRunningTime="2025-11-29 08:07:48.91484677 +0000 UTC m=+1714.890422560" Nov 29 08:07:49 crc kubenswrapper[4795]: I1129 08:07:49.228680 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 29 08:07:49 crc kubenswrapper[4795]: I1129 08:07:49.906019 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18b04c88-379f-448d-8443-dbfa7994e228","Type":"ContainerStarted","Data":"e6f354fa94c5554bbf1c4808a21431dbb9b715822ccecb0500ca3bb734a54296"} Nov 29 08:07:49 crc kubenswrapper[4795]: I1129 08:07:49.995036 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-wx6m7"] Nov 29 08:07:49 crc kubenswrapper[4795]: I1129 08:07:49.996981 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-wx6m7" Nov 29 08:07:49 crc kubenswrapper[4795]: I1129 08:07:49.999634 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 29 08:07:49 crc kubenswrapper[4795]: I1129 08:07:49.999841 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.039533 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-wx6m7"] Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.088081 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc47eb1-ee22-476b-92c2-4ccb500fe572-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-wx6m7\" (UID: \"7fc47eb1-ee22-476b-92c2-4ccb500fe572\") " pod="openstack/nova-cell0-cell-mapping-wx6m7" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.088169 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fc47eb1-ee22-476b-92c2-4ccb500fe572-scripts\") pod \"nova-cell0-cell-mapping-wx6m7\" (UID: \"7fc47eb1-ee22-476b-92c2-4ccb500fe572\") " pod="openstack/nova-cell0-cell-mapping-wx6m7" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.088212 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fc47eb1-ee22-476b-92c2-4ccb500fe572-config-data\") pod \"nova-cell0-cell-mapping-wx6m7\" (UID: \"7fc47eb1-ee22-476b-92c2-4ccb500fe572\") " pod="openstack/nova-cell0-cell-mapping-wx6m7" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.088334 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g557\" (UniqueName: 
\"kubernetes.io/projected/7fc47eb1-ee22-476b-92c2-4ccb500fe572-kube-api-access-2g557\") pod \"nova-cell0-cell-mapping-wx6m7\" (UID: \"7fc47eb1-ee22-476b-92c2-4ccb500fe572\") " pod="openstack/nova-cell0-cell-mapping-wx6m7" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.155031 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.157198 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.164282 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.179967 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.190477 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc47eb1-ee22-476b-92c2-4ccb500fe572-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-wx6m7\" (UID: \"7fc47eb1-ee22-476b-92c2-4ccb500fe572\") " pod="openstack/nova-cell0-cell-mapping-wx6m7" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.190554 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fc47eb1-ee22-476b-92c2-4ccb500fe572-scripts\") pod \"nova-cell0-cell-mapping-wx6m7\" (UID: \"7fc47eb1-ee22-476b-92c2-4ccb500fe572\") " pod="openstack/nova-cell0-cell-mapping-wx6m7" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.190607 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fc47eb1-ee22-476b-92c2-4ccb500fe572-config-data\") pod \"nova-cell0-cell-mapping-wx6m7\" (UID: \"7fc47eb1-ee22-476b-92c2-4ccb500fe572\") " pod="openstack/nova-cell0-cell-mapping-wx6m7" Nov 
29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.190685 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g557\" (UniqueName: \"kubernetes.io/projected/7fc47eb1-ee22-476b-92c2-4ccb500fe572-kube-api-access-2g557\") pod \"nova-cell0-cell-mapping-wx6m7\" (UID: \"7fc47eb1-ee22-476b-92c2-4ccb500fe572\") " pod="openstack/nova-cell0-cell-mapping-wx6m7" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.198824 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fc47eb1-ee22-476b-92c2-4ccb500fe572-config-data\") pod \"nova-cell0-cell-mapping-wx6m7\" (UID: \"7fc47eb1-ee22-476b-92c2-4ccb500fe572\") " pod="openstack/nova-cell0-cell-mapping-wx6m7" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.207266 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc47eb1-ee22-476b-92c2-4ccb500fe572-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-wx6m7\" (UID: \"7fc47eb1-ee22-476b-92c2-4ccb500fe572\") " pod="openstack/nova-cell0-cell-mapping-wx6m7" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.222649 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.224680 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.226109 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fc47eb1-ee22-476b-92c2-4ccb500fe572-scripts\") pod \"nova-cell0-cell-mapping-wx6m7\" (UID: \"7fc47eb1-ee22-476b-92c2-4ccb500fe572\") " pod="openstack/nova-cell0-cell-mapping-wx6m7" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.237412 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.238748 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.243220 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g557\" (UniqueName: \"kubernetes.io/projected/7fc47eb1-ee22-476b-92c2-4ccb500fe572-kube-api-access-2g557\") pod \"nova-cell0-cell-mapping-wx6m7\" (UID: \"7fc47eb1-ee22-476b-92c2-4ccb500fe572\") " pod="openstack/nova-cell0-cell-mapping-wx6m7" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.297189 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c93ca60-c407-48a8-888b-d3e600653dc7-config-data\") pod \"nova-metadata-0\" (UID: \"2c93ca60-c407-48a8-888b-d3e600653dc7\") " pod="openstack/nova-metadata-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.297252 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d10ef1f-3b22-4019-a9e6-5ae072648212-logs\") pod \"nova-api-0\" (UID: \"6d10ef1f-3b22-4019-a9e6-5ae072648212\") " pod="openstack/nova-api-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.297300 4795 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d10ef1f-3b22-4019-a9e6-5ae072648212-config-data\") pod \"nova-api-0\" (UID: \"6d10ef1f-3b22-4019-a9e6-5ae072648212\") " pod="openstack/nova-api-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.297358 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c93ca60-c407-48a8-888b-d3e600653dc7-logs\") pod \"nova-metadata-0\" (UID: \"2c93ca60-c407-48a8-888b-d3e600653dc7\") " pod="openstack/nova-metadata-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.297380 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpsgq\" (UniqueName: \"kubernetes.io/projected/6d10ef1f-3b22-4019-a9e6-5ae072648212-kube-api-access-gpsgq\") pod \"nova-api-0\" (UID: \"6d10ef1f-3b22-4019-a9e6-5ae072648212\") " pod="openstack/nova-api-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.297418 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmlt5\" (UniqueName: \"kubernetes.io/projected/2c93ca60-c407-48a8-888b-d3e600653dc7-kube-api-access-xmlt5\") pod \"nova-metadata-0\" (UID: \"2c93ca60-c407-48a8-888b-d3e600653dc7\") " pod="openstack/nova-metadata-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.297435 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c93ca60-c407-48a8-888b-d3e600653dc7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2c93ca60-c407-48a8-888b-d3e600653dc7\") " pod="openstack/nova-metadata-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.297530 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6d10ef1f-3b22-4019-a9e6-5ae072648212-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6d10ef1f-3b22-4019-a9e6-5ae072648212\") " pod="openstack/nova-api-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.318640 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-wx6m7" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.407307 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d10ef1f-3b22-4019-a9e6-5ae072648212-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6d10ef1f-3b22-4019-a9e6-5ae072648212\") " pod="openstack/nova-api-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.407388 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c93ca60-c407-48a8-888b-d3e600653dc7-config-data\") pod \"nova-metadata-0\" (UID: \"2c93ca60-c407-48a8-888b-d3e600653dc7\") " pod="openstack/nova-metadata-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.407429 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d10ef1f-3b22-4019-a9e6-5ae072648212-logs\") pod \"nova-api-0\" (UID: \"6d10ef1f-3b22-4019-a9e6-5ae072648212\") " pod="openstack/nova-api-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.407488 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d10ef1f-3b22-4019-a9e6-5ae072648212-config-data\") pod \"nova-api-0\" (UID: \"6d10ef1f-3b22-4019-a9e6-5ae072648212\") " pod="openstack/nova-api-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.407584 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c93ca60-c407-48a8-888b-d3e600653dc7-logs\") pod 
\"nova-metadata-0\" (UID: \"2c93ca60-c407-48a8-888b-d3e600653dc7\") " pod="openstack/nova-metadata-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.407639 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpsgq\" (UniqueName: \"kubernetes.io/projected/6d10ef1f-3b22-4019-a9e6-5ae072648212-kube-api-access-gpsgq\") pod \"nova-api-0\" (UID: \"6d10ef1f-3b22-4019-a9e6-5ae072648212\") " pod="openstack/nova-api-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.407702 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmlt5\" (UniqueName: \"kubernetes.io/projected/2c93ca60-c407-48a8-888b-d3e600653dc7-kube-api-access-xmlt5\") pod \"nova-metadata-0\" (UID: \"2c93ca60-c407-48a8-888b-d3e600653dc7\") " pod="openstack/nova-metadata-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.407722 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c93ca60-c407-48a8-888b-d3e600653dc7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2c93ca60-c407-48a8-888b-d3e600653dc7\") " pod="openstack/nova-metadata-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.413292 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d10ef1f-3b22-4019-a9e6-5ae072648212-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6d10ef1f-3b22-4019-a9e6-5ae072648212\") " pod="openstack/nova-api-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.422549 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d10ef1f-3b22-4019-a9e6-5ae072648212-logs\") pod \"nova-api-0\" (UID: \"6d10ef1f-3b22-4019-a9e6-5ae072648212\") " pod="openstack/nova-api-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.443412 4795 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c93ca60-c407-48a8-888b-d3e600653dc7-logs\") pod \"nova-metadata-0\" (UID: \"2c93ca60-c407-48a8-888b-d3e600653dc7\") " pod="openstack/nova-metadata-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.458298 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c93ca60-c407-48a8-888b-d3e600653dc7-config-data\") pod \"nova-metadata-0\" (UID: \"2c93ca60-c407-48a8-888b-d3e600653dc7\") " pod="openstack/nova-metadata-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.464432 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d10ef1f-3b22-4019-a9e6-5ae072648212-config-data\") pod \"nova-api-0\" (UID: \"6d10ef1f-3b22-4019-a9e6-5ae072648212\") " pod="openstack/nova-api-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.464934 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c93ca60-c407-48a8-888b-d3e600653dc7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2c93ca60-c407-48a8-888b-d3e600653dc7\") " pod="openstack/nova-metadata-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.466787 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpsgq\" (UniqueName: \"kubernetes.io/projected/6d10ef1f-3b22-4019-a9e6-5ae072648212-kube-api-access-gpsgq\") pod \"nova-api-0\" (UID: \"6d10ef1f-3b22-4019-a9e6-5ae072648212\") " pod="openstack/nova-api-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.469359 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmlt5\" (UniqueName: \"kubernetes.io/projected/2c93ca60-c407-48a8-888b-d3e600653dc7-kube-api-access-xmlt5\") pod \"nova-metadata-0\" (UID: \"2c93ca60-c407-48a8-888b-d3e600653dc7\") " 
pod="openstack/nova-metadata-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.487460 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.489466 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.492262 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.509692 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d4ea06-dad6-4b2a-ace8-b87dbe308e40-config-data\") pod \"nova-scheduler-0\" (UID: \"92d4ea06-dad6-4b2a-ace8-b87dbe308e40\") " pod="openstack/nova-scheduler-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.509824 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zzxr\" (UniqueName: \"kubernetes.io/projected/92d4ea06-dad6-4b2a-ace8-b87dbe308e40-kube-api-access-7zzxr\") pod \"nova-scheduler-0\" (UID: \"92d4ea06-dad6-4b2a-ace8-b87dbe308e40\") " pod="openstack/nova-scheduler-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.509948 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d4ea06-dad6-4b2a-ace8-b87dbe308e40-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"92d4ea06-dad6-4b2a-ace8-b87dbe308e40\") " pod="openstack/nova-scheduler-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.528210 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.544504 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.583527 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.643690 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zzxr\" (UniqueName: \"kubernetes.io/projected/92d4ea06-dad6-4b2a-ace8-b87dbe308e40-kube-api-access-7zzxr\") pod \"nova-scheduler-0\" (UID: \"92d4ea06-dad6-4b2a-ace8-b87dbe308e40\") " pod="openstack/nova-scheduler-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.644609 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7877d89589-5t79v"] Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.650262 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d4ea06-dad6-4b2a-ace8-b87dbe308e40-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"92d4ea06-dad6-4b2a-ace8-b87dbe308e40\") " pod="openstack/nova-scheduler-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.650677 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d4ea06-dad6-4b2a-ace8-b87dbe308e40-config-data\") pod \"nova-scheduler-0\" (UID: \"92d4ea06-dad6-4b2a-ace8-b87dbe308e40\") " pod="openstack/nova-scheduler-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.656143 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-5t79v" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.656198 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d4ea06-dad6-4b2a-ace8-b87dbe308e40-config-data\") pod \"nova-scheduler-0\" (UID: \"92d4ea06-dad6-4b2a-ace8-b87dbe308e40\") " pod="openstack/nova-scheduler-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.670872 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zzxr\" (UniqueName: \"kubernetes.io/projected/92d4ea06-dad6-4b2a-ace8-b87dbe308e40-kube-api-access-7zzxr\") pod \"nova-scheduler-0\" (UID: \"92d4ea06-dad6-4b2a-ace8-b87dbe308e40\") " pod="openstack/nova-scheduler-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.671714 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d4ea06-dad6-4b2a-ace8-b87dbe308e40-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"92d4ea06-dad6-4b2a-ace8-b87dbe308e40\") " pod="openstack/nova-scheduler-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.743067 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-5t79v"] Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.761026 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8678c9b5-b7c3-4448-9051-17a60a1b92d6-dns-svc\") pod \"dnsmasq-dns-7877d89589-5t79v\" (UID: \"8678c9b5-b7c3-4448-9051-17a60a1b92d6\") " pod="openstack/dnsmasq-dns-7877d89589-5t79v" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.761226 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8678c9b5-b7c3-4448-9051-17a60a1b92d6-ovsdbserver-nb\") pod 
\"dnsmasq-dns-7877d89589-5t79v\" (UID: \"8678c9b5-b7c3-4448-9051-17a60a1b92d6\") " pod="openstack/dnsmasq-dns-7877d89589-5t79v" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.761251 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8678c9b5-b7c3-4448-9051-17a60a1b92d6-config\") pod \"dnsmasq-dns-7877d89589-5t79v\" (UID: \"8678c9b5-b7c3-4448-9051-17a60a1b92d6\") " pod="openstack/dnsmasq-dns-7877d89589-5t79v" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.761293 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2htx\" (UniqueName: \"kubernetes.io/projected/8678c9b5-b7c3-4448-9051-17a60a1b92d6-kube-api-access-x2htx\") pod \"dnsmasq-dns-7877d89589-5t79v\" (UID: \"8678c9b5-b7c3-4448-9051-17a60a1b92d6\") " pod="openstack/dnsmasq-dns-7877d89589-5t79v" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.761331 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8678c9b5-b7c3-4448-9051-17a60a1b92d6-dns-swift-storage-0\") pod \"dnsmasq-dns-7877d89589-5t79v\" (UID: \"8678c9b5-b7c3-4448-9051-17a60a1b92d6\") " pod="openstack/dnsmasq-dns-7877d89589-5t79v" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.761400 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8678c9b5-b7c3-4448-9051-17a60a1b92d6-ovsdbserver-sb\") pod \"dnsmasq-dns-7877d89589-5t79v\" (UID: \"8678c9b5-b7c3-4448-9051-17a60a1b92d6\") " pod="openstack/dnsmasq-dns-7877d89589-5t79v" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.783966 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.794231 4795 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.805087 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.808913 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.865217 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8678c9b5-b7c3-4448-9051-17a60a1b92d6-ovsdbserver-nb\") pod \"dnsmasq-dns-7877d89589-5t79v\" (UID: \"8678c9b5-b7c3-4448-9051-17a60a1b92d6\") " pod="openstack/dnsmasq-dns-7877d89589-5t79v" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.866368 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8678c9b5-b7c3-4448-9051-17a60a1b92d6-ovsdbserver-nb\") pod \"dnsmasq-dns-7877d89589-5t79v\" (UID: \"8678c9b5-b7c3-4448-9051-17a60a1b92d6\") " pod="openstack/dnsmasq-dns-7877d89589-5t79v" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.875360 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8678c9b5-b7c3-4448-9051-17a60a1b92d6-config\") pod \"dnsmasq-dns-7877d89589-5t79v\" (UID: \"8678c9b5-b7c3-4448-9051-17a60a1b92d6\") " pod="openstack/dnsmasq-dns-7877d89589-5t79v" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.875457 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2htx\" (UniqueName: \"kubernetes.io/projected/8678c9b5-b7c3-4448-9051-17a60a1b92d6-kube-api-access-x2htx\") pod \"dnsmasq-dns-7877d89589-5t79v\" (UID: \"8678c9b5-b7c3-4448-9051-17a60a1b92d6\") " pod="openstack/dnsmasq-dns-7877d89589-5t79v" Nov 
29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.875516 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8678c9b5-b7c3-4448-9051-17a60a1b92d6-dns-swift-storage-0\") pod \"dnsmasq-dns-7877d89589-5t79v\" (UID: \"8678c9b5-b7c3-4448-9051-17a60a1b92d6\") " pod="openstack/dnsmasq-dns-7877d89589-5t79v" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.875573 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8678c9b5-b7c3-4448-9051-17a60a1b92d6-ovsdbserver-sb\") pod \"dnsmasq-dns-7877d89589-5t79v\" (UID: \"8678c9b5-b7c3-4448-9051-17a60a1b92d6\") " pod="openstack/dnsmasq-dns-7877d89589-5t79v" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.875655 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l67xl\" (UniqueName: \"kubernetes.io/projected/3085d3da-ff4f-4b37-be0a-3f1754a7fbf0-kube-api-access-l67xl\") pod \"nova-cell1-novncproxy-0\" (UID: \"3085d3da-ff4f-4b37-be0a-3f1754a7fbf0\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.875890 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3085d3da-ff4f-4b37-be0a-3f1754a7fbf0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3085d3da-ff4f-4b37-be0a-3f1754a7fbf0\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.875966 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3085d3da-ff4f-4b37-be0a-3f1754a7fbf0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3085d3da-ff4f-4b37-be0a-3f1754a7fbf0\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 
08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.876036 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8678c9b5-b7c3-4448-9051-17a60a1b92d6-dns-svc\") pod \"dnsmasq-dns-7877d89589-5t79v\" (UID: \"8678c9b5-b7c3-4448-9051-17a60a1b92d6\") " pod="openstack/dnsmasq-dns-7877d89589-5t79v" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.876916 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8678c9b5-b7c3-4448-9051-17a60a1b92d6-dns-svc\") pod \"dnsmasq-dns-7877d89589-5t79v\" (UID: \"8678c9b5-b7c3-4448-9051-17a60a1b92d6\") " pod="openstack/dnsmasq-dns-7877d89589-5t79v" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.876999 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8678c9b5-b7c3-4448-9051-17a60a1b92d6-config\") pod \"dnsmasq-dns-7877d89589-5t79v\" (UID: \"8678c9b5-b7c3-4448-9051-17a60a1b92d6\") " pod="openstack/dnsmasq-dns-7877d89589-5t79v" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.881879 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8678c9b5-b7c3-4448-9051-17a60a1b92d6-dns-swift-storage-0\") pod \"dnsmasq-dns-7877d89589-5t79v\" (UID: \"8678c9b5-b7c3-4448-9051-17a60a1b92d6\") " pod="openstack/dnsmasq-dns-7877d89589-5t79v" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.889832 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8678c9b5-b7c3-4448-9051-17a60a1b92d6-ovsdbserver-sb\") pod \"dnsmasq-dns-7877d89589-5t79v\" (UID: \"8678c9b5-b7c3-4448-9051-17a60a1b92d6\") " pod="openstack/dnsmasq-dns-7877d89589-5t79v" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.905070 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.948361 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2htx\" (UniqueName: \"kubernetes.io/projected/8678c9b5-b7c3-4448-9051-17a60a1b92d6-kube-api-access-x2htx\") pod \"dnsmasq-dns-7877d89589-5t79v\" (UID: \"8678c9b5-b7c3-4448-9051-17a60a1b92d6\") " pod="openstack/dnsmasq-dns-7877d89589-5t79v" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.962950 4795 generic.go:334] "Generic (PLEG): container finished" podID="ee6363b5-2fea-42a6-94fd-748c7c4c3e66" containerID="9e4c6d901bad313d8ed61f0a899303ff524b3f193979e2ded8a07c8b2a277a31" exitCode=0 Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.962992 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-2r5cb" event={"ID":"ee6363b5-2fea-42a6-94fd-748c7c4c3e66","Type":"ContainerDied","Data":"9e4c6d901bad313d8ed61f0a899303ff524b3f193979e2ded8a07c8b2a277a31"} Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.987257 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l67xl\" (UniqueName: \"kubernetes.io/projected/3085d3da-ff4f-4b37-be0a-3f1754a7fbf0-kube-api-access-l67xl\") pod \"nova-cell1-novncproxy-0\" (UID: \"3085d3da-ff4f-4b37-be0a-3f1754a7fbf0\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.987397 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3085d3da-ff4f-4b37-be0a-3f1754a7fbf0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3085d3da-ff4f-4b37-be0a-3f1754a7fbf0\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.987444 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3085d3da-ff4f-4b37-be0a-3f1754a7fbf0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3085d3da-ff4f-4b37-be0a-3f1754a7fbf0\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.993328 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3085d3da-ff4f-4b37-be0a-3f1754a7fbf0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3085d3da-ff4f-4b37-be0a-3f1754a7fbf0\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 08:07:50 crc kubenswrapper[4795]: I1129 08:07:50.996170 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3085d3da-ff4f-4b37-be0a-3f1754a7fbf0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3085d3da-ff4f-4b37-be0a-3f1754a7fbf0\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 08:07:51 crc kubenswrapper[4795]: I1129 08:07:51.022538 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l67xl\" (UniqueName: \"kubernetes.io/projected/3085d3da-ff4f-4b37-be0a-3f1754a7fbf0-kube-api-access-l67xl\") pod \"nova-cell1-novncproxy-0\" (UID: \"3085d3da-ff4f-4b37-be0a-3f1754a7fbf0\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 08:07:51 crc kubenswrapper[4795]: I1129 08:07:51.024738 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-5t79v" Nov 29 08:07:51 crc kubenswrapper[4795]: I1129 08:07:51.151172 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 29 08:07:51 crc kubenswrapper[4795]: I1129 08:07:51.170247 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-wx6m7"] Nov 29 08:07:51 crc kubenswrapper[4795]: I1129 08:07:51.399846 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 29 08:07:51 crc kubenswrapper[4795]: I1129 08:07:51.697803 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 08:07:51 crc kubenswrapper[4795]: W1129 08:07:51.724825 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c93ca60_c407_48a8_888b_d3e600653dc7.slice/crio-26455926ac946057b762eaeec471943756f6101ce1ce771a9e7ea66cf46ed8b9 WatchSource:0}: Error finding container 26455926ac946057b762eaeec471943756f6101ce1ce771a9e7ea66cf46ed8b9: Status 404 returned error can't find the container with id 26455926ac946057b762eaeec471943756f6101ce1ce771a9e7ea66cf46ed8b9 Nov 29 08:07:51 crc kubenswrapper[4795]: I1129 08:07:51.759447 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 08:07:52 crc kubenswrapper[4795]: I1129 08:07:52.001981 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-5t79v"] Nov 29 08:07:52 crc kubenswrapper[4795]: I1129 08:07:52.016954 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6d10ef1f-3b22-4019-a9e6-5ae072648212","Type":"ContainerStarted","Data":"50308beba8d7db2d624305f05b3bd3aa2dacfe49ea9a2ad192fdc2a51726d5e2"} Nov 29 08:07:52 crc kubenswrapper[4795]: I1129 08:07:52.032358 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2c93ca60-c407-48a8-888b-d3e600653dc7","Type":"ContainerStarted","Data":"26455926ac946057b762eaeec471943756f6101ce1ce771a9e7ea66cf46ed8b9"} Nov 29 08:07:52 
crc kubenswrapper[4795]: I1129 08:07:52.033891 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-wx6m7" event={"ID":"7fc47eb1-ee22-476b-92c2-4ccb500fe572","Type":"ContainerStarted","Data":"ac773d0bbaf8eb4a577d844451949beb38fb9ca36039c8be1d79c0b248c06635"} Nov 29 08:07:52 crc kubenswrapper[4795]: I1129 08:07:52.033933 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-wx6m7" event={"ID":"7fc47eb1-ee22-476b-92c2-4ccb500fe572","Type":"ContainerStarted","Data":"42a2f8a877ae122cd40593f5e4a07b11b36201fc69622a6f93e48572537f8a27"} Nov 29 08:07:52 crc kubenswrapper[4795]: I1129 08:07:52.058536 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"92d4ea06-dad6-4b2a-ace8-b87dbe308e40","Type":"ContainerStarted","Data":"f359656805a2033aba2116ba59d48b27abf5e94080c95f21ec223bf39f865e26"} Nov 29 08:07:52 crc kubenswrapper[4795]: I1129 08:07:52.076547 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-6256x"] Nov 29 08:07:52 crc kubenswrapper[4795]: I1129 08:07:52.078554 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-6256x" Nov 29 08:07:52 crc kubenswrapper[4795]: I1129 08:07:52.080148 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18b04c88-379f-448d-8443-dbfa7994e228","Type":"ContainerStarted","Data":"2870b60e0be138bbfaa09cf50cb787560961ef2662473138011bba8d1b61d316"} Nov 29 08:07:52 crc kubenswrapper[4795]: I1129 08:07:52.080373 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 29 08:07:52 crc kubenswrapper[4795]: I1129 08:07:52.087088 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 29 08:07:52 crc kubenswrapper[4795]: I1129 08:07:52.087289 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 29 08:07:52 crc kubenswrapper[4795]: I1129 08:07:52.128129 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-6256x"] Nov 29 08:07:52 crc kubenswrapper[4795]: I1129 08:07:52.154553 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-wx6m7" podStartSLOduration=3.154535499 podStartE2EDuration="3.154535499s" podCreationTimestamp="2025-11-29 08:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:07:52.058159928 +0000 UTC m=+1718.033735718" watchObservedRunningTime="2025-11-29 08:07:52.154535499 +0000 UTC m=+1718.130111289" Nov 29 08:07:52 crc kubenswrapper[4795]: I1129 08:07:52.193856 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.580093308 podStartE2EDuration="9.193834873s" podCreationTimestamp="2025-11-29 08:07:43 +0000 UTC" firstStartedPulling="2025-11-29 08:07:44.778237564 +0000 UTC m=+1710.753813354" 
lastFinishedPulling="2025-11-29 08:07:51.391979129 +0000 UTC m=+1717.367554919" observedRunningTime="2025-11-29 08:07:52.12916845 +0000 UTC m=+1718.104744230" watchObservedRunningTime="2025-11-29 08:07:52.193834873 +0000 UTC m=+1718.169410663" Nov 29 08:07:52 crc kubenswrapper[4795]: I1129 08:07:52.215877 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 29 08:07:52 crc kubenswrapper[4795]: I1129 08:07:52.242093 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ac71396-7101-4507-b4cb-577fc1b95a46-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-6256x\" (UID: \"2ac71396-7101-4507-b4cb-577fc1b95a46\") " pod="openstack/nova-cell1-conductor-db-sync-6256x" Nov 29 08:07:52 crc kubenswrapper[4795]: I1129 08:07:52.242234 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6gzn\" (UniqueName: \"kubernetes.io/projected/2ac71396-7101-4507-b4cb-577fc1b95a46-kube-api-access-r6gzn\") pod \"nova-cell1-conductor-db-sync-6256x\" (UID: \"2ac71396-7101-4507-b4cb-577fc1b95a46\") " pod="openstack/nova-cell1-conductor-db-sync-6256x" Nov 29 08:07:52 crc kubenswrapper[4795]: I1129 08:07:52.242296 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ac71396-7101-4507-b4cb-577fc1b95a46-config-data\") pod \"nova-cell1-conductor-db-sync-6256x\" (UID: \"2ac71396-7101-4507-b4cb-577fc1b95a46\") " pod="openstack/nova-cell1-conductor-db-sync-6256x" Nov 29 08:07:52 crc kubenswrapper[4795]: I1129 08:07:52.242357 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ac71396-7101-4507-b4cb-577fc1b95a46-scripts\") pod \"nova-cell1-conductor-db-sync-6256x\" (UID: 
\"2ac71396-7101-4507-b4cb-577fc1b95a46\") " pod="openstack/nova-cell1-conductor-db-sync-6256x" Nov 29 08:07:52 crc kubenswrapper[4795]: W1129 08:07:52.242767 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3085d3da_ff4f_4b37_be0a_3f1754a7fbf0.slice/crio-12bbf9a73c120a50824091d91901d9ee07305141903bf390fd493e1ab71f4d1a WatchSource:0}: Error finding container 12bbf9a73c120a50824091d91901d9ee07305141903bf390fd493e1ab71f4d1a: Status 404 returned error can't find the container with id 12bbf9a73c120a50824091d91901d9ee07305141903bf390fd493e1ab71f4d1a Nov 29 08:07:52 crc kubenswrapper[4795]: I1129 08:07:52.346558 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6gzn\" (UniqueName: \"kubernetes.io/projected/2ac71396-7101-4507-b4cb-577fc1b95a46-kube-api-access-r6gzn\") pod \"nova-cell1-conductor-db-sync-6256x\" (UID: \"2ac71396-7101-4507-b4cb-577fc1b95a46\") " pod="openstack/nova-cell1-conductor-db-sync-6256x" Nov 29 08:07:52 crc kubenswrapper[4795]: I1129 08:07:52.361980 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ac71396-7101-4507-b4cb-577fc1b95a46-config-data\") pod \"nova-cell1-conductor-db-sync-6256x\" (UID: \"2ac71396-7101-4507-b4cb-577fc1b95a46\") " pod="openstack/nova-cell1-conductor-db-sync-6256x" Nov 29 08:07:52 crc kubenswrapper[4795]: I1129 08:07:52.362294 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ac71396-7101-4507-b4cb-577fc1b95a46-scripts\") pod \"nova-cell1-conductor-db-sync-6256x\" (UID: \"2ac71396-7101-4507-b4cb-577fc1b95a46\") " pod="openstack/nova-cell1-conductor-db-sync-6256x" Nov 29 08:07:52 crc kubenswrapper[4795]: I1129 08:07:52.362570 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/2ac71396-7101-4507-b4cb-577fc1b95a46-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-6256x\" (UID: \"2ac71396-7101-4507-b4cb-577fc1b95a46\") " pod="openstack/nova-cell1-conductor-db-sync-6256x" Nov 29 08:07:52 crc kubenswrapper[4795]: I1129 08:07:52.371727 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ac71396-7101-4507-b4cb-577fc1b95a46-config-data\") pod \"nova-cell1-conductor-db-sync-6256x\" (UID: \"2ac71396-7101-4507-b4cb-577fc1b95a46\") " pod="openstack/nova-cell1-conductor-db-sync-6256x" Nov 29 08:07:52 crc kubenswrapper[4795]: I1129 08:07:52.373001 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ac71396-7101-4507-b4cb-577fc1b95a46-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-6256x\" (UID: \"2ac71396-7101-4507-b4cb-577fc1b95a46\") " pod="openstack/nova-cell1-conductor-db-sync-6256x" Nov 29 08:07:52 crc kubenswrapper[4795]: I1129 08:07:52.376550 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ac71396-7101-4507-b4cb-577fc1b95a46-scripts\") pod \"nova-cell1-conductor-db-sync-6256x\" (UID: \"2ac71396-7101-4507-b4cb-577fc1b95a46\") " pod="openstack/nova-cell1-conductor-db-sync-6256x" Nov 29 08:07:52 crc kubenswrapper[4795]: I1129 08:07:52.379048 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6gzn\" (UniqueName: \"kubernetes.io/projected/2ac71396-7101-4507-b4cb-577fc1b95a46-kube-api-access-r6gzn\") pod \"nova-cell1-conductor-db-sync-6256x\" (UID: \"2ac71396-7101-4507-b4cb-577fc1b95a46\") " pod="openstack/nova-cell1-conductor-db-sync-6256x" Nov 29 08:07:52 crc kubenswrapper[4795]: I1129 08:07:52.434500 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-6256x" Nov 29 08:07:52 crc kubenswrapper[4795]: I1129 08:07:52.635491 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-2r5cb" Nov 29 08:07:52 crc kubenswrapper[4795]: I1129 08:07:52.785500 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee6363b5-2fea-42a6-94fd-748c7c4c3e66-scripts\") pod \"ee6363b5-2fea-42a6-94fd-748c7c4c3e66\" (UID: \"ee6363b5-2fea-42a6-94fd-748c7c4c3e66\") " Nov 29 08:07:52 crc kubenswrapper[4795]: I1129 08:07:52.786034 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee6363b5-2fea-42a6-94fd-748c7c4c3e66-combined-ca-bundle\") pod \"ee6363b5-2fea-42a6-94fd-748c7c4c3e66\" (UID: \"ee6363b5-2fea-42a6-94fd-748c7c4c3e66\") " Nov 29 08:07:52 crc kubenswrapper[4795]: I1129 08:07:52.786331 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee6363b5-2fea-42a6-94fd-748c7c4c3e66-config-data\") pod \"ee6363b5-2fea-42a6-94fd-748c7c4c3e66\" (UID: \"ee6363b5-2fea-42a6-94fd-748c7c4c3e66\") " Nov 29 08:07:52 crc kubenswrapper[4795]: I1129 08:07:52.786371 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tv6rq\" (UniqueName: \"kubernetes.io/projected/ee6363b5-2fea-42a6-94fd-748c7c4c3e66-kube-api-access-tv6rq\") pod \"ee6363b5-2fea-42a6-94fd-748c7c4c3e66\" (UID: \"ee6363b5-2fea-42a6-94fd-748c7c4c3e66\") " Nov 29 08:07:52 crc kubenswrapper[4795]: I1129 08:07:52.798731 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee6363b5-2fea-42a6-94fd-748c7c4c3e66-scripts" (OuterVolumeSpecName: "scripts") pod "ee6363b5-2fea-42a6-94fd-748c7c4c3e66" (UID: "ee6363b5-2fea-42a6-94fd-748c7c4c3e66"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:07:52 crc kubenswrapper[4795]: I1129 08:07:52.802802 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee6363b5-2fea-42a6-94fd-748c7c4c3e66-kube-api-access-tv6rq" (OuterVolumeSpecName: "kube-api-access-tv6rq") pod "ee6363b5-2fea-42a6-94fd-748c7c4c3e66" (UID: "ee6363b5-2fea-42a6-94fd-748c7c4c3e66"). InnerVolumeSpecName "kube-api-access-tv6rq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:07:52 crc kubenswrapper[4795]: I1129 08:07:52.840764 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee6363b5-2fea-42a6-94fd-748c7c4c3e66-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee6363b5-2fea-42a6-94fd-748c7c4c3e66" (UID: "ee6363b5-2fea-42a6-94fd-748c7c4c3e66"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:07:52 crc kubenswrapper[4795]: I1129 08:07:52.844145 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee6363b5-2fea-42a6-94fd-748c7c4c3e66-config-data" (OuterVolumeSpecName: "config-data") pod "ee6363b5-2fea-42a6-94fd-748c7c4c3e66" (UID: "ee6363b5-2fea-42a6-94fd-748c7c4c3e66"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:07:52 crc kubenswrapper[4795]: I1129 08:07:52.889486 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee6363b5-2fea-42a6-94fd-748c7c4c3e66-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:52 crc kubenswrapper[4795]: I1129 08:07:52.889519 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee6363b5-2fea-42a6-94fd-748c7c4c3e66-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:52 crc kubenswrapper[4795]: I1129 08:07:52.889532 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tv6rq\" (UniqueName: \"kubernetes.io/projected/ee6363b5-2fea-42a6-94fd-748c7c4c3e66-kube-api-access-tv6rq\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:52 crc kubenswrapper[4795]: I1129 08:07:52.889543 4795 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee6363b5-2fea-42a6-94fd-748c7c4c3e66-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:53 crc kubenswrapper[4795]: I1129 08:07:53.110222 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-2r5cb" Nov 29 08:07:53 crc kubenswrapper[4795]: I1129 08:07:53.110220 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-2r5cb" event={"ID":"ee6363b5-2fea-42a6-94fd-748c7c4c3e66","Type":"ContainerDied","Data":"7e0ff3cc8fe1a9c5d2327797259ba08f18cafdbb645faaadec6c7054f3ff7dc6"} Nov 29 08:07:53 crc kubenswrapper[4795]: I1129 08:07:53.110337 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e0ff3cc8fe1a9c5d2327797259ba08f18cafdbb645faaadec6c7054f3ff7dc6" Nov 29 08:07:53 crc kubenswrapper[4795]: I1129 08:07:53.125043 4795 generic.go:334] "Generic (PLEG): container finished" podID="8678c9b5-b7c3-4448-9051-17a60a1b92d6" containerID="25ddc0e31154f600166d574b3cdb585251e9beee99622657e991331a6b57b8c6" exitCode=0 Nov 29 08:07:53 crc kubenswrapper[4795]: I1129 08:07:53.125147 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-5t79v" event={"ID":"8678c9b5-b7c3-4448-9051-17a60a1b92d6","Type":"ContainerDied","Data":"25ddc0e31154f600166d574b3cdb585251e9beee99622657e991331a6b57b8c6"} Nov 29 08:07:53 crc kubenswrapper[4795]: I1129 08:07:53.125178 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-5t79v" event={"ID":"8678c9b5-b7c3-4448-9051-17a60a1b92d6","Type":"ContainerStarted","Data":"ee064aaf1e2d359354ae7dbd1d26c05bb8bcbf09ad0f125cee313b73dfe5ef20"} Nov 29 08:07:53 crc kubenswrapper[4795]: I1129 08:07:53.136127 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3085d3da-ff4f-4b37-be0a-3f1754a7fbf0","Type":"ContainerStarted","Data":"12bbf9a73c120a50824091d91901d9ee07305141903bf390fd493e1ab71f4d1a"} Nov 29 08:07:53 crc kubenswrapper[4795]: I1129 08:07:53.205778 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-6256x"] Nov 29 08:07:54 crc kubenswrapper[4795]: I1129 
08:07:54.151931 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-6256x" event={"ID":"2ac71396-7101-4507-b4cb-577fc1b95a46","Type":"ContainerStarted","Data":"3549bed3fc9cc52d295eb5ae5d381d8c5c0b7b902b430ef59d544e1efe262dfa"} Nov 29 08:07:54 crc kubenswrapper[4795]: I1129 08:07:54.152574 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-6256x" event={"ID":"2ac71396-7101-4507-b4cb-577fc1b95a46","Type":"ContainerStarted","Data":"57c971f47024c2c08ffbedf05115634391e99cb4ce6e4af65c4ab2fd66621284"} Nov 29 08:07:54 crc kubenswrapper[4795]: I1129 08:07:54.159550 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-5t79v" event={"ID":"8678c9b5-b7c3-4448-9051-17a60a1b92d6","Type":"ContainerStarted","Data":"cd8ff6cafe85973f6f3fc5a92709a648dd2baf5503097b3eb1d1a06dded7a3fe"} Nov 29 08:07:54 crc kubenswrapper[4795]: I1129 08:07:54.159820 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7877d89589-5t79v" Nov 29 08:07:54 crc kubenswrapper[4795]: I1129 08:07:54.197681 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-6256x" podStartSLOduration=3.197657339 podStartE2EDuration="3.197657339s" podCreationTimestamp="2025-11-29 08:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:07:54.170997803 +0000 UTC m=+1720.146573603" watchObservedRunningTime="2025-11-29 08:07:54.197657339 +0000 UTC m=+1720.173233129" Nov 29 08:07:54 crc kubenswrapper[4795]: I1129 08:07:54.202741 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7877d89589-5t79v" podStartSLOduration=4.202718782 podStartE2EDuration="4.202718782s" podCreationTimestamp="2025-11-29 08:07:50 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:07:54.189958501 +0000 UTC m=+1720.165534281" watchObservedRunningTime="2025-11-29 08:07:54.202718782 +0000 UTC m=+1720.178294572" Nov 29 08:07:54 crc kubenswrapper[4795]: I1129 08:07:54.446104 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 08:07:54 crc kubenswrapper[4795]: I1129 08:07:54.463724 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 29 08:07:56 crc kubenswrapper[4795]: I1129 08:07:56.979984 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 29 08:07:56 crc kubenswrapper[4795]: E1129 08:07:56.987247 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee6363b5-2fea-42a6-94fd-748c7c4c3e66" containerName="aodh-db-sync" Nov 29 08:07:56 crc kubenswrapper[4795]: I1129 08:07:56.987301 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee6363b5-2fea-42a6-94fd-748c7c4c3e66" containerName="aodh-db-sync" Nov 29 08:07:56 crc kubenswrapper[4795]: I1129 08:07:56.988380 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee6363b5-2fea-42a6-94fd-748c7c4c3e66" containerName="aodh-db-sync" Nov 29 08:07:57 crc kubenswrapper[4795]: I1129 08:07:57.002311 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 29 08:07:57 crc kubenswrapper[4795]: I1129 08:07:57.005929 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 29 08:07:57 crc kubenswrapper[4795]: I1129 08:07:57.006067 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-xg4wp" Nov 29 08:07:57 crc kubenswrapper[4795]: I1129 08:07:57.007892 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 29 08:07:57 crc kubenswrapper[4795]: I1129 08:07:57.009818 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 29 08:07:57 crc kubenswrapper[4795]: I1129 08:07:57.106113 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4552f7de-b8a7-4456-81ce-1faa3a69d96b-combined-ca-bundle\") pod \"aodh-0\" (UID: \"4552f7de-b8a7-4456-81ce-1faa3a69d96b\") " pod="openstack/aodh-0" Nov 29 08:07:57 crc kubenswrapper[4795]: I1129 08:07:57.106727 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dq47\" (UniqueName: \"kubernetes.io/projected/4552f7de-b8a7-4456-81ce-1faa3a69d96b-kube-api-access-2dq47\") pod \"aodh-0\" (UID: \"4552f7de-b8a7-4456-81ce-1faa3a69d96b\") " pod="openstack/aodh-0" Nov 29 08:07:57 crc kubenswrapper[4795]: I1129 08:07:57.106767 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4552f7de-b8a7-4456-81ce-1faa3a69d96b-config-data\") pod \"aodh-0\" (UID: \"4552f7de-b8a7-4456-81ce-1faa3a69d96b\") " pod="openstack/aodh-0" Nov 29 08:07:57 crc kubenswrapper[4795]: I1129 08:07:57.106875 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4552f7de-b8a7-4456-81ce-1faa3a69d96b-scripts\") pod \"aodh-0\" (UID: \"4552f7de-b8a7-4456-81ce-1faa3a69d96b\") " pod="openstack/aodh-0" Nov 29 08:07:57 crc kubenswrapper[4795]: I1129 08:07:57.211969 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4552f7de-b8a7-4456-81ce-1faa3a69d96b-scripts\") pod \"aodh-0\" (UID: \"4552f7de-b8a7-4456-81ce-1faa3a69d96b\") " pod="openstack/aodh-0" Nov 29 08:07:57 crc kubenswrapper[4795]: I1129 08:07:57.212124 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4552f7de-b8a7-4456-81ce-1faa3a69d96b-combined-ca-bundle\") pod \"aodh-0\" (UID: \"4552f7de-b8a7-4456-81ce-1faa3a69d96b\") " pod="openstack/aodh-0" Nov 29 08:07:57 crc kubenswrapper[4795]: I1129 08:07:57.212203 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dq47\" (UniqueName: \"kubernetes.io/projected/4552f7de-b8a7-4456-81ce-1faa3a69d96b-kube-api-access-2dq47\") pod \"aodh-0\" (UID: \"4552f7de-b8a7-4456-81ce-1faa3a69d96b\") " pod="openstack/aodh-0" Nov 29 08:07:57 crc kubenswrapper[4795]: I1129 08:07:57.212237 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4552f7de-b8a7-4456-81ce-1faa3a69d96b-config-data\") pod \"aodh-0\" (UID: \"4552f7de-b8a7-4456-81ce-1faa3a69d96b\") " pod="openstack/aodh-0" Nov 29 08:07:57 crc kubenswrapper[4795]: I1129 08:07:57.218966 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4552f7de-b8a7-4456-81ce-1faa3a69d96b-combined-ca-bundle\") pod \"aodh-0\" (UID: \"4552f7de-b8a7-4456-81ce-1faa3a69d96b\") " pod="openstack/aodh-0" Nov 29 08:07:57 crc kubenswrapper[4795]: I1129 08:07:57.241223 4795 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4552f7de-b8a7-4456-81ce-1faa3a69d96b-scripts\") pod \"aodh-0\" (UID: \"4552f7de-b8a7-4456-81ce-1faa3a69d96b\") " pod="openstack/aodh-0" Nov 29 08:07:57 crc kubenswrapper[4795]: I1129 08:07:57.246007 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dq47\" (UniqueName: \"kubernetes.io/projected/4552f7de-b8a7-4456-81ce-1faa3a69d96b-kube-api-access-2dq47\") pod \"aodh-0\" (UID: \"4552f7de-b8a7-4456-81ce-1faa3a69d96b\") " pod="openstack/aodh-0" Nov 29 08:07:57 crc kubenswrapper[4795]: I1129 08:07:57.264960 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4552f7de-b8a7-4456-81ce-1faa3a69d96b-config-data\") pod \"aodh-0\" (UID: \"4552f7de-b8a7-4456-81ce-1faa3a69d96b\") " pod="openstack/aodh-0" Nov 29 08:07:57 crc kubenswrapper[4795]: I1129 08:07:57.378737 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 29 08:07:58 crc kubenswrapper[4795]: W1129 08:07:58.012181 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4552f7de_b8a7_4456_81ce_1faa3a69d96b.slice/crio-f42c05addda10b6c3da07f39657dd92cf0b5e56aed9b4d12fbe0f5efad8710b2 WatchSource:0}: Error finding container f42c05addda10b6c3da07f39657dd92cf0b5e56aed9b4d12fbe0f5efad8710b2: Status 404 returned error can't find the container with id f42c05addda10b6c3da07f39657dd92cf0b5e56aed9b4d12fbe0f5efad8710b2 Nov 29 08:07:58 crc kubenswrapper[4795]: I1129 08:07:58.028217 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 29 08:07:58 crc kubenswrapper[4795]: I1129 08:07:58.267698 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"2c93ca60-c407-48a8-888b-d3e600653dc7","Type":"ContainerStarted","Data":"def7f5910d1730a6e43214202d8cc16682f75f488b67a4b0fe92039d7c6bf132"} Nov 29 08:07:58 crc kubenswrapper[4795]: I1129 08:07:58.268055 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2c93ca60-c407-48a8-888b-d3e600653dc7","Type":"ContainerStarted","Data":"255f116de412296e32effbc55abbb6efb78da338a670a0f386d8190004713006"} Nov 29 08:07:58 crc kubenswrapper[4795]: I1129 08:07:58.268025 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2c93ca60-c407-48a8-888b-d3e600653dc7" containerName="nova-metadata-log" containerID="cri-o://255f116de412296e32effbc55abbb6efb78da338a670a0f386d8190004713006" gracePeriod=30 Nov 29 08:07:58 crc kubenswrapper[4795]: I1129 08:07:58.268170 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2c93ca60-c407-48a8-888b-d3e600653dc7" containerName="nova-metadata-metadata" containerID="cri-o://def7f5910d1730a6e43214202d8cc16682f75f488b67a4b0fe92039d7c6bf132" gracePeriod=30 Nov 29 08:07:58 crc kubenswrapper[4795]: I1129 08:07:58.273779 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"92d4ea06-dad6-4b2a-ace8-b87dbe308e40","Type":"ContainerStarted","Data":"a4f6aea1a0d61b730f6930f040262834f94192ecf4b467a6c67fdd61126f7ae0"} Nov 29 08:07:58 crc kubenswrapper[4795]: I1129 08:07:58.304269 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="3085d3da-ff4f-4b37-be0a-3f1754a7fbf0" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://9438cceefedbe627af13cc105670b054eaa22dae99e7507c69bd095fc311c8ef" gracePeriod=30 Nov 29 08:07:58 crc kubenswrapper[4795]: I1129 08:07:58.304771 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"4552f7de-b8a7-4456-81ce-1faa3a69d96b","Type":"ContainerStarted","Data":"f42c05addda10b6c3da07f39657dd92cf0b5e56aed9b4d12fbe0f5efad8710b2"} Nov 29 08:07:58 crc kubenswrapper[4795]: I1129 08:07:58.304827 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3085d3da-ff4f-4b37-be0a-3f1754a7fbf0","Type":"ContainerStarted","Data":"9438cceefedbe627af13cc105670b054eaa22dae99e7507c69bd095fc311c8ef"} Nov 29 08:07:58 crc kubenswrapper[4795]: I1129 08:07:58.324883 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6d10ef1f-3b22-4019-a9e6-5ae072648212","Type":"ContainerStarted","Data":"bed7b166cf6e44a1c1af812c4b22eb1bbedab6085744da5c8c38cd3ec0cd3a07"} Nov 29 08:07:58 crc kubenswrapper[4795]: I1129 08:07:58.324951 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6d10ef1f-3b22-4019-a9e6-5ae072648212","Type":"ContainerStarted","Data":"9689f21b7f18fe8e3a851ee3ba063b06971fff9cf95cbeade85d635cd8b0c26b"} Nov 29 08:07:58 crc kubenswrapper[4795]: I1129 08:07:58.327778 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.155094404 podStartE2EDuration="8.327751081s" podCreationTimestamp="2025-11-29 08:07:50 +0000 UTC" firstStartedPulling="2025-11-29 08:07:51.727482387 +0000 UTC m=+1717.703058177" lastFinishedPulling="2025-11-29 08:07:56.900139064 +0000 UTC m=+1722.875714854" observedRunningTime="2025-11-29 08:07:58.309817382 +0000 UTC m=+1724.285393172" watchObservedRunningTime="2025-11-29 08:07:58.327751081 +0000 UTC m=+1724.303326871" Nov 29 08:07:58 crc kubenswrapper[4795]: I1129 08:07:58.352358 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.129872688 podStartE2EDuration="8.352336987s" podCreationTimestamp="2025-11-29 08:07:50 +0000 UTC" firstStartedPulling="2025-11-29 
08:07:51.680785333 +0000 UTC m=+1717.656361123" lastFinishedPulling="2025-11-29 08:07:56.903249632 +0000 UTC m=+1722.878825422" observedRunningTime="2025-11-29 08:07:58.328164502 +0000 UTC m=+1724.303740292" watchObservedRunningTime="2025-11-29 08:07:58.352336987 +0000 UTC m=+1724.327912767" Nov 29 08:07:58 crc kubenswrapper[4795]: I1129 08:07:58.378458 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.726966659 podStartE2EDuration="8.378435396s" podCreationTimestamp="2025-11-29 08:07:50 +0000 UTC" firstStartedPulling="2025-11-29 08:07:52.257870697 +0000 UTC m=+1718.233446487" lastFinishedPulling="2025-11-29 08:07:56.909339434 +0000 UTC m=+1722.884915224" observedRunningTime="2025-11-29 08:07:58.348066856 +0000 UTC m=+1724.323642646" watchObservedRunningTime="2025-11-29 08:07:58.378435396 +0000 UTC m=+1724.354011186" Nov 29 08:07:58 crc kubenswrapper[4795]: I1129 08:07:58.380482 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.995647584 podStartE2EDuration="8.380473374s" podCreationTimestamp="2025-11-29 08:07:50 +0000 UTC" firstStartedPulling="2025-11-29 08:07:51.517512176 +0000 UTC m=+1717.493087966" lastFinishedPulling="2025-11-29 08:07:56.902337966 +0000 UTC m=+1722.877913756" observedRunningTime="2025-11-29 08:07:58.376985805 +0000 UTC m=+1724.352561595" watchObservedRunningTime="2025-11-29 08:07:58.380473374 +0000 UTC m=+1724.356049164" Nov 29 08:07:59 crc kubenswrapper[4795]: I1129 08:07:59.350544 4795 generic.go:334] "Generic (PLEG): container finished" podID="2c93ca60-c407-48a8-888b-d3e600653dc7" containerID="def7f5910d1730a6e43214202d8cc16682f75f488b67a4b0fe92039d7c6bf132" exitCode=0 Nov 29 08:07:59 crc kubenswrapper[4795]: I1129 08:07:59.351142 4795 generic.go:334] "Generic (PLEG): container finished" podID="2c93ca60-c407-48a8-888b-d3e600653dc7" 
containerID="255f116de412296e32effbc55abbb6efb78da338a670a0f386d8190004713006" exitCode=143 Nov 29 08:07:59 crc kubenswrapper[4795]: I1129 08:07:59.350631 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2c93ca60-c407-48a8-888b-d3e600653dc7","Type":"ContainerDied","Data":"def7f5910d1730a6e43214202d8cc16682f75f488b67a4b0fe92039d7c6bf132"} Nov 29 08:07:59 crc kubenswrapper[4795]: I1129 08:07:59.351245 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2c93ca60-c407-48a8-888b-d3e600653dc7","Type":"ContainerDied","Data":"255f116de412296e32effbc55abbb6efb78da338a670a0f386d8190004713006"} Nov 29 08:07:59 crc kubenswrapper[4795]: I1129 08:07:59.351259 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2c93ca60-c407-48a8-888b-d3e600653dc7","Type":"ContainerDied","Data":"26455926ac946057b762eaeec471943756f6101ce1ce771a9e7ea66cf46ed8b9"} Nov 29 08:07:59 crc kubenswrapper[4795]: I1129 08:07:59.351269 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26455926ac946057b762eaeec471943756f6101ce1ce771a9e7ea66cf46ed8b9" Nov 29 08:07:59 crc kubenswrapper[4795]: I1129 08:07:59.354752 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4552f7de-b8a7-4456-81ce-1faa3a69d96b","Type":"ContainerStarted","Data":"5139803ab35b32904c3a52717017f5244689ecab0487b355a9edc071e8312479"} Nov 29 08:07:59 crc kubenswrapper[4795]: I1129 08:07:59.390696 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 08:07:59 crc kubenswrapper[4795]: I1129 08:07:59.412265 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c93ca60-c407-48a8-888b-d3e600653dc7-combined-ca-bundle\") pod \"2c93ca60-c407-48a8-888b-d3e600653dc7\" (UID: \"2c93ca60-c407-48a8-888b-d3e600653dc7\") " Nov 29 08:07:59 crc kubenswrapper[4795]: I1129 08:07:59.412503 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c93ca60-c407-48a8-888b-d3e600653dc7-config-data\") pod \"2c93ca60-c407-48a8-888b-d3e600653dc7\" (UID: \"2c93ca60-c407-48a8-888b-d3e600653dc7\") " Nov 29 08:07:59 crc kubenswrapper[4795]: I1129 08:07:59.476842 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c93ca60-c407-48a8-888b-d3e600653dc7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c93ca60-c407-48a8-888b-d3e600653dc7" (UID: "2c93ca60-c407-48a8-888b-d3e600653dc7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:07:59 crc kubenswrapper[4795]: I1129 08:07:59.515080 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmlt5\" (UniqueName: \"kubernetes.io/projected/2c93ca60-c407-48a8-888b-d3e600653dc7-kube-api-access-xmlt5\") pod \"2c93ca60-c407-48a8-888b-d3e600653dc7\" (UID: \"2c93ca60-c407-48a8-888b-d3e600653dc7\") " Nov 29 08:07:59 crc kubenswrapper[4795]: I1129 08:07:59.515136 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c93ca60-c407-48a8-888b-d3e600653dc7-logs\") pod \"2c93ca60-c407-48a8-888b-d3e600653dc7\" (UID: \"2c93ca60-c407-48a8-888b-d3e600653dc7\") " Nov 29 08:07:59 crc kubenswrapper[4795]: I1129 08:07:59.516269 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c93ca60-c407-48a8-888b-d3e600653dc7-logs" (OuterVolumeSpecName: "logs") pod "2c93ca60-c407-48a8-888b-d3e600653dc7" (UID: "2c93ca60-c407-48a8-888b-d3e600653dc7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:07:59 crc kubenswrapper[4795]: I1129 08:07:59.517036 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c93ca60-c407-48a8-888b-d3e600653dc7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:59 crc kubenswrapper[4795]: I1129 08:07:59.569223 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c93ca60-c407-48a8-888b-d3e600653dc7-kube-api-access-xmlt5" (OuterVolumeSpecName: "kube-api-access-xmlt5") pod "2c93ca60-c407-48a8-888b-d3e600653dc7" (UID: "2c93ca60-c407-48a8-888b-d3e600653dc7"). InnerVolumeSpecName "kube-api-access-xmlt5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:07:59 crc kubenswrapper[4795]: I1129 08:07:59.578812 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c93ca60-c407-48a8-888b-d3e600653dc7-config-data" (OuterVolumeSpecName: "config-data") pod "2c93ca60-c407-48a8-888b-d3e600653dc7" (UID: "2c93ca60-c407-48a8-888b-d3e600653dc7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:07:59 crc kubenswrapper[4795]: I1129 08:07:59.620268 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmlt5\" (UniqueName: \"kubernetes.io/projected/2c93ca60-c407-48a8-888b-d3e600653dc7-kube-api-access-xmlt5\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:59 crc kubenswrapper[4795]: I1129 08:07:59.620308 4795 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c93ca60-c407-48a8-888b-d3e600653dc7-logs\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:59 crc kubenswrapper[4795]: I1129 08:07:59.620321 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c93ca60-c407-48a8-888b-d3e600653dc7-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:00 crc kubenswrapper[4795]: I1129 08:08:00.370677 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 08:08:00 crc kubenswrapper[4795]: I1129 08:08:00.412089 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 08:08:00 crc kubenswrapper[4795]: I1129 08:08:00.435910 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 08:08:00 crc kubenswrapper[4795]: I1129 08:08:00.449168 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 29 08:08:00 crc kubenswrapper[4795]: I1129 08:08:00.464574 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 29 08:08:00 crc kubenswrapper[4795]: E1129 08:08:00.465191 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c93ca60-c407-48a8-888b-d3e600653dc7" containerName="nova-metadata-log" Nov 29 08:08:00 crc kubenswrapper[4795]: I1129 08:08:00.465209 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c93ca60-c407-48a8-888b-d3e600653dc7" containerName="nova-metadata-log" Nov 29 08:08:00 crc kubenswrapper[4795]: E1129 08:08:00.465228 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c93ca60-c407-48a8-888b-d3e600653dc7" containerName="nova-metadata-metadata" Nov 29 08:08:00 crc kubenswrapper[4795]: I1129 08:08:00.465234 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c93ca60-c407-48a8-888b-d3e600653dc7" containerName="nova-metadata-metadata" Nov 29 08:08:00 crc kubenswrapper[4795]: I1129 08:08:00.465446 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c93ca60-c407-48a8-888b-d3e600653dc7" containerName="nova-metadata-log" Nov 29 08:08:00 crc kubenswrapper[4795]: I1129 08:08:00.465485 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c93ca60-c407-48a8-888b-d3e600653dc7" containerName="nova-metadata-metadata" Nov 29 08:08:00 crc kubenswrapper[4795]: I1129 08:08:00.466890 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 08:08:00 crc kubenswrapper[4795]: I1129 08:08:00.473841 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 29 08:08:00 crc kubenswrapper[4795]: I1129 08:08:00.474103 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 29 08:08:00 crc kubenswrapper[4795]: I1129 08:08:00.498955 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 08:08:00 crc kubenswrapper[4795]: I1129 08:08:00.530065 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 29 08:08:00 crc kubenswrapper[4795]: I1129 08:08:00.530105 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 29 08:08:00 crc kubenswrapper[4795]: I1129 08:08:00.660035 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a467cdb-1c68-446a-853a-3bb61468677f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9a467cdb-1c68-446a-853a-3bb61468677f\") " pod="openstack/nova-metadata-0" Nov 29 08:08:00 crc kubenswrapper[4795]: I1129 08:08:00.660107 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a467cdb-1c68-446a-853a-3bb61468677f-config-data\") pod \"nova-metadata-0\" (UID: \"9a467cdb-1c68-446a-853a-3bb61468677f\") " pod="openstack/nova-metadata-0" Nov 29 08:08:00 crc kubenswrapper[4795]: I1129 08:08:00.660622 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a467cdb-1c68-446a-853a-3bb61468677f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"9a467cdb-1c68-446a-853a-3bb61468677f\") " pod="openstack/nova-metadata-0" Nov 29 08:08:00 crc kubenswrapper[4795]: I1129 08:08:00.660772 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l8tx\" (UniqueName: \"kubernetes.io/projected/9a467cdb-1c68-446a-853a-3bb61468677f-kube-api-access-5l8tx\") pod \"nova-metadata-0\" (UID: \"9a467cdb-1c68-446a-853a-3bb61468677f\") " pod="openstack/nova-metadata-0" Nov 29 08:08:00 crc kubenswrapper[4795]: I1129 08:08:00.660835 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a467cdb-1c68-446a-853a-3bb61468677f-logs\") pod \"nova-metadata-0\" (UID: \"9a467cdb-1c68-446a-853a-3bb61468677f\") " pod="openstack/nova-metadata-0" Nov 29 08:08:00 crc kubenswrapper[4795]: I1129 08:08:00.763179 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a467cdb-1c68-446a-853a-3bb61468677f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9a467cdb-1c68-446a-853a-3bb61468677f\") " pod="openstack/nova-metadata-0" Nov 29 08:08:00 crc kubenswrapper[4795]: I1129 08:08:00.763232 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a467cdb-1c68-446a-853a-3bb61468677f-config-data\") pod \"nova-metadata-0\" (UID: \"9a467cdb-1c68-446a-853a-3bb61468677f\") " pod="openstack/nova-metadata-0" Nov 29 08:08:00 crc kubenswrapper[4795]: I1129 08:08:00.763381 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a467cdb-1c68-446a-853a-3bb61468677f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9a467cdb-1c68-446a-853a-3bb61468677f\") " pod="openstack/nova-metadata-0" Nov 29 08:08:00 crc kubenswrapper[4795]: I1129 08:08:00.763416 
4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l8tx\" (UniqueName: \"kubernetes.io/projected/9a467cdb-1c68-446a-853a-3bb61468677f-kube-api-access-5l8tx\") pod \"nova-metadata-0\" (UID: \"9a467cdb-1c68-446a-853a-3bb61468677f\") " pod="openstack/nova-metadata-0" Nov 29 08:08:00 crc kubenswrapper[4795]: I1129 08:08:00.763454 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a467cdb-1c68-446a-853a-3bb61468677f-logs\") pod \"nova-metadata-0\" (UID: \"9a467cdb-1c68-446a-853a-3bb61468677f\") " pod="openstack/nova-metadata-0" Nov 29 08:08:00 crc kubenswrapper[4795]: I1129 08:08:00.763880 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a467cdb-1c68-446a-853a-3bb61468677f-logs\") pod \"nova-metadata-0\" (UID: \"9a467cdb-1c68-446a-853a-3bb61468677f\") " pod="openstack/nova-metadata-0" Nov 29 08:08:00 crc kubenswrapper[4795]: I1129 08:08:00.771623 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a467cdb-1c68-446a-853a-3bb61468677f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9a467cdb-1c68-446a-853a-3bb61468677f\") " pod="openstack/nova-metadata-0" Nov 29 08:08:00 crc kubenswrapper[4795]: I1129 08:08:00.778425 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a467cdb-1c68-446a-853a-3bb61468677f-config-data\") pod \"nova-metadata-0\" (UID: \"9a467cdb-1c68-446a-853a-3bb61468677f\") " pod="openstack/nova-metadata-0" Nov 29 08:08:00 crc kubenswrapper[4795]: I1129 08:08:00.779208 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a467cdb-1c68-446a-853a-3bb61468677f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"9a467cdb-1c68-446a-853a-3bb61468677f\") " pod="openstack/nova-metadata-0" Nov 29 08:08:00 crc kubenswrapper[4795]: I1129 08:08:00.800752 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l8tx\" (UniqueName: \"kubernetes.io/projected/9a467cdb-1c68-446a-853a-3bb61468677f-kube-api-access-5l8tx\") pod \"nova-metadata-0\" (UID: \"9a467cdb-1c68-446a-853a-3bb61468677f\") " pod="openstack/nova-metadata-0" Nov 29 08:08:00 crc kubenswrapper[4795]: I1129 08:08:00.807954 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 08:08:00 crc kubenswrapper[4795]: I1129 08:08:00.854632 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:08:00 crc kubenswrapper[4795]: I1129 08:08:00.854967 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="18b04c88-379f-448d-8443-dbfa7994e228" containerName="ceilometer-central-agent" containerID="cri-o://5d8b40155304018bf6ca1ecd5005f2e907e8913c4e5b3f2ac8afc23ad087375b" gracePeriod=30 Nov 29 08:08:00 crc kubenswrapper[4795]: I1129 08:08:00.855066 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="18b04c88-379f-448d-8443-dbfa7994e228" containerName="proxy-httpd" containerID="cri-o://2870b60e0be138bbfaa09cf50cb787560961ef2662473138011bba8d1b61d316" gracePeriod=30 Nov 29 08:08:00 crc kubenswrapper[4795]: I1129 08:08:00.855104 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="18b04c88-379f-448d-8443-dbfa7994e228" containerName="sg-core" containerID="cri-o://e6f354fa94c5554bbf1c4808a21431dbb9b715822ccecb0500ca3bb734a54296" gracePeriod=30 Nov 29 08:08:00 crc kubenswrapper[4795]: I1129 08:08:00.855149 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="18b04c88-379f-448d-8443-dbfa7994e228" containerName="ceilometer-notification-agent" containerID="cri-o://acadca80a057d6477b9cb3e2ba412b9341aac67013024dd5dc88de8a257b99dd" gracePeriod=30 Nov 29 08:08:00 crc kubenswrapper[4795]: I1129 08:08:00.906747 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 29 08:08:00 crc kubenswrapper[4795]: I1129 08:08:00.906797 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 29 08:08:00 crc kubenswrapper[4795]: I1129 08:08:00.942435 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 29 08:08:01 crc kubenswrapper[4795]: I1129 08:08:01.027795 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7877d89589-5t79v" Nov 29 08:08:01 crc kubenswrapper[4795]: I1129 08:08:01.141240 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-krdd7"] Nov 29 08:08:01 crc kubenswrapper[4795]: I1129 08:08:01.141996 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d978555f9-krdd7" podUID="f7a16be4-8056-4ba8-9720-6503361132f4" containerName="dnsmasq-dns" containerID="cri-o://f90c34869a371cd02f74b97dc6ac1d2869ecf9d7bf1f0bd98b84d1a688640cfe" gracePeriod=10 Nov 29 08:08:01 crc kubenswrapper[4795]: I1129 08:08:01.153957 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 29 08:08:01 crc kubenswrapper[4795]: I1129 08:08:01.321179 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7d978555f9-krdd7" podUID="f7a16be4-8056-4ba8-9720-6503361132f4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.211:5353: connect: connection refused" Nov 29 08:08:01 crc kubenswrapper[4795]: I1129 08:08:01.500979 4795 generic.go:334] "Generic 
(PLEG): container finished" podID="18b04c88-379f-448d-8443-dbfa7994e228" containerID="2870b60e0be138bbfaa09cf50cb787560961ef2662473138011bba8d1b61d316" exitCode=0 Nov 29 08:08:01 crc kubenswrapper[4795]: I1129 08:08:01.501357 4795 generic.go:334] "Generic (PLEG): container finished" podID="18b04c88-379f-448d-8443-dbfa7994e228" containerID="e6f354fa94c5554bbf1c4808a21431dbb9b715822ccecb0500ca3bb734a54296" exitCode=2 Nov 29 08:08:01 crc kubenswrapper[4795]: I1129 08:08:01.501436 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18b04c88-379f-448d-8443-dbfa7994e228","Type":"ContainerDied","Data":"2870b60e0be138bbfaa09cf50cb787560961ef2662473138011bba8d1b61d316"} Nov 29 08:08:01 crc kubenswrapper[4795]: I1129 08:08:01.501463 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18b04c88-379f-448d-8443-dbfa7994e228","Type":"ContainerDied","Data":"e6f354fa94c5554bbf1c4808a21431dbb9b715822ccecb0500ca3bb734a54296"} Nov 29 08:08:01 crc kubenswrapper[4795]: I1129 08:08:01.524908 4795 generic.go:334] "Generic (PLEG): container finished" podID="f7a16be4-8056-4ba8-9720-6503361132f4" containerID="f90c34869a371cd02f74b97dc6ac1d2869ecf9d7bf1f0bd98b84d1a688640cfe" exitCode=0 Nov 29 08:08:01 crc kubenswrapper[4795]: I1129 08:08:01.525020 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-krdd7" event={"ID":"f7a16be4-8056-4ba8-9720-6503361132f4","Type":"ContainerDied","Data":"f90c34869a371cd02f74b97dc6ac1d2869ecf9d7bf1f0bd98b84d1a688640cfe"} Nov 29 08:08:01 crc kubenswrapper[4795]: I1129 08:08:01.578006 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 29 08:08:01 crc kubenswrapper[4795]: I1129 08:08:01.614840 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6d10ef1f-3b22-4019-a9e6-5ae072648212" containerName="nova-api-log" probeResult="failure" 
output="Get \"http://10.217.0.236:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 29 08:08:01 crc kubenswrapper[4795]: I1129 08:08:01.615807 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6d10ef1f-3b22-4019-a9e6-5ae072648212" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.236:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 29 08:08:01 crc kubenswrapper[4795]: I1129 08:08:01.939609 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-krdd7" Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.105176 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.112042 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7a16be4-8056-4ba8-9720-6503361132f4-dns-svc\") pod \"f7a16be4-8056-4ba8-9720-6503361132f4\" (UID: \"f7a16be4-8056-4ba8-9720-6503361132f4\") " Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.112123 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f7a16be4-8056-4ba8-9720-6503361132f4-ovsdbserver-nb\") pod \"f7a16be4-8056-4ba8-9720-6503361132f4\" (UID: \"f7a16be4-8056-4ba8-9720-6503361132f4\") " Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.112277 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f7a16be4-8056-4ba8-9720-6503361132f4-ovsdbserver-sb\") pod \"f7a16be4-8056-4ba8-9720-6503361132f4\" (UID: \"f7a16be4-8056-4ba8-9720-6503361132f4\") " Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.112329 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f7a16be4-8056-4ba8-9720-6503361132f4-dns-swift-storage-0\") pod \"f7a16be4-8056-4ba8-9720-6503361132f4\" (UID: \"f7a16be4-8056-4ba8-9720-6503361132f4\") " Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.112380 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf9lp\" (UniqueName: \"kubernetes.io/projected/f7a16be4-8056-4ba8-9720-6503361132f4-kube-api-access-zf9lp\") pod \"f7a16be4-8056-4ba8-9720-6503361132f4\" (UID: \"f7a16be4-8056-4ba8-9720-6503361132f4\") " Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.113883 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7a16be4-8056-4ba8-9720-6503361132f4-config\") pod \"f7a16be4-8056-4ba8-9720-6503361132f4\" (UID: \"f7a16be4-8056-4ba8-9720-6503361132f4\") " Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.144103 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7a16be4-8056-4ba8-9720-6503361132f4-kube-api-access-zf9lp" (OuterVolumeSpecName: "kube-api-access-zf9lp") pod "f7a16be4-8056-4ba8-9720-6503361132f4" (UID: "f7a16be4-8056-4ba8-9720-6503361132f4"). InnerVolumeSpecName "kube-api-access-zf9lp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.176882 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7a16be4-8056-4ba8-9720-6503361132f4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f7a16be4-8056-4ba8-9720-6503361132f4" (UID: "f7a16be4-8056-4ba8-9720-6503361132f4"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.226539 4795 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7a16be4-8056-4ba8-9720-6503361132f4-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.240815 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zf9lp\" (UniqueName: \"kubernetes.io/projected/f7a16be4-8056-4ba8-9720-6503361132f4-kube-api-access-zf9lp\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.242253 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7a16be4-8056-4ba8-9720-6503361132f4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f7a16be4-8056-4ba8-9720-6503361132f4" (UID: "f7a16be4-8056-4ba8-9720-6503361132f4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.278514 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7a16be4-8056-4ba8-9720-6503361132f4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f7a16be4-8056-4ba8-9720-6503361132f4" (UID: "f7a16be4-8056-4ba8-9720-6503361132f4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.289795 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7a16be4-8056-4ba8-9720-6503361132f4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f7a16be4-8056-4ba8-9720-6503361132f4" (UID: "f7a16be4-8056-4ba8-9720-6503361132f4"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.313134 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c93ca60-c407-48a8-888b-d3e600653dc7" path="/var/lib/kubelet/pods/2c93ca60-c407-48a8-888b-d3e600653dc7/volumes" Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.356851 4795 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f7a16be4-8056-4ba8-9720-6503361132f4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.357265 4795 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f7a16be4-8056-4ba8-9720-6503361132f4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.357352 4795 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f7a16be4-8056-4ba8-9720-6503361132f4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.382086 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7a16be4-8056-4ba8-9720-6503361132f4-config" (OuterVolumeSpecName: "config") pod "f7a16be4-8056-4ba8-9720-6503361132f4" (UID: "f7a16be4-8056-4ba8-9720-6503361132f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.460098 4795 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7a16be4-8056-4ba8-9720-6503361132f4-config\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.490619 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.561507 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t27b7\" (UniqueName: \"kubernetes.io/projected/18b04c88-379f-448d-8443-dbfa7994e228-kube-api-access-t27b7\") pod \"18b04c88-379f-448d-8443-dbfa7994e228\" (UID: \"18b04c88-379f-448d-8443-dbfa7994e228\") " Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.561627 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18b04c88-379f-448d-8443-dbfa7994e228-log-httpd\") pod \"18b04c88-379f-448d-8443-dbfa7994e228\" (UID: \"18b04c88-379f-448d-8443-dbfa7994e228\") " Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.561726 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18b04c88-379f-448d-8443-dbfa7994e228-combined-ca-bundle\") pod \"18b04c88-379f-448d-8443-dbfa7994e228\" (UID: \"18b04c88-379f-448d-8443-dbfa7994e228\") " Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.561832 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/18b04c88-379f-448d-8443-dbfa7994e228-sg-core-conf-yaml\") pod \"18b04c88-379f-448d-8443-dbfa7994e228\" (UID: \"18b04c88-379f-448d-8443-dbfa7994e228\") " Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.561927 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18b04c88-379f-448d-8443-dbfa7994e228-scripts\") pod \"18b04c88-379f-448d-8443-dbfa7994e228\" (UID: \"18b04c88-379f-448d-8443-dbfa7994e228\") " Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.561991 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/18b04c88-379f-448d-8443-dbfa7994e228-config-data\") pod \"18b04c88-379f-448d-8443-dbfa7994e228\" (UID: \"18b04c88-379f-448d-8443-dbfa7994e228\") " Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.562066 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18b04c88-379f-448d-8443-dbfa7994e228-run-httpd\") pod \"18b04c88-379f-448d-8443-dbfa7994e228\" (UID: \"18b04c88-379f-448d-8443-dbfa7994e228\") " Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.564973 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18b04c88-379f-448d-8443-dbfa7994e228-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "18b04c88-379f-448d-8443-dbfa7994e228" (UID: "18b04c88-379f-448d-8443-dbfa7994e228"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.565190 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18b04c88-379f-448d-8443-dbfa7994e228-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "18b04c88-379f-448d-8443-dbfa7994e228" (UID: "18b04c88-379f-448d-8443-dbfa7994e228"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.582332 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18b04c88-379f-448d-8443-dbfa7994e228-kube-api-access-t27b7" (OuterVolumeSpecName: "kube-api-access-t27b7") pod "18b04c88-379f-448d-8443-dbfa7994e228" (UID: "18b04c88-379f-448d-8443-dbfa7994e228"). InnerVolumeSpecName "kube-api-access-t27b7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.584645 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9a467cdb-1c68-446a-853a-3bb61468677f","Type":"ContainerStarted","Data":"0a7c7d550f3abc0e9c2e1e77e03fe664b85ef655f9d32129bb7c3431ba8cabcb"} Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.584757 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18b04c88-379f-448d-8443-dbfa7994e228-scripts" (OuterVolumeSpecName: "scripts") pod "18b04c88-379f-448d-8443-dbfa7994e228" (UID: "18b04c88-379f-448d-8443-dbfa7994e228"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.604170 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-krdd7" event={"ID":"f7a16be4-8056-4ba8-9720-6503361132f4","Type":"ContainerDied","Data":"ecca4bbca4279fc7e82c5b17dd4b972af90ec196abb22353db4fd6d0cff6865d"} Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.648903 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-krdd7" Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.653645 4795 scope.go:117] "RemoveContainer" containerID="f90c34869a371cd02f74b97dc6ac1d2869ecf9d7bf1f0bd98b84d1a688640cfe" Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.660049 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18b04c88-379f-448d-8443-dbfa7994e228-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "18b04c88-379f-448d-8443-dbfa7994e228" (UID: "18b04c88-379f-448d-8443-dbfa7994e228"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.673117 4795 generic.go:334] "Generic (PLEG): container finished" podID="18b04c88-379f-448d-8443-dbfa7994e228" containerID="acadca80a057d6477b9cb3e2ba412b9341aac67013024dd5dc88de8a257b99dd" exitCode=0 Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.673157 4795 generic.go:334] "Generic (PLEG): container finished" podID="18b04c88-379f-448d-8443-dbfa7994e228" containerID="5d8b40155304018bf6ca1ecd5005f2e907e8913c4e5b3f2ac8afc23ad087375b" exitCode=0 Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.673217 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18b04c88-379f-448d-8443-dbfa7994e228","Type":"ContainerDied","Data":"acadca80a057d6477b9cb3e2ba412b9341aac67013024dd5dc88de8a257b99dd"} Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.673250 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18b04c88-379f-448d-8443-dbfa7994e228","Type":"ContainerDied","Data":"5d8b40155304018bf6ca1ecd5005f2e907e8913c4e5b3f2ac8afc23ad087375b"} Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.673271 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18b04c88-379f-448d-8443-dbfa7994e228","Type":"ContainerDied","Data":"eeba9281b4c59f57287b1161f11a9655c54251b4c3cc16ebfb35e656923b66c2"} Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.673469 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.674779 4795 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18b04c88-379f-448d-8443-dbfa7994e228-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.674809 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t27b7\" (UniqueName: \"kubernetes.io/projected/18b04c88-379f-448d-8443-dbfa7994e228-kube-api-access-t27b7\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.674823 4795 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18b04c88-379f-448d-8443-dbfa7994e228-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.674837 4795 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/18b04c88-379f-448d-8443-dbfa7994e228-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.674852 4795 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18b04c88-379f-448d-8443-dbfa7994e228-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.685786 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4552f7de-b8a7-4456-81ce-1faa3a69d96b","Type":"ContainerStarted","Data":"7cc1b3ae22d6b60f126316e366b143bf002a6d56d9d2b84dc6cdce4ebe4afb21"} Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.782536 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18b04c88-379f-448d-8443-dbfa7994e228-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18b04c88-379f-448d-8443-dbfa7994e228" (UID: 
"18b04c88-379f-448d-8443-dbfa7994e228"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.817807 4795 scope.go:117] "RemoveContainer" containerID="e22e18761fe241c53e28b6dcb8bb1bb770d06a2dc97c916613e5c05a2fc74460" Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.828063 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-krdd7"] Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.842710 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-krdd7"] Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.869127 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18b04c88-379f-448d-8443-dbfa7994e228-config-data" (OuterVolumeSpecName: "config-data") pod "18b04c88-379f-448d-8443-dbfa7994e228" (UID: "18b04c88-379f-448d-8443-dbfa7994e228"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.873028 4795 scope.go:117] "RemoveContainer" containerID="2870b60e0be138bbfaa09cf50cb787560961ef2662473138011bba8d1b61d316" Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.881236 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18b04c88-379f-448d-8443-dbfa7994e228-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.881292 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18b04c88-379f-448d-8443-dbfa7994e228-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:02 crc kubenswrapper[4795]: I1129 08:08:02.972449 4795 scope.go:117] "RemoveContainer" containerID="e6f354fa94c5554bbf1c4808a21431dbb9b715822ccecb0500ca3bb734a54296" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.060206 4795 scope.go:117] "RemoveContainer" containerID="acadca80a057d6477b9cb3e2ba412b9341aac67013024dd5dc88de8a257b99dd" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.066043 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.111348 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.142638 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:08:03 crc kubenswrapper[4795]: E1129 08:08:03.143293 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18b04c88-379f-448d-8443-dbfa7994e228" containerName="ceilometer-notification-agent" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.143313 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="18b04c88-379f-448d-8443-dbfa7994e228" containerName="ceilometer-notification-agent" Nov 29 
08:08:03 crc kubenswrapper[4795]: E1129 08:08:03.143324 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18b04c88-379f-448d-8443-dbfa7994e228" containerName="sg-core" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.143331 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="18b04c88-379f-448d-8443-dbfa7994e228" containerName="sg-core" Nov 29 08:08:03 crc kubenswrapper[4795]: E1129 08:08:03.143367 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18b04c88-379f-448d-8443-dbfa7994e228" containerName="ceilometer-central-agent" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.143375 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="18b04c88-379f-448d-8443-dbfa7994e228" containerName="ceilometer-central-agent" Nov 29 08:08:03 crc kubenswrapper[4795]: E1129 08:08:03.143396 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7a16be4-8056-4ba8-9720-6503361132f4" containerName="init" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.143403 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a16be4-8056-4ba8-9720-6503361132f4" containerName="init" Nov 29 08:08:03 crc kubenswrapper[4795]: E1129 08:08:03.143416 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18b04c88-379f-448d-8443-dbfa7994e228" containerName="proxy-httpd" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.143421 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="18b04c88-379f-448d-8443-dbfa7994e228" containerName="proxy-httpd" Nov 29 08:08:03 crc kubenswrapper[4795]: E1129 08:08:03.143432 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7a16be4-8056-4ba8-9720-6503361132f4" containerName="dnsmasq-dns" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.143438 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a16be4-8056-4ba8-9720-6503361132f4" containerName="dnsmasq-dns" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 
08:08:03.143658 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7a16be4-8056-4ba8-9720-6503361132f4" containerName="dnsmasq-dns" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.143668 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="18b04c88-379f-448d-8443-dbfa7994e228" containerName="proxy-httpd" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.143682 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="18b04c88-379f-448d-8443-dbfa7994e228" containerName="ceilometer-central-agent" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.143690 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="18b04c88-379f-448d-8443-dbfa7994e228" containerName="ceilometer-notification-agent" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.143702 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="18b04c88-379f-448d-8443-dbfa7994e228" containerName="sg-core" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.147317 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.157083 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.157451 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.161615 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.210710 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/feffda9c-4aff-4412-858f-6452ac468a93-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"feffda9c-4aff-4412-858f-6452ac468a93\") " pod="openstack/ceilometer-0" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.211169 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feffda9c-4aff-4412-858f-6452ac468a93-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"feffda9c-4aff-4412-858f-6452ac468a93\") " pod="openstack/ceilometer-0" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.211310 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/feffda9c-4aff-4412-858f-6452ac468a93-log-httpd\") pod \"ceilometer-0\" (UID: \"feffda9c-4aff-4412-858f-6452ac468a93\") " pod="openstack/ceilometer-0" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.211336 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/feffda9c-4aff-4412-858f-6452ac468a93-scripts\") pod \"ceilometer-0\" (UID: \"feffda9c-4aff-4412-858f-6452ac468a93\") " 
pod="openstack/ceilometer-0" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.211456 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwrwd\" (UniqueName: \"kubernetes.io/projected/feffda9c-4aff-4412-858f-6452ac468a93-kube-api-access-gwrwd\") pod \"ceilometer-0\" (UID: \"feffda9c-4aff-4412-858f-6452ac468a93\") " pod="openstack/ceilometer-0" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.211656 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/feffda9c-4aff-4412-858f-6452ac468a93-run-httpd\") pod \"ceilometer-0\" (UID: \"feffda9c-4aff-4412-858f-6452ac468a93\") " pod="openstack/ceilometer-0" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.211756 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feffda9c-4aff-4412-858f-6452ac468a93-config-data\") pod \"ceilometer-0\" (UID: \"feffda9c-4aff-4412-858f-6452ac468a93\") " pod="openstack/ceilometer-0" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.236692 4795 scope.go:117] "RemoveContainer" containerID="5d8b40155304018bf6ca1ecd5005f2e907e8913c4e5b3f2ac8afc23ad087375b" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.315085 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/feffda9c-4aff-4412-858f-6452ac468a93-run-httpd\") pod \"ceilometer-0\" (UID: \"feffda9c-4aff-4412-858f-6452ac468a93\") " pod="openstack/ceilometer-0" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.315158 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feffda9c-4aff-4412-858f-6452ac468a93-config-data\") pod \"ceilometer-0\" (UID: \"feffda9c-4aff-4412-858f-6452ac468a93\") " 
pod="openstack/ceilometer-0" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.315229 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/feffda9c-4aff-4412-858f-6452ac468a93-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"feffda9c-4aff-4412-858f-6452ac468a93\") " pod="openstack/ceilometer-0" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.315261 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feffda9c-4aff-4412-858f-6452ac468a93-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"feffda9c-4aff-4412-858f-6452ac468a93\") " pod="openstack/ceilometer-0" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.315314 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/feffda9c-4aff-4412-858f-6452ac468a93-log-httpd\") pod \"ceilometer-0\" (UID: \"feffda9c-4aff-4412-858f-6452ac468a93\") " pod="openstack/ceilometer-0" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.315333 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/feffda9c-4aff-4412-858f-6452ac468a93-scripts\") pod \"ceilometer-0\" (UID: \"feffda9c-4aff-4412-858f-6452ac468a93\") " pod="openstack/ceilometer-0" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.315359 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwrwd\" (UniqueName: \"kubernetes.io/projected/feffda9c-4aff-4412-858f-6452ac468a93-kube-api-access-gwrwd\") pod \"ceilometer-0\" (UID: \"feffda9c-4aff-4412-858f-6452ac468a93\") " pod="openstack/ceilometer-0" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.316889 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/feffda9c-4aff-4412-858f-6452ac468a93-run-httpd\") pod \"ceilometer-0\" (UID: \"feffda9c-4aff-4412-858f-6452ac468a93\") " pod="openstack/ceilometer-0" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.319210 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/feffda9c-4aff-4412-858f-6452ac468a93-log-httpd\") pod \"ceilometer-0\" (UID: \"feffda9c-4aff-4412-858f-6452ac468a93\") " pod="openstack/ceilometer-0" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.334336 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/feffda9c-4aff-4412-858f-6452ac468a93-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"feffda9c-4aff-4412-858f-6452ac468a93\") " pod="openstack/ceilometer-0" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.335279 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feffda9c-4aff-4412-858f-6452ac468a93-config-data\") pod \"ceilometer-0\" (UID: \"feffda9c-4aff-4412-858f-6452ac468a93\") " pod="openstack/ceilometer-0" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.335914 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/feffda9c-4aff-4412-858f-6452ac468a93-scripts\") pod \"ceilometer-0\" (UID: \"feffda9c-4aff-4412-858f-6452ac468a93\") " pod="openstack/ceilometer-0" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.354615 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feffda9c-4aff-4412-858f-6452ac468a93-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"feffda9c-4aff-4412-858f-6452ac468a93\") " pod="openstack/ceilometer-0" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.370678 4795 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gwrwd\" (UniqueName: \"kubernetes.io/projected/feffda9c-4aff-4412-858f-6452ac468a93-kube-api-access-gwrwd\") pod \"ceilometer-0\" (UID: \"feffda9c-4aff-4412-858f-6452ac468a93\") " pod="openstack/ceilometer-0" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.379812 4795 scope.go:117] "RemoveContainer" containerID="2870b60e0be138bbfaa09cf50cb787560961ef2662473138011bba8d1b61d316" Nov 29 08:08:03 crc kubenswrapper[4795]: E1129 08:08:03.405763 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2870b60e0be138bbfaa09cf50cb787560961ef2662473138011bba8d1b61d316\": container with ID starting with 2870b60e0be138bbfaa09cf50cb787560961ef2662473138011bba8d1b61d316 not found: ID does not exist" containerID="2870b60e0be138bbfaa09cf50cb787560961ef2662473138011bba8d1b61d316" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.405815 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2870b60e0be138bbfaa09cf50cb787560961ef2662473138011bba8d1b61d316"} err="failed to get container status \"2870b60e0be138bbfaa09cf50cb787560961ef2662473138011bba8d1b61d316\": rpc error: code = NotFound desc = could not find container \"2870b60e0be138bbfaa09cf50cb787560961ef2662473138011bba8d1b61d316\": container with ID starting with 2870b60e0be138bbfaa09cf50cb787560961ef2662473138011bba8d1b61d316 not found: ID does not exist" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.405846 4795 scope.go:117] "RemoveContainer" containerID="e6f354fa94c5554bbf1c4808a21431dbb9b715822ccecb0500ca3bb734a54296" Nov 29 08:08:03 crc kubenswrapper[4795]: E1129 08:08:03.410734 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6f354fa94c5554bbf1c4808a21431dbb9b715822ccecb0500ca3bb734a54296\": container with ID starting with 
e6f354fa94c5554bbf1c4808a21431dbb9b715822ccecb0500ca3bb734a54296 not found: ID does not exist" containerID="e6f354fa94c5554bbf1c4808a21431dbb9b715822ccecb0500ca3bb734a54296" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.410787 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6f354fa94c5554bbf1c4808a21431dbb9b715822ccecb0500ca3bb734a54296"} err="failed to get container status \"e6f354fa94c5554bbf1c4808a21431dbb9b715822ccecb0500ca3bb734a54296\": rpc error: code = NotFound desc = could not find container \"e6f354fa94c5554bbf1c4808a21431dbb9b715822ccecb0500ca3bb734a54296\": container with ID starting with e6f354fa94c5554bbf1c4808a21431dbb9b715822ccecb0500ca3bb734a54296 not found: ID does not exist" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.410816 4795 scope.go:117] "RemoveContainer" containerID="acadca80a057d6477b9cb3e2ba412b9341aac67013024dd5dc88de8a257b99dd" Nov 29 08:08:03 crc kubenswrapper[4795]: E1129 08:08:03.412002 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acadca80a057d6477b9cb3e2ba412b9341aac67013024dd5dc88de8a257b99dd\": container with ID starting with acadca80a057d6477b9cb3e2ba412b9341aac67013024dd5dc88de8a257b99dd not found: ID does not exist" containerID="acadca80a057d6477b9cb3e2ba412b9341aac67013024dd5dc88de8a257b99dd" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.412035 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acadca80a057d6477b9cb3e2ba412b9341aac67013024dd5dc88de8a257b99dd"} err="failed to get container status \"acadca80a057d6477b9cb3e2ba412b9341aac67013024dd5dc88de8a257b99dd\": rpc error: code = NotFound desc = could not find container \"acadca80a057d6477b9cb3e2ba412b9341aac67013024dd5dc88de8a257b99dd\": container with ID starting with acadca80a057d6477b9cb3e2ba412b9341aac67013024dd5dc88de8a257b99dd not found: ID does not 
exist" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.412052 4795 scope.go:117] "RemoveContainer" containerID="5d8b40155304018bf6ca1ecd5005f2e907e8913c4e5b3f2ac8afc23ad087375b" Nov 29 08:08:03 crc kubenswrapper[4795]: E1129 08:08:03.413767 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d8b40155304018bf6ca1ecd5005f2e907e8913c4e5b3f2ac8afc23ad087375b\": container with ID starting with 5d8b40155304018bf6ca1ecd5005f2e907e8913c4e5b3f2ac8afc23ad087375b not found: ID does not exist" containerID="5d8b40155304018bf6ca1ecd5005f2e907e8913c4e5b3f2ac8afc23ad087375b" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.413828 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d8b40155304018bf6ca1ecd5005f2e907e8913c4e5b3f2ac8afc23ad087375b"} err="failed to get container status \"5d8b40155304018bf6ca1ecd5005f2e907e8913c4e5b3f2ac8afc23ad087375b\": rpc error: code = NotFound desc = could not find container \"5d8b40155304018bf6ca1ecd5005f2e907e8913c4e5b3f2ac8afc23ad087375b\": container with ID starting with 5d8b40155304018bf6ca1ecd5005f2e907e8913c4e5b3f2ac8afc23ad087375b not found: ID does not exist" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.413855 4795 scope.go:117] "RemoveContainer" containerID="2870b60e0be138bbfaa09cf50cb787560961ef2662473138011bba8d1b61d316" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.420808 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2870b60e0be138bbfaa09cf50cb787560961ef2662473138011bba8d1b61d316"} err="failed to get container status \"2870b60e0be138bbfaa09cf50cb787560961ef2662473138011bba8d1b61d316\": rpc error: code = NotFound desc = could not find container \"2870b60e0be138bbfaa09cf50cb787560961ef2662473138011bba8d1b61d316\": container with ID starting with 2870b60e0be138bbfaa09cf50cb787560961ef2662473138011bba8d1b61d316 not found: ID 
does not exist" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.420857 4795 scope.go:117] "RemoveContainer" containerID="e6f354fa94c5554bbf1c4808a21431dbb9b715822ccecb0500ca3bb734a54296" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.430767 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6f354fa94c5554bbf1c4808a21431dbb9b715822ccecb0500ca3bb734a54296"} err="failed to get container status \"e6f354fa94c5554bbf1c4808a21431dbb9b715822ccecb0500ca3bb734a54296\": rpc error: code = NotFound desc = could not find container \"e6f354fa94c5554bbf1c4808a21431dbb9b715822ccecb0500ca3bb734a54296\": container with ID starting with e6f354fa94c5554bbf1c4808a21431dbb9b715822ccecb0500ca3bb734a54296 not found: ID does not exist" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.430812 4795 scope.go:117] "RemoveContainer" containerID="acadca80a057d6477b9cb3e2ba412b9341aac67013024dd5dc88de8a257b99dd" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.432154 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acadca80a057d6477b9cb3e2ba412b9341aac67013024dd5dc88de8a257b99dd"} err="failed to get container status \"acadca80a057d6477b9cb3e2ba412b9341aac67013024dd5dc88de8a257b99dd\": rpc error: code = NotFound desc = could not find container \"acadca80a057d6477b9cb3e2ba412b9341aac67013024dd5dc88de8a257b99dd\": container with ID starting with acadca80a057d6477b9cb3e2ba412b9341aac67013024dd5dc88de8a257b99dd not found: ID does not exist" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.432197 4795 scope.go:117] "RemoveContainer" containerID="5d8b40155304018bf6ca1ecd5005f2e907e8913c4e5b3f2ac8afc23ad087375b" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.432555 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d8b40155304018bf6ca1ecd5005f2e907e8913c4e5b3f2ac8afc23ad087375b"} err="failed to get container 
status \"5d8b40155304018bf6ca1ecd5005f2e907e8913c4e5b3f2ac8afc23ad087375b\": rpc error: code = NotFound desc = could not find container \"5d8b40155304018bf6ca1ecd5005f2e907e8913c4e5b3f2ac8afc23ad087375b\": container with ID starting with 5d8b40155304018bf6ca1ecd5005f2e907e8913c4e5b3f2ac8afc23ad087375b not found: ID does not exist" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.537157 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.787832 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9a467cdb-1c68-446a-853a-3bb61468677f","Type":"ContainerStarted","Data":"cd27ea5a5bb3ed01e2f88d07f745b42f0ca2cffd325cbf3f25eb938838f94781"} Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.788214 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9a467cdb-1c68-446a-853a-3bb61468677f","Type":"ContainerStarted","Data":"19d21fbe15d942ec51a6123b7dfc4f063e1d9ab55ac5d146ba1be1639fd2e4e0"} Nov 29 08:08:03 crc kubenswrapper[4795]: I1129 08:08:03.819067 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.819043456 podStartE2EDuration="3.819043456s" podCreationTimestamp="2025-11-29 08:08:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:08:03.81636607 +0000 UTC m=+1729.791941860" watchObservedRunningTime="2025-11-29 08:08:03.819043456 +0000 UTC m=+1729.794619246" Nov 29 08:08:04 crc kubenswrapper[4795]: I1129 08:08:04.306389 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18b04c88-379f-448d-8443-dbfa7994e228" path="/var/lib/kubelet/pods/18b04c88-379f-448d-8443-dbfa7994e228/volumes" Nov 29 08:08:04 crc kubenswrapper[4795]: I1129 08:08:04.307757 4795 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7a16be4-8056-4ba8-9720-6503361132f4" path="/var/lib/kubelet/pods/f7a16be4-8056-4ba8-9720-6503361132f4/volumes" Nov 29 08:08:04 crc kubenswrapper[4795]: I1129 08:08:04.308606 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:08:04 crc kubenswrapper[4795]: I1129 08:08:04.818054 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"feffda9c-4aff-4412-858f-6452ac468a93","Type":"ContainerStarted","Data":"ad5efdca882cba70284de18cd3b1f2a77462a3b5b5c5f724f8d0ee6a7d268682"} Nov 29 08:08:04 crc kubenswrapper[4795]: I1129 08:08:04.823395 4795 generic.go:334] "Generic (PLEG): container finished" podID="2ac71396-7101-4507-b4cb-577fc1b95a46" containerID="3549bed3fc9cc52d295eb5ae5d381d8c5c0b7b902b430ef59d544e1efe262dfa" exitCode=0 Nov 29 08:08:04 crc kubenswrapper[4795]: I1129 08:08:04.823470 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-6256x" event={"ID":"2ac71396-7101-4507-b4cb-577fc1b95a46","Type":"ContainerDied","Data":"3549bed3fc9cc52d295eb5ae5d381d8c5c0b7b902b430ef59d544e1efe262dfa"} Nov 29 08:08:05 crc kubenswrapper[4795]: I1129 08:08:05.061222 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:08:05 crc kubenswrapper[4795]: I1129 08:08:05.809447 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 29 08:08:05 crc kubenswrapper[4795]: I1129 08:08:05.809964 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 29 08:08:05 crc kubenswrapper[4795]: I1129 08:08:05.847378 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"feffda9c-4aff-4412-858f-6452ac468a93","Type":"ContainerStarted","Data":"89faf16d30ab41549b3c7ea9c3acd604b4ea207322bbdcfaffd169636199cb74"} Nov 29 08:08:05 crc 
kubenswrapper[4795]: I1129 08:08:05.850755 4795 generic.go:334] "Generic (PLEG): container finished" podID="7fc47eb1-ee22-476b-92c2-4ccb500fe572" containerID="ac773d0bbaf8eb4a577d844451949beb38fb9ca36039c8be1d79c0b248c06635" exitCode=0 Nov 29 08:08:05 crc kubenswrapper[4795]: I1129 08:08:05.851022 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-wx6m7" event={"ID":"7fc47eb1-ee22-476b-92c2-4ccb500fe572","Type":"ContainerDied","Data":"ac773d0bbaf8eb4a577d844451949beb38fb9ca36039c8be1d79c0b248c06635"} Nov 29 08:08:05 crc kubenswrapper[4795]: I1129 08:08:05.855823 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4552f7de-b8a7-4456-81ce-1faa3a69d96b","Type":"ContainerStarted","Data":"777f468745de242a89bb07d681c713941fc528d980006e966b7732f9ccd65536"} Nov 29 08:08:06 crc kubenswrapper[4795]: I1129 08:08:06.388035 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-6256x" Nov 29 08:08:06 crc kubenswrapper[4795]: I1129 08:08:06.535614 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ac71396-7101-4507-b4cb-577fc1b95a46-combined-ca-bundle\") pod \"2ac71396-7101-4507-b4cb-577fc1b95a46\" (UID: \"2ac71396-7101-4507-b4cb-577fc1b95a46\") " Nov 29 08:08:06 crc kubenswrapper[4795]: I1129 08:08:06.535689 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ac71396-7101-4507-b4cb-577fc1b95a46-scripts\") pod \"2ac71396-7101-4507-b4cb-577fc1b95a46\" (UID: \"2ac71396-7101-4507-b4cb-577fc1b95a46\") " Nov 29 08:08:06 crc kubenswrapper[4795]: I1129 08:08:06.535768 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6gzn\" (UniqueName: 
\"kubernetes.io/projected/2ac71396-7101-4507-b4cb-577fc1b95a46-kube-api-access-r6gzn\") pod \"2ac71396-7101-4507-b4cb-577fc1b95a46\" (UID: \"2ac71396-7101-4507-b4cb-577fc1b95a46\") " Nov 29 08:08:06 crc kubenswrapper[4795]: I1129 08:08:06.535848 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ac71396-7101-4507-b4cb-577fc1b95a46-config-data\") pod \"2ac71396-7101-4507-b4cb-577fc1b95a46\" (UID: \"2ac71396-7101-4507-b4cb-577fc1b95a46\") " Nov 29 08:08:06 crc kubenswrapper[4795]: I1129 08:08:06.542777 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ac71396-7101-4507-b4cb-577fc1b95a46-kube-api-access-r6gzn" (OuterVolumeSpecName: "kube-api-access-r6gzn") pod "2ac71396-7101-4507-b4cb-577fc1b95a46" (UID: "2ac71396-7101-4507-b4cb-577fc1b95a46"). InnerVolumeSpecName "kube-api-access-r6gzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:08:06 crc kubenswrapper[4795]: I1129 08:08:06.542874 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ac71396-7101-4507-b4cb-577fc1b95a46-scripts" (OuterVolumeSpecName: "scripts") pod "2ac71396-7101-4507-b4cb-577fc1b95a46" (UID: "2ac71396-7101-4507-b4cb-577fc1b95a46"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:06 crc kubenswrapper[4795]: I1129 08:08:06.581629 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ac71396-7101-4507-b4cb-577fc1b95a46-config-data" (OuterVolumeSpecName: "config-data") pod "2ac71396-7101-4507-b4cb-577fc1b95a46" (UID: "2ac71396-7101-4507-b4cb-577fc1b95a46"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:06 crc kubenswrapper[4795]: I1129 08:08:06.582368 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ac71396-7101-4507-b4cb-577fc1b95a46-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ac71396-7101-4507-b4cb-577fc1b95a46" (UID: "2ac71396-7101-4507-b4cb-577fc1b95a46"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:06 crc kubenswrapper[4795]: I1129 08:08:06.639276 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6gzn\" (UniqueName: \"kubernetes.io/projected/2ac71396-7101-4507-b4cb-577fc1b95a46-kube-api-access-r6gzn\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:06 crc kubenswrapper[4795]: I1129 08:08:06.639314 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ac71396-7101-4507-b4cb-577fc1b95a46-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:06 crc kubenswrapper[4795]: I1129 08:08:06.639329 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ac71396-7101-4507-b4cb-577fc1b95a46-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:06 crc kubenswrapper[4795]: I1129 08:08:06.639363 4795 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ac71396-7101-4507-b4cb-577fc1b95a46-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:06 crc kubenswrapper[4795]: I1129 08:08:06.885652 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"feffda9c-4aff-4412-858f-6452ac468a93","Type":"ContainerStarted","Data":"bd23731ee68d0e60c214898a4938a553356052e90c097ba674e7e37fadb1b958"} Nov 29 08:08:06 crc kubenswrapper[4795]: I1129 08:08:06.888486 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-6256x" Nov 29 08:08:06 crc kubenswrapper[4795]: I1129 08:08:06.888483 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-6256x" event={"ID":"2ac71396-7101-4507-b4cb-577fc1b95a46","Type":"ContainerDied","Data":"57c971f47024c2c08ffbedf05115634391e99cb4ce6e4af65c4ab2fd66621284"} Nov 29 08:08:06 crc kubenswrapper[4795]: I1129 08:08:06.888554 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57c971f47024c2c08ffbedf05115634391e99cb4ce6e4af65c4ab2fd66621284" Nov 29 08:08:06 crc kubenswrapper[4795]: I1129 08:08:06.950224 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 29 08:08:06 crc kubenswrapper[4795]: E1129 08:08:06.950894 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ac71396-7101-4507-b4cb-577fc1b95a46" containerName="nova-cell1-conductor-db-sync" Nov 29 08:08:06 crc kubenswrapper[4795]: I1129 08:08:06.950917 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ac71396-7101-4507-b4cb-577fc1b95a46" containerName="nova-cell1-conductor-db-sync" Nov 29 08:08:06 crc kubenswrapper[4795]: I1129 08:08:06.951227 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ac71396-7101-4507-b4cb-577fc1b95a46" containerName="nova-cell1-conductor-db-sync" Nov 29 08:08:06 crc kubenswrapper[4795]: I1129 08:08:06.952786 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 29 08:08:06 crc kubenswrapper[4795]: I1129 08:08:06.956133 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 29 08:08:06 crc kubenswrapper[4795]: I1129 08:08:06.963739 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 29 08:08:07 crc kubenswrapper[4795]: I1129 08:08:07.052955 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjrrw\" (UniqueName: \"kubernetes.io/projected/a16ef858-2118-49ef-be27-4389ab4c34dc-kube-api-access-kjrrw\") pod \"nova-cell1-conductor-0\" (UID: \"a16ef858-2118-49ef-be27-4389ab4c34dc\") " pod="openstack/nova-cell1-conductor-0" Nov 29 08:08:07 crc kubenswrapper[4795]: I1129 08:08:07.053483 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16ef858-2118-49ef-be27-4389ab4c34dc-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a16ef858-2118-49ef-be27-4389ab4c34dc\") " pod="openstack/nova-cell1-conductor-0" Nov 29 08:08:07 crc kubenswrapper[4795]: I1129 08:08:07.054094 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a16ef858-2118-49ef-be27-4389ab4c34dc-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a16ef858-2118-49ef-be27-4389ab4c34dc\") " pod="openstack/nova-cell1-conductor-0" Nov 29 08:08:07 crc kubenswrapper[4795]: I1129 08:08:07.156762 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16ef858-2118-49ef-be27-4389ab4c34dc-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a16ef858-2118-49ef-be27-4389ab4c34dc\") " pod="openstack/nova-cell1-conductor-0" Nov 29 08:08:07 crc 
kubenswrapper[4795]: I1129 08:08:07.156843 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a16ef858-2118-49ef-be27-4389ab4c34dc-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a16ef858-2118-49ef-be27-4389ab4c34dc\") " pod="openstack/nova-cell1-conductor-0" Nov 29 08:08:07 crc kubenswrapper[4795]: I1129 08:08:07.156892 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjrrw\" (UniqueName: \"kubernetes.io/projected/a16ef858-2118-49ef-be27-4389ab4c34dc-kube-api-access-kjrrw\") pod \"nova-cell1-conductor-0\" (UID: \"a16ef858-2118-49ef-be27-4389ab4c34dc\") " pod="openstack/nova-cell1-conductor-0" Nov 29 08:08:07 crc kubenswrapper[4795]: I1129 08:08:07.163510 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a16ef858-2118-49ef-be27-4389ab4c34dc-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a16ef858-2118-49ef-be27-4389ab4c34dc\") " pod="openstack/nova-cell1-conductor-0" Nov 29 08:08:07 crc kubenswrapper[4795]: I1129 08:08:07.165009 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16ef858-2118-49ef-be27-4389ab4c34dc-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a16ef858-2118-49ef-be27-4389ab4c34dc\") " pod="openstack/nova-cell1-conductor-0" Nov 29 08:08:07 crc kubenswrapper[4795]: I1129 08:08:07.179422 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjrrw\" (UniqueName: \"kubernetes.io/projected/a16ef858-2118-49ef-be27-4389ab4c34dc-kube-api-access-kjrrw\") pod \"nova-cell1-conductor-0\" (UID: \"a16ef858-2118-49ef-be27-4389ab4c34dc\") " pod="openstack/nova-cell1-conductor-0" Nov 29 08:08:07 crc kubenswrapper[4795]: I1129 08:08:07.293330 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 29 08:08:07 crc kubenswrapper[4795]: I1129 08:08:07.569726 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-wx6m7" Nov 29 08:08:07 crc kubenswrapper[4795]: I1129 08:08:07.669753 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fc47eb1-ee22-476b-92c2-4ccb500fe572-scripts\") pod \"7fc47eb1-ee22-476b-92c2-4ccb500fe572\" (UID: \"7fc47eb1-ee22-476b-92c2-4ccb500fe572\") " Nov 29 08:08:07 crc kubenswrapper[4795]: I1129 08:08:07.669881 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fc47eb1-ee22-476b-92c2-4ccb500fe572-config-data\") pod \"7fc47eb1-ee22-476b-92c2-4ccb500fe572\" (UID: \"7fc47eb1-ee22-476b-92c2-4ccb500fe572\") " Nov 29 08:08:07 crc kubenswrapper[4795]: I1129 08:08:07.670033 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc47eb1-ee22-476b-92c2-4ccb500fe572-combined-ca-bundle\") pod \"7fc47eb1-ee22-476b-92c2-4ccb500fe572\" (UID: \"7fc47eb1-ee22-476b-92c2-4ccb500fe572\") " Nov 29 08:08:07 crc kubenswrapper[4795]: I1129 08:08:07.670096 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2g557\" (UniqueName: \"kubernetes.io/projected/7fc47eb1-ee22-476b-92c2-4ccb500fe572-kube-api-access-2g557\") pod \"7fc47eb1-ee22-476b-92c2-4ccb500fe572\" (UID: \"7fc47eb1-ee22-476b-92c2-4ccb500fe572\") " Nov 29 08:08:07 crc kubenswrapper[4795]: I1129 08:08:07.683905 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fc47eb1-ee22-476b-92c2-4ccb500fe572-scripts" (OuterVolumeSpecName: "scripts") pod "7fc47eb1-ee22-476b-92c2-4ccb500fe572" (UID: "7fc47eb1-ee22-476b-92c2-4ccb500fe572"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:07 crc kubenswrapper[4795]: I1129 08:08:07.696798 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fc47eb1-ee22-476b-92c2-4ccb500fe572-kube-api-access-2g557" (OuterVolumeSpecName: "kube-api-access-2g557") pod "7fc47eb1-ee22-476b-92c2-4ccb500fe572" (UID: "7fc47eb1-ee22-476b-92c2-4ccb500fe572"). InnerVolumeSpecName "kube-api-access-2g557". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:08:07 crc kubenswrapper[4795]: I1129 08:08:07.734197 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fc47eb1-ee22-476b-92c2-4ccb500fe572-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7fc47eb1-ee22-476b-92c2-4ccb500fe572" (UID: "7fc47eb1-ee22-476b-92c2-4ccb500fe572"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:07 crc kubenswrapper[4795]: I1129 08:08:07.772627 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc47eb1-ee22-476b-92c2-4ccb500fe572-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:07 crc kubenswrapper[4795]: I1129 08:08:07.772659 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2g557\" (UniqueName: \"kubernetes.io/projected/7fc47eb1-ee22-476b-92c2-4ccb500fe572-kube-api-access-2g557\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:07 crc kubenswrapper[4795]: I1129 08:08:07.772669 4795 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fc47eb1-ee22-476b-92c2-4ccb500fe572-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:07 crc kubenswrapper[4795]: I1129 08:08:07.776721 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/7fc47eb1-ee22-476b-92c2-4ccb500fe572-config-data" (OuterVolumeSpecName: "config-data") pod "7fc47eb1-ee22-476b-92c2-4ccb500fe572" (UID: "7fc47eb1-ee22-476b-92c2-4ccb500fe572"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:07 crc kubenswrapper[4795]: I1129 08:08:07.875460 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fc47eb1-ee22-476b-92c2-4ccb500fe572-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:07 crc kubenswrapper[4795]: I1129 08:08:07.902265 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-wx6m7" event={"ID":"7fc47eb1-ee22-476b-92c2-4ccb500fe572","Type":"ContainerDied","Data":"42a2f8a877ae122cd40593f5e4a07b11b36201fc69622a6f93e48572537f8a27"} Nov 29 08:08:07 crc kubenswrapper[4795]: I1129 08:08:07.902310 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42a2f8a877ae122cd40593f5e4a07b11b36201fc69622a6f93e48572537f8a27" Nov 29 08:08:07 crc kubenswrapper[4795]: I1129 08:08:07.902338 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-wx6m7" Nov 29 08:08:07 crc kubenswrapper[4795]: I1129 08:08:07.913864 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 29 08:08:08 crc kubenswrapper[4795]: I1129 08:08:08.177981 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 29 08:08:08 crc kubenswrapper[4795]: I1129 08:08:08.178445 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6d10ef1f-3b22-4019-a9e6-5ae072648212" containerName="nova-api-log" containerID="cri-o://9689f21b7f18fe8e3a851ee3ba063b06971fff9cf95cbeade85d635cd8b0c26b" gracePeriod=30 Nov 29 08:08:08 crc kubenswrapper[4795]: I1129 08:08:08.178983 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6d10ef1f-3b22-4019-a9e6-5ae072648212" containerName="nova-api-api" containerID="cri-o://bed7b166cf6e44a1c1af812c4b22eb1bbedab6085744da5c8c38cd3ec0cd3a07" gracePeriod=30 Nov 29 08:08:08 crc kubenswrapper[4795]: I1129 08:08:08.195768 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 08:08:08 crc kubenswrapper[4795]: I1129 08:08:08.196009 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="92d4ea06-dad6-4b2a-ace8-b87dbe308e40" containerName="nova-scheduler-scheduler" containerID="cri-o://a4f6aea1a0d61b730f6930f040262834f94192ecf4b467a6c67fdd61126f7ae0" gracePeriod=30 Nov 29 08:08:08 crc kubenswrapper[4795]: I1129 08:08:08.225959 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 08:08:08 crc kubenswrapper[4795]: I1129 08:08:08.226195 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9a467cdb-1c68-446a-853a-3bb61468677f" containerName="nova-metadata-log" 
containerID="cri-o://19d21fbe15d942ec51a6123b7dfc4f063e1d9ab55ac5d146ba1be1639fd2e4e0" gracePeriod=30 Nov 29 08:08:08 crc kubenswrapper[4795]: I1129 08:08:08.226786 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9a467cdb-1c68-446a-853a-3bb61468677f" containerName="nova-metadata-metadata" containerID="cri-o://cd27ea5a5bb3ed01e2f88d07f745b42f0ca2cffd325cbf3f25eb938838f94781" gracePeriod=30 Nov 29 08:08:08 crc kubenswrapper[4795]: I1129 08:08:08.917270 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a16ef858-2118-49ef-be27-4389ab4c34dc","Type":"ContainerStarted","Data":"9270c21ef39d2af4e7fe54a406c681797a7dc6f813d341d609acb1e4f615c27c"} Nov 29 08:08:08 crc kubenswrapper[4795]: I1129 08:08:08.917871 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 29 08:08:08 crc kubenswrapper[4795]: I1129 08:08:08.917893 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a16ef858-2118-49ef-be27-4389ab4c34dc","Type":"ContainerStarted","Data":"051061faca6e0e66f56004fc649873f48d7f97a1d21375b5dab7869ed97de0e4"} Nov 29 08:08:08 crc kubenswrapper[4795]: I1129 08:08:08.921524 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="4552f7de-b8a7-4456-81ce-1faa3a69d96b" containerName="aodh-api" containerID="cri-o://5139803ab35b32904c3a52717017f5244689ecab0487b355a9edc071e8312479" gracePeriod=30 Nov 29 08:08:08 crc kubenswrapper[4795]: I1129 08:08:08.921810 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4552f7de-b8a7-4456-81ce-1faa3a69d96b","Type":"ContainerStarted","Data":"842dd230f2b2c112606b5dd4de601db0d5e31ce4587739f4436acaae674454f7"} Nov 29 08:08:08 crc kubenswrapper[4795]: I1129 08:08:08.921844 4795 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/aodh-0" podUID="4552f7de-b8a7-4456-81ce-1faa3a69d96b" containerName="aodh-listener" containerID="cri-o://842dd230f2b2c112606b5dd4de601db0d5e31ce4587739f4436acaae674454f7" gracePeriod=30 Nov 29 08:08:08 crc kubenswrapper[4795]: I1129 08:08:08.921897 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="4552f7de-b8a7-4456-81ce-1faa3a69d96b" containerName="aodh-notifier" containerID="cri-o://777f468745de242a89bb07d681c713941fc528d980006e966b7732f9ccd65536" gracePeriod=30 Nov 29 08:08:08 crc kubenswrapper[4795]: I1129 08:08:08.921961 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="4552f7de-b8a7-4456-81ce-1faa3a69d96b" containerName="aodh-evaluator" containerID="cri-o://7cc1b3ae22d6b60f126316e366b143bf002a6d56d9d2b84dc6cdce4ebe4afb21" gracePeriod=30 Nov 29 08:08:08 crc kubenswrapper[4795]: I1129 08:08:08.929450 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"feffda9c-4aff-4412-858f-6452ac468a93","Type":"ContainerStarted","Data":"44e6bc8df10e28299654b6189eee348f9f4e28760fe42c995ae4595d76551e9e"} Nov 29 08:08:08 crc kubenswrapper[4795]: I1129 08:08:08.940821 4795 generic.go:334] "Generic (PLEG): container finished" podID="6d10ef1f-3b22-4019-a9e6-5ae072648212" containerID="9689f21b7f18fe8e3a851ee3ba063b06971fff9cf95cbeade85d635cd8b0c26b" exitCode=143 Nov 29 08:08:08 crc kubenswrapper[4795]: I1129 08:08:08.940949 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6d10ef1f-3b22-4019-a9e6-5ae072648212","Type":"ContainerDied","Data":"9689f21b7f18fe8e3a851ee3ba063b06971fff9cf95cbeade85d635cd8b0c26b"} Nov 29 08:08:08 crc kubenswrapper[4795]: I1129 08:08:08.946503 4795 generic.go:334] "Generic (PLEG): container finished" podID="9a467cdb-1c68-446a-853a-3bb61468677f" containerID="cd27ea5a5bb3ed01e2f88d07f745b42f0ca2cffd325cbf3f25eb938838f94781" 
exitCode=0 Nov 29 08:08:08 crc kubenswrapper[4795]: I1129 08:08:08.946543 4795 generic.go:334] "Generic (PLEG): container finished" podID="9a467cdb-1c68-446a-853a-3bb61468677f" containerID="19d21fbe15d942ec51a6123b7dfc4f063e1d9ab55ac5d146ba1be1639fd2e4e0" exitCode=143 Nov 29 08:08:08 crc kubenswrapper[4795]: I1129 08:08:08.946569 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9a467cdb-1c68-446a-853a-3bb61468677f","Type":"ContainerDied","Data":"cd27ea5a5bb3ed01e2f88d07f745b42f0ca2cffd325cbf3f25eb938838f94781"} Nov 29 08:08:08 crc kubenswrapper[4795]: I1129 08:08:08.946693 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9a467cdb-1c68-446a-853a-3bb61468677f","Type":"ContainerDied","Data":"19d21fbe15d942ec51a6123b7dfc4f063e1d9ab55ac5d146ba1be1639fd2e4e0"} Nov 29 08:08:08 crc kubenswrapper[4795]: I1129 08:08:08.946713 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9a467cdb-1c68-446a-853a-3bb61468677f","Type":"ContainerDied","Data":"0a7c7d550f3abc0e9c2e1e77e03fe664b85ef655f9d32129bb7c3431ba8cabcb"} Nov 29 08:08:08 crc kubenswrapper[4795]: I1129 08:08:08.946724 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a7c7d550f3abc0e9c2e1e77e03fe664b85ef655f9d32129bb7c3431ba8cabcb" Nov 29 08:08:08 crc kubenswrapper[4795]: I1129 08:08:08.953995 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.953933472 podStartE2EDuration="2.953933472s" podCreationTimestamp="2025-11-29 08:08:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:08:08.939970387 +0000 UTC m=+1734.915546177" watchObservedRunningTime="2025-11-29 08:08:08.953933472 +0000 UTC m=+1734.929509262" Nov 29 08:08:08 crc kubenswrapper[4795]: I1129 
08:08:08.955312 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 08:08:09 crc kubenswrapper[4795]: I1129 08:08:09.005658 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=3.498249371 podStartE2EDuration="13.005639858s" podCreationTimestamp="2025-11-29 08:07:56 +0000 UTC" firstStartedPulling="2025-11-29 08:07:58.029816037 +0000 UTC m=+1724.005391827" lastFinishedPulling="2025-11-29 08:08:07.537206524 +0000 UTC m=+1733.512782314" observedRunningTime="2025-11-29 08:08:08.971214792 +0000 UTC m=+1734.946790592" watchObservedRunningTime="2025-11-29 08:08:09.005639858 +0000 UTC m=+1734.981215648" Nov 29 08:08:09 crc kubenswrapper[4795]: I1129 08:08:09.116483 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a467cdb-1c68-446a-853a-3bb61468677f-config-data\") pod \"9a467cdb-1c68-446a-853a-3bb61468677f\" (UID: \"9a467cdb-1c68-446a-853a-3bb61468677f\") " Nov 29 08:08:09 crc kubenswrapper[4795]: I1129 08:08:09.116561 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a467cdb-1c68-446a-853a-3bb61468677f-combined-ca-bundle\") pod \"9a467cdb-1c68-446a-853a-3bb61468677f\" (UID: \"9a467cdb-1c68-446a-853a-3bb61468677f\") " Nov 29 08:08:09 crc kubenswrapper[4795]: I1129 08:08:09.116696 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a467cdb-1c68-446a-853a-3bb61468677f-logs\") pod \"9a467cdb-1c68-446a-853a-3bb61468677f\" (UID: \"9a467cdb-1c68-446a-853a-3bb61468677f\") " Nov 29 08:08:09 crc kubenswrapper[4795]: I1129 08:08:09.116841 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9a467cdb-1c68-446a-853a-3bb61468677f-nova-metadata-tls-certs\") pod \"9a467cdb-1c68-446a-853a-3bb61468677f\" (UID: \"9a467cdb-1c68-446a-853a-3bb61468677f\") " Nov 29 08:08:09 crc kubenswrapper[4795]: I1129 08:08:09.116920 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l8tx\" (UniqueName: \"kubernetes.io/projected/9a467cdb-1c68-446a-853a-3bb61468677f-kube-api-access-5l8tx\") pod \"9a467cdb-1c68-446a-853a-3bb61468677f\" (UID: \"9a467cdb-1c68-446a-853a-3bb61468677f\") " Nov 29 08:08:09 crc kubenswrapper[4795]: I1129 08:08:09.117185 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a467cdb-1c68-446a-853a-3bb61468677f-logs" (OuterVolumeSpecName: "logs") pod "9a467cdb-1c68-446a-853a-3bb61468677f" (UID: "9a467cdb-1c68-446a-853a-3bb61468677f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:08:09 crc kubenswrapper[4795]: I1129 08:08:09.117676 4795 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a467cdb-1c68-446a-853a-3bb61468677f-logs\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:09 crc kubenswrapper[4795]: I1129 08:08:09.143443 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a467cdb-1c68-446a-853a-3bb61468677f-kube-api-access-5l8tx" (OuterVolumeSpecName: "kube-api-access-5l8tx") pod "9a467cdb-1c68-446a-853a-3bb61468677f" (UID: "9a467cdb-1c68-446a-853a-3bb61468677f"). InnerVolumeSpecName "kube-api-access-5l8tx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:08:09 crc kubenswrapper[4795]: I1129 08:08:09.183258 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a467cdb-1c68-446a-853a-3bb61468677f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a467cdb-1c68-446a-853a-3bb61468677f" (UID: "9a467cdb-1c68-446a-853a-3bb61468677f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:09 crc kubenswrapper[4795]: I1129 08:08:09.190895 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a467cdb-1c68-446a-853a-3bb61468677f-config-data" (OuterVolumeSpecName: "config-data") pod "9a467cdb-1c68-446a-853a-3bb61468677f" (UID: "9a467cdb-1c68-446a-853a-3bb61468677f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:09 crc kubenswrapper[4795]: I1129 08:08:09.219656 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5l8tx\" (UniqueName: \"kubernetes.io/projected/9a467cdb-1c68-446a-853a-3bb61468677f-kube-api-access-5l8tx\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:09 crc kubenswrapper[4795]: I1129 08:08:09.219694 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a467cdb-1c68-446a-853a-3bb61468677f-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:09 crc kubenswrapper[4795]: I1129 08:08:09.219708 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a467cdb-1c68-446a-853a-3bb61468677f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:09 crc kubenswrapper[4795]: I1129 08:08:09.313988 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a467cdb-1c68-446a-853a-3bb61468677f-nova-metadata-tls-certs" (OuterVolumeSpecName: 
"nova-metadata-tls-certs") pod "9a467cdb-1c68-446a-853a-3bb61468677f" (UID: "9a467cdb-1c68-446a-853a-3bb61468677f"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:09 crc kubenswrapper[4795]: I1129 08:08:09.322212 4795 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a467cdb-1c68-446a-853a-3bb61468677f-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:09 crc kubenswrapper[4795]: I1129 08:08:09.966254 4795 generic.go:334] "Generic (PLEG): container finished" podID="4552f7de-b8a7-4456-81ce-1faa3a69d96b" containerID="7cc1b3ae22d6b60f126316e366b143bf002a6d56d9d2b84dc6cdce4ebe4afb21" exitCode=0 Nov 29 08:08:09 crc kubenswrapper[4795]: I1129 08:08:09.966523 4795 generic.go:334] "Generic (PLEG): container finished" podID="4552f7de-b8a7-4456-81ce-1faa3a69d96b" containerID="5139803ab35b32904c3a52717017f5244689ecab0487b355a9edc071e8312479" exitCode=0 Nov 29 08:08:09 crc kubenswrapper[4795]: I1129 08:08:09.966322 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4552f7de-b8a7-4456-81ce-1faa3a69d96b","Type":"ContainerDied","Data":"7cc1b3ae22d6b60f126316e366b143bf002a6d56d9d2b84dc6cdce4ebe4afb21"} Nov 29 08:08:09 crc kubenswrapper[4795]: I1129 08:08:09.966685 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4552f7de-b8a7-4456-81ce-1faa3a69d96b","Type":"ContainerDied","Data":"5139803ab35b32904c3a52717017f5244689ecab0487b355a9edc071e8312479"} Nov 29 08:08:09 crc kubenswrapper[4795]: I1129 08:08:09.970002 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"feffda9c-4aff-4412-858f-6452ac468a93","Type":"ContainerStarted","Data":"9e60543aaebc0572ec63143a19c0096bd61764185fad4a545ee6604404fb8a7c"} Nov 29 08:08:09 crc kubenswrapper[4795]: I1129 08:08:09.970277 4795 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 08:08:09 crc kubenswrapper[4795]: I1129 08:08:09.970303 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="feffda9c-4aff-4412-858f-6452ac468a93" containerName="ceilometer-central-agent" containerID="cri-o://89faf16d30ab41549b3c7ea9c3acd604b4ea207322bbdcfaffd169636199cb74" gracePeriod=30 Nov 29 08:08:09 crc kubenswrapper[4795]: I1129 08:08:09.970987 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="feffda9c-4aff-4412-858f-6452ac468a93" containerName="proxy-httpd" containerID="cri-o://9e60543aaebc0572ec63143a19c0096bd61764185fad4a545ee6604404fb8a7c" gracePeriod=30 Nov 29 08:08:09 crc kubenswrapper[4795]: I1129 08:08:09.971043 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="feffda9c-4aff-4412-858f-6452ac468a93" containerName="sg-core" containerID="cri-o://44e6bc8df10e28299654b6189eee348f9f4e28760fe42c995ae4595d76551e9e" gracePeriod=30 Nov 29 08:08:09 crc kubenswrapper[4795]: I1129 08:08:09.971076 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="feffda9c-4aff-4412-858f-6452ac468a93" containerName="ceilometer-notification-agent" containerID="cri-o://bd23731ee68d0e60c214898a4938a553356052e90c097ba674e7e37fadb1b958" gracePeriod=30 Nov 29 08:08:10 crc kubenswrapper[4795]: I1129 08:08:10.012731 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.460437342 podStartE2EDuration="7.012705156s" podCreationTimestamp="2025-11-29 08:08:03 +0000 UTC" firstStartedPulling="2025-11-29 08:08:04.832410604 +0000 UTC m=+1730.807986384" lastFinishedPulling="2025-11-29 08:08:09.384678408 +0000 UTC m=+1735.360254198" observedRunningTime="2025-11-29 08:08:10.001850318 +0000 UTC m=+1735.977426128" 
watchObservedRunningTime="2025-11-29 08:08:10.012705156 +0000 UTC m=+1735.988280946" Nov 29 08:08:10 crc kubenswrapper[4795]: I1129 08:08:10.043709 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 08:08:10 crc kubenswrapper[4795]: I1129 08:08:10.062792 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 08:08:10 crc kubenswrapper[4795]: I1129 08:08:10.081003 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 29 08:08:10 crc kubenswrapper[4795]: E1129 08:08:10.081900 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a467cdb-1c68-446a-853a-3bb61468677f" containerName="nova-metadata-log" Nov 29 08:08:10 crc kubenswrapper[4795]: I1129 08:08:10.081929 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a467cdb-1c68-446a-853a-3bb61468677f" containerName="nova-metadata-log" Nov 29 08:08:10 crc kubenswrapper[4795]: E1129 08:08:10.081965 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a467cdb-1c68-446a-853a-3bb61468677f" containerName="nova-metadata-metadata" Nov 29 08:08:10 crc kubenswrapper[4795]: I1129 08:08:10.081976 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a467cdb-1c68-446a-853a-3bb61468677f" containerName="nova-metadata-metadata" Nov 29 08:08:10 crc kubenswrapper[4795]: E1129 08:08:10.081998 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fc47eb1-ee22-476b-92c2-4ccb500fe572" containerName="nova-manage" Nov 29 08:08:10 crc kubenswrapper[4795]: I1129 08:08:10.082009 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fc47eb1-ee22-476b-92c2-4ccb500fe572" containerName="nova-manage" Nov 29 08:08:10 crc kubenswrapper[4795]: I1129 08:08:10.082340 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fc47eb1-ee22-476b-92c2-4ccb500fe572" containerName="nova-manage" Nov 29 08:08:10 crc kubenswrapper[4795]: I1129 08:08:10.082386 
4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a467cdb-1c68-446a-853a-3bb61468677f" containerName="nova-metadata-log" Nov 29 08:08:10 crc kubenswrapper[4795]: I1129 08:08:10.082413 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a467cdb-1c68-446a-853a-3bb61468677f" containerName="nova-metadata-metadata" Nov 29 08:08:10 crc kubenswrapper[4795]: I1129 08:08:10.084280 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 08:08:10 crc kubenswrapper[4795]: I1129 08:08:10.087451 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 29 08:08:10 crc kubenswrapper[4795]: I1129 08:08:10.087761 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 29 08:08:10 crc kubenswrapper[4795]: I1129 08:08:10.092576 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 08:08:10 crc kubenswrapper[4795]: I1129 08:08:10.149852 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwq4k\" (UniqueName: \"kubernetes.io/projected/53a9feb8-d22b-48bd-9ac5-4168d88c8d78-kube-api-access-fwq4k\") pod \"nova-metadata-0\" (UID: \"53a9feb8-d22b-48bd-9ac5-4168d88c8d78\") " pod="openstack/nova-metadata-0" Nov 29 08:08:10 crc kubenswrapper[4795]: I1129 08:08:10.149955 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53a9feb8-d22b-48bd-9ac5-4168d88c8d78-logs\") pod \"nova-metadata-0\" (UID: \"53a9feb8-d22b-48bd-9ac5-4168d88c8d78\") " pod="openstack/nova-metadata-0" Nov 29 08:08:10 crc kubenswrapper[4795]: I1129 08:08:10.150081 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/53a9feb8-d22b-48bd-9ac5-4168d88c8d78-config-data\") pod \"nova-metadata-0\" (UID: \"53a9feb8-d22b-48bd-9ac5-4168d88c8d78\") " pod="openstack/nova-metadata-0" Nov 29 08:08:10 crc kubenswrapper[4795]: I1129 08:08:10.150166 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53a9feb8-d22b-48bd-9ac5-4168d88c8d78-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"53a9feb8-d22b-48bd-9ac5-4168d88c8d78\") " pod="openstack/nova-metadata-0" Nov 29 08:08:10 crc kubenswrapper[4795]: I1129 08:08:10.150233 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/53a9feb8-d22b-48bd-9ac5-4168d88c8d78-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"53a9feb8-d22b-48bd-9ac5-4168d88c8d78\") " pod="openstack/nova-metadata-0" Nov 29 08:08:10 crc kubenswrapper[4795]: I1129 08:08:10.253098 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwq4k\" (UniqueName: \"kubernetes.io/projected/53a9feb8-d22b-48bd-9ac5-4168d88c8d78-kube-api-access-fwq4k\") pod \"nova-metadata-0\" (UID: \"53a9feb8-d22b-48bd-9ac5-4168d88c8d78\") " pod="openstack/nova-metadata-0" Nov 29 08:08:10 crc kubenswrapper[4795]: I1129 08:08:10.253278 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53a9feb8-d22b-48bd-9ac5-4168d88c8d78-logs\") pod \"nova-metadata-0\" (UID: \"53a9feb8-d22b-48bd-9ac5-4168d88c8d78\") " pod="openstack/nova-metadata-0" Nov 29 08:08:10 crc kubenswrapper[4795]: I1129 08:08:10.253391 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53a9feb8-d22b-48bd-9ac5-4168d88c8d78-config-data\") pod \"nova-metadata-0\" (UID: 
\"53a9feb8-d22b-48bd-9ac5-4168d88c8d78\") " pod="openstack/nova-metadata-0" Nov 29 08:08:10 crc kubenswrapper[4795]: I1129 08:08:10.253882 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53a9feb8-d22b-48bd-9ac5-4168d88c8d78-logs\") pod \"nova-metadata-0\" (UID: \"53a9feb8-d22b-48bd-9ac5-4168d88c8d78\") " pod="openstack/nova-metadata-0" Nov 29 08:08:10 crc kubenswrapper[4795]: I1129 08:08:10.254868 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53a9feb8-d22b-48bd-9ac5-4168d88c8d78-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"53a9feb8-d22b-48bd-9ac5-4168d88c8d78\") " pod="openstack/nova-metadata-0" Nov 29 08:08:10 crc kubenswrapper[4795]: I1129 08:08:10.255022 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/53a9feb8-d22b-48bd-9ac5-4168d88c8d78-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"53a9feb8-d22b-48bd-9ac5-4168d88c8d78\") " pod="openstack/nova-metadata-0" Nov 29 08:08:10 crc kubenswrapper[4795]: I1129 08:08:10.261740 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53a9feb8-d22b-48bd-9ac5-4168d88c8d78-config-data\") pod \"nova-metadata-0\" (UID: \"53a9feb8-d22b-48bd-9ac5-4168d88c8d78\") " pod="openstack/nova-metadata-0" Nov 29 08:08:10 crc kubenswrapper[4795]: I1129 08:08:10.262180 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/53a9feb8-d22b-48bd-9ac5-4168d88c8d78-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"53a9feb8-d22b-48bd-9ac5-4168d88c8d78\") " pod="openstack/nova-metadata-0" Nov 29 08:08:10 crc kubenswrapper[4795]: I1129 08:08:10.264480 4795 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53a9feb8-d22b-48bd-9ac5-4168d88c8d78-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"53a9feb8-d22b-48bd-9ac5-4168d88c8d78\") " pod="openstack/nova-metadata-0" Nov 29 08:08:10 crc kubenswrapper[4795]: I1129 08:08:10.278133 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwq4k\" (UniqueName: \"kubernetes.io/projected/53a9feb8-d22b-48bd-9ac5-4168d88c8d78-kube-api-access-fwq4k\") pod \"nova-metadata-0\" (UID: \"53a9feb8-d22b-48bd-9ac5-4168d88c8d78\") " pod="openstack/nova-metadata-0" Nov 29 08:08:10 crc kubenswrapper[4795]: I1129 08:08:10.296111 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a467cdb-1c68-446a-853a-3bb61468677f" path="/var/lib/kubelet/pods/9a467cdb-1c68-446a-853a-3bb61468677f/volumes" Nov 29 08:08:10 crc kubenswrapper[4795]: I1129 08:08:10.545461 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 08:08:10 crc kubenswrapper[4795]: E1129 08:08:10.908784 4795 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a4f6aea1a0d61b730f6930f040262834f94192ecf4b467a6c67fdd61126f7ae0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 29 08:08:10 crc kubenswrapper[4795]: E1129 08:08:10.910621 4795 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a4f6aea1a0d61b730f6930f040262834f94192ecf4b467a6c67fdd61126f7ae0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 29 08:08:10 crc kubenswrapper[4795]: E1129 08:08:10.912399 4795 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register 
an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a4f6aea1a0d61b730f6930f040262834f94192ecf4b467a6c67fdd61126f7ae0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 29 08:08:10 crc kubenswrapper[4795]: E1129 08:08:10.912457 4795 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="92d4ea06-dad6-4b2a-ace8-b87dbe308e40" containerName="nova-scheduler-scheduler" Nov 29 08:08:10 crc kubenswrapper[4795]: I1129 08:08:10.986276 4795 generic.go:334] "Generic (PLEG): container finished" podID="4552f7de-b8a7-4456-81ce-1faa3a69d96b" containerID="777f468745de242a89bb07d681c713941fc528d980006e966b7732f9ccd65536" exitCode=0 Nov 29 08:08:10 crc kubenswrapper[4795]: I1129 08:08:10.986356 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4552f7de-b8a7-4456-81ce-1faa3a69d96b","Type":"ContainerDied","Data":"777f468745de242a89bb07d681c713941fc528d980006e966b7732f9ccd65536"} Nov 29 08:08:10 crc kubenswrapper[4795]: I1129 08:08:10.991106 4795 generic.go:334] "Generic (PLEG): container finished" podID="feffda9c-4aff-4412-858f-6452ac468a93" containerID="9e60543aaebc0572ec63143a19c0096bd61764185fad4a545ee6604404fb8a7c" exitCode=0 Nov 29 08:08:10 crc kubenswrapper[4795]: I1129 08:08:10.991136 4795 generic.go:334] "Generic (PLEG): container finished" podID="feffda9c-4aff-4412-858f-6452ac468a93" containerID="44e6bc8df10e28299654b6189eee348f9f4e28760fe42c995ae4595d76551e9e" exitCode=2 Nov 29 08:08:10 crc kubenswrapper[4795]: I1129 08:08:10.991148 4795 generic.go:334] "Generic (PLEG): container finished" podID="feffda9c-4aff-4412-858f-6452ac468a93" containerID="bd23731ee68d0e60c214898a4938a553356052e90c097ba674e7e37fadb1b958" exitCode=0 Nov 29 08:08:10 crc kubenswrapper[4795]: I1129 08:08:10.991171 4795 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ceilometer-0" event={"ID":"feffda9c-4aff-4412-858f-6452ac468a93","Type":"ContainerDied","Data":"9e60543aaebc0572ec63143a19c0096bd61764185fad4a545ee6604404fb8a7c"} Nov 29 08:08:10 crc kubenswrapper[4795]: I1129 08:08:10.991198 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"feffda9c-4aff-4412-858f-6452ac468a93","Type":"ContainerDied","Data":"44e6bc8df10e28299654b6189eee348f9f4e28760fe42c995ae4595d76551e9e"} Nov 29 08:08:10 crc kubenswrapper[4795]: I1129 08:08:10.991212 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"feffda9c-4aff-4412-858f-6452ac468a93","Type":"ContainerDied","Data":"bd23731ee68d0e60c214898a4938a553356052e90c097ba674e7e37fadb1b958"} Nov 29 08:08:11 crc kubenswrapper[4795]: W1129 08:08:11.060563 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53a9feb8_d22b_48bd_9ac5_4168d88c8d78.slice/crio-f0091f01d9155a0a15c91a32b06ecdb7b4637c528dacaaf3a617bca27c8848d2 WatchSource:0}: Error finding container f0091f01d9155a0a15c91a32b06ecdb7b4637c528dacaaf3a617bca27c8848d2: Status 404 returned error can't find the container with id f0091f01d9155a0a15c91a32b06ecdb7b4637c528dacaaf3a617bca27c8848d2 Nov 29 08:08:11 crc kubenswrapper[4795]: I1129 08:08:11.069785 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 08:08:11 crc kubenswrapper[4795]: I1129 08:08:11.941307 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:08:11 crc kubenswrapper[4795]: I1129 08:08:11.941640 4795 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:08:12 crc kubenswrapper[4795]: I1129 08:08:12.004869 4795 generic.go:334] "Generic (PLEG): container finished" podID="6d10ef1f-3b22-4019-a9e6-5ae072648212" containerID="bed7b166cf6e44a1c1af812c4b22eb1bbedab6085744da5c8c38cd3ec0cd3a07" exitCode=0 Nov 29 08:08:12 crc kubenswrapper[4795]: I1129 08:08:12.004974 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6d10ef1f-3b22-4019-a9e6-5ae072648212","Type":"ContainerDied","Data":"bed7b166cf6e44a1c1af812c4b22eb1bbedab6085744da5c8c38cd3ec0cd3a07"} Nov 29 08:08:12 crc kubenswrapper[4795]: I1129 08:08:12.005006 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6d10ef1f-3b22-4019-a9e6-5ae072648212","Type":"ContainerDied","Data":"50308beba8d7db2d624305f05b3bd3aa2dacfe49ea9a2ad192fdc2a51726d5e2"} Nov 29 08:08:12 crc kubenswrapper[4795]: I1129 08:08:12.005019 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50308beba8d7db2d624305f05b3bd3aa2dacfe49ea9a2ad192fdc2a51726d5e2" Nov 29 08:08:12 crc kubenswrapper[4795]: I1129 08:08:12.025655 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"53a9feb8-d22b-48bd-9ac5-4168d88c8d78","Type":"ContainerStarted","Data":"edfe05456087ccca5e9569d1a1702b10b8f704808655ceabb74c5748607bc411"} Nov 29 08:08:12 crc kubenswrapper[4795]: I1129 08:08:12.025704 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"53a9feb8-d22b-48bd-9ac5-4168d88c8d78","Type":"ContainerStarted","Data":"125845c828337de0391d2b3bc6204768aa543b0da8e7bbf046b1716e10bf770c"} Nov 29 08:08:12 crc kubenswrapper[4795]: I1129 
08:08:12.025718 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"53a9feb8-d22b-48bd-9ac5-4168d88c8d78","Type":"ContainerStarted","Data":"f0091f01d9155a0a15c91a32b06ecdb7b4637c528dacaaf3a617bca27c8848d2"} Nov 29 08:08:12 crc kubenswrapper[4795]: I1129 08:08:12.029909 4795 generic.go:334] "Generic (PLEG): container finished" podID="92d4ea06-dad6-4b2a-ace8-b87dbe308e40" containerID="a4f6aea1a0d61b730f6930f040262834f94192ecf4b467a6c67fdd61126f7ae0" exitCode=0 Nov 29 08:08:12 crc kubenswrapper[4795]: I1129 08:08:12.029943 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"92d4ea06-dad6-4b2a-ace8-b87dbe308e40","Type":"ContainerDied","Data":"a4f6aea1a0d61b730f6930f040262834f94192ecf4b467a6c67fdd61126f7ae0"} Nov 29 08:08:12 crc kubenswrapper[4795]: I1129 08:08:12.030064 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 08:08:12 crc kubenswrapper[4795]: I1129 08:08:12.062280 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.062258278 podStartE2EDuration="2.062258278s" podCreationTimestamp="2025-11-29 08:08:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:08:12.04750604 +0000 UTC m=+1738.023081840" watchObservedRunningTime="2025-11-29 08:08:12.062258278 +0000 UTC m=+1738.037834068" Nov 29 08:08:12 crc kubenswrapper[4795]: I1129 08:08:12.100809 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d10ef1f-3b22-4019-a9e6-5ae072648212-config-data\") pod \"6d10ef1f-3b22-4019-a9e6-5ae072648212\" (UID: \"6d10ef1f-3b22-4019-a9e6-5ae072648212\") " Nov 29 08:08:12 crc kubenswrapper[4795]: I1129 08:08:12.100932 4795 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d10ef1f-3b22-4019-a9e6-5ae072648212-combined-ca-bundle\") pod \"6d10ef1f-3b22-4019-a9e6-5ae072648212\" (UID: \"6d10ef1f-3b22-4019-a9e6-5ae072648212\") " Nov 29 08:08:12 crc kubenswrapper[4795]: I1129 08:08:12.101540 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d10ef1f-3b22-4019-a9e6-5ae072648212-logs\") pod \"6d10ef1f-3b22-4019-a9e6-5ae072648212\" (UID: \"6d10ef1f-3b22-4019-a9e6-5ae072648212\") " Nov 29 08:08:12 crc kubenswrapper[4795]: I1129 08:08:12.101646 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpsgq\" (UniqueName: \"kubernetes.io/projected/6d10ef1f-3b22-4019-a9e6-5ae072648212-kube-api-access-gpsgq\") pod \"6d10ef1f-3b22-4019-a9e6-5ae072648212\" (UID: \"6d10ef1f-3b22-4019-a9e6-5ae072648212\") " Nov 29 08:08:12 crc kubenswrapper[4795]: I1129 08:08:12.102358 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d10ef1f-3b22-4019-a9e6-5ae072648212-logs" (OuterVolumeSpecName: "logs") pod "6d10ef1f-3b22-4019-a9e6-5ae072648212" (UID: "6d10ef1f-3b22-4019-a9e6-5ae072648212"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:08:12 crc kubenswrapper[4795]: I1129 08:08:12.103077 4795 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d10ef1f-3b22-4019-a9e6-5ae072648212-logs\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:12 crc kubenswrapper[4795]: I1129 08:08:12.107502 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d10ef1f-3b22-4019-a9e6-5ae072648212-kube-api-access-gpsgq" (OuterVolumeSpecName: "kube-api-access-gpsgq") pod "6d10ef1f-3b22-4019-a9e6-5ae072648212" (UID: "6d10ef1f-3b22-4019-a9e6-5ae072648212"). 
InnerVolumeSpecName "kube-api-access-gpsgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:08:12 crc kubenswrapper[4795]: I1129 08:08:12.143541 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d10ef1f-3b22-4019-a9e6-5ae072648212-config-data" (OuterVolumeSpecName: "config-data") pod "6d10ef1f-3b22-4019-a9e6-5ae072648212" (UID: "6d10ef1f-3b22-4019-a9e6-5ae072648212"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:12 crc kubenswrapper[4795]: I1129 08:08:12.145529 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d10ef1f-3b22-4019-a9e6-5ae072648212-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d10ef1f-3b22-4019-a9e6-5ae072648212" (UID: "6d10ef1f-3b22-4019-a9e6-5ae072648212"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:12 crc kubenswrapper[4795]: I1129 08:08:12.206627 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpsgq\" (UniqueName: \"kubernetes.io/projected/6d10ef1f-3b22-4019-a9e6-5ae072648212-kube-api-access-gpsgq\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:12 crc kubenswrapper[4795]: I1129 08:08:12.207271 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d10ef1f-3b22-4019-a9e6-5ae072648212-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:12 crc kubenswrapper[4795]: I1129 08:08:12.207285 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d10ef1f-3b22-4019-a9e6-5ae072648212-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:12 crc kubenswrapper[4795]: I1129 08:08:12.252372 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 08:08:12 crc kubenswrapper[4795]: I1129 08:08:12.318088 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zzxr\" (UniqueName: \"kubernetes.io/projected/92d4ea06-dad6-4b2a-ace8-b87dbe308e40-kube-api-access-7zzxr\") pod \"92d4ea06-dad6-4b2a-ace8-b87dbe308e40\" (UID: \"92d4ea06-dad6-4b2a-ace8-b87dbe308e40\") " Nov 29 08:08:12 crc kubenswrapper[4795]: I1129 08:08:12.318375 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d4ea06-dad6-4b2a-ace8-b87dbe308e40-combined-ca-bundle\") pod \"92d4ea06-dad6-4b2a-ace8-b87dbe308e40\" (UID: \"92d4ea06-dad6-4b2a-ace8-b87dbe308e40\") " Nov 29 08:08:12 crc kubenswrapper[4795]: I1129 08:08:12.318653 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d4ea06-dad6-4b2a-ace8-b87dbe308e40-config-data\") pod \"92d4ea06-dad6-4b2a-ace8-b87dbe308e40\" (UID: \"92d4ea06-dad6-4b2a-ace8-b87dbe308e40\") " Nov 29 08:08:12 crc kubenswrapper[4795]: I1129 08:08:12.325682 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92d4ea06-dad6-4b2a-ace8-b87dbe308e40-kube-api-access-7zzxr" (OuterVolumeSpecName: "kube-api-access-7zzxr") pod "92d4ea06-dad6-4b2a-ace8-b87dbe308e40" (UID: "92d4ea06-dad6-4b2a-ace8-b87dbe308e40"). InnerVolumeSpecName "kube-api-access-7zzxr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:08:12 crc kubenswrapper[4795]: I1129 08:08:12.341815 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zzxr\" (UniqueName: \"kubernetes.io/projected/92d4ea06-dad6-4b2a-ace8-b87dbe308e40-kube-api-access-7zzxr\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:12 crc kubenswrapper[4795]: I1129 08:08:12.361009 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92d4ea06-dad6-4b2a-ace8-b87dbe308e40-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92d4ea06-dad6-4b2a-ace8-b87dbe308e40" (UID: "92d4ea06-dad6-4b2a-ace8-b87dbe308e40"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:12 crc kubenswrapper[4795]: I1129 08:08:12.371109 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92d4ea06-dad6-4b2a-ace8-b87dbe308e40-config-data" (OuterVolumeSpecName: "config-data") pod "92d4ea06-dad6-4b2a-ace8-b87dbe308e40" (UID: "92d4ea06-dad6-4b2a-ace8-b87dbe308e40"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:12 crc kubenswrapper[4795]: I1129 08:08:12.444100 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d4ea06-dad6-4b2a-ace8-b87dbe308e40-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:12 crc kubenswrapper[4795]: I1129 08:08:12.444146 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d4ea06-dad6-4b2a-ace8-b87dbe308e40-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.048899 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"92d4ea06-dad6-4b2a-ace8-b87dbe308e40","Type":"ContainerDied","Data":"f359656805a2033aba2116ba59d48b27abf5e94080c95f21ec223bf39f865e26"} Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.049567 4795 scope.go:117] "RemoveContainer" containerID="a4f6aea1a0d61b730f6930f040262834f94192ecf4b467a6c67fdd61126f7ae0" Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.048971 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.049120 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.123663 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.141778 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.163134 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.177908 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.193103 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 29 08:08:13 crc kubenswrapper[4795]: E1129 08:08:13.193755 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d10ef1f-3b22-4019-a9e6-5ae072648212" containerName="nova-api-api" Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.193783 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d10ef1f-3b22-4019-a9e6-5ae072648212" containerName="nova-api-api" Nov 29 08:08:13 crc kubenswrapper[4795]: E1129 08:08:13.193806 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d10ef1f-3b22-4019-a9e6-5ae072648212" containerName="nova-api-log" Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.193815 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d10ef1f-3b22-4019-a9e6-5ae072648212" containerName="nova-api-log" Nov 29 08:08:13 crc kubenswrapper[4795]: E1129 08:08:13.193859 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92d4ea06-dad6-4b2a-ace8-b87dbe308e40" containerName="nova-scheduler-scheduler" Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.193870 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="92d4ea06-dad6-4b2a-ace8-b87dbe308e40" containerName="nova-scheduler-scheduler" Nov 
29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.194159 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d10ef1f-3b22-4019-a9e6-5ae072648212" containerName="nova-api-log" Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.194187 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d10ef1f-3b22-4019-a9e6-5ae072648212" containerName="nova-api-api" Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.194214 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="92d4ea06-dad6-4b2a-ace8-b87dbe308e40" containerName="nova-scheduler-scheduler" Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.195552 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.203054 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.206952 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.221216 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.223564 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.227924 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.238481 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.268766 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bblxf\" (UniqueName: \"kubernetes.io/projected/b4e7a65c-2285-410a-8759-75a41c63d9b1-kube-api-access-bblxf\") pod \"nova-api-0\" (UID: \"b4e7a65c-2285-410a-8759-75a41c63d9b1\") " pod="openstack/nova-api-0" Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.268874 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4e7a65c-2285-410a-8759-75a41c63d9b1-logs\") pod \"nova-api-0\" (UID: \"b4e7a65c-2285-410a-8759-75a41c63d9b1\") " pod="openstack/nova-api-0" Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.268936 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4e7a65c-2285-410a-8759-75a41c63d9b1-config-data\") pod \"nova-api-0\" (UID: \"b4e7a65c-2285-410a-8759-75a41c63d9b1\") " pod="openstack/nova-api-0" Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.268977 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4e7a65c-2285-410a-8759-75a41c63d9b1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b4e7a65c-2285-410a-8759-75a41c63d9b1\") " pod="openstack/nova-api-0" Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.374328 4795 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d-config-data\") pod \"nova-scheduler-0\" (UID: \"c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d\") " pod="openstack/nova-scheduler-0" Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.374539 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drsfm\" (UniqueName: \"kubernetes.io/projected/c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d-kube-api-access-drsfm\") pod \"nova-scheduler-0\" (UID: \"c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d\") " pod="openstack/nova-scheduler-0" Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.375044 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bblxf\" (UniqueName: \"kubernetes.io/projected/b4e7a65c-2285-410a-8759-75a41c63d9b1-kube-api-access-bblxf\") pod \"nova-api-0\" (UID: \"b4e7a65c-2285-410a-8759-75a41c63d9b1\") " pod="openstack/nova-api-0" Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.375426 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4e7a65c-2285-410a-8759-75a41c63d9b1-logs\") pod \"nova-api-0\" (UID: \"b4e7a65c-2285-410a-8759-75a41c63d9b1\") " pod="openstack/nova-api-0" Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.375718 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4e7a65c-2285-410a-8759-75a41c63d9b1-config-data\") pod \"nova-api-0\" (UID: \"b4e7a65c-2285-410a-8759-75a41c63d9b1\") " pod="openstack/nova-api-0" Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.375849 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4e7a65c-2285-410a-8759-75a41c63d9b1-logs\") pod \"nova-api-0\" (UID: 
\"b4e7a65c-2285-410a-8759-75a41c63d9b1\") " pod="openstack/nova-api-0" Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.375895 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4e7a65c-2285-410a-8759-75a41c63d9b1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b4e7a65c-2285-410a-8759-75a41c63d9b1\") " pod="openstack/nova-api-0" Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.376335 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d\") " pod="openstack/nova-scheduler-0" Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.380901 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4e7a65c-2285-410a-8759-75a41c63d9b1-config-data\") pod \"nova-api-0\" (UID: \"b4e7a65c-2285-410a-8759-75a41c63d9b1\") " pod="openstack/nova-api-0" Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.381132 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4e7a65c-2285-410a-8759-75a41c63d9b1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b4e7a65c-2285-410a-8759-75a41c63d9b1\") " pod="openstack/nova-api-0" Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.393060 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bblxf\" (UniqueName: \"kubernetes.io/projected/b4e7a65c-2285-410a-8759-75a41c63d9b1-kube-api-access-bblxf\") pod \"nova-api-0\" (UID: \"b4e7a65c-2285-410a-8759-75a41c63d9b1\") " pod="openstack/nova-api-0" Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.478659 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d-config-data\") pod \"nova-scheduler-0\" (UID: \"c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d\") " pod="openstack/nova-scheduler-0" Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.478714 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drsfm\" (UniqueName: \"kubernetes.io/projected/c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d-kube-api-access-drsfm\") pod \"nova-scheduler-0\" (UID: \"c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d\") " pod="openstack/nova-scheduler-0" Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.478922 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d\") " pod="openstack/nova-scheduler-0" Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.483070 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d\") " pod="openstack/nova-scheduler-0" Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.484425 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d-config-data\") pod \"nova-scheduler-0\" (UID: \"c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d\") " pod="openstack/nova-scheduler-0" Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.496196 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drsfm\" (UniqueName: \"kubernetes.io/projected/c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d-kube-api-access-drsfm\") pod \"nova-scheduler-0\" (UID: 
\"c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d\") " pod="openstack/nova-scheduler-0" Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.523365 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 08:08:13 crc kubenswrapper[4795]: I1129 08:08:13.556219 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 08:08:14 crc kubenswrapper[4795]: I1129 08:08:14.077644 4795 generic.go:334] "Generic (PLEG): container finished" podID="feffda9c-4aff-4412-858f-6452ac468a93" containerID="89faf16d30ab41549b3c7ea9c3acd604b4ea207322bbdcfaffd169636199cb74" exitCode=0 Nov 29 08:08:14 crc kubenswrapper[4795]: I1129 08:08:14.077767 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"feffda9c-4aff-4412-858f-6452ac468a93","Type":"ContainerDied","Data":"89faf16d30ab41549b3c7ea9c3acd604b4ea207322bbdcfaffd169636199cb74"} Nov 29 08:08:14 crc kubenswrapper[4795]: I1129 08:08:14.078349 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"feffda9c-4aff-4412-858f-6452ac468a93","Type":"ContainerDied","Data":"ad5efdca882cba70284de18cd3b1f2a77462a3b5b5c5f724f8d0ee6a7d268682"} Nov 29 08:08:14 crc kubenswrapper[4795]: I1129 08:08:14.078368 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad5efdca882cba70284de18cd3b1f2a77462a3b5b5c5f724f8d0ee6a7d268682" Nov 29 08:08:14 crc kubenswrapper[4795]: W1129 08:08:14.151796 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc33ae8e1_1fb9_4053_9dcf_76b7a5bb370d.slice/crio-49e0060a8c4a31d20924171938a82495896f080054a854e0e4837ae407bd3cd8 WatchSource:0}: Error finding container 49e0060a8c4a31d20924171938a82495896f080054a854e0e4837ae407bd3cd8: Status 404 returned error can't find the container with id 
49e0060a8c4a31d20924171938a82495896f080054a854e0e4837ae407bd3cd8 Nov 29 08:08:14 crc kubenswrapper[4795]: I1129 08:08:14.172924 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 29 08:08:14 crc kubenswrapper[4795]: I1129 08:08:14.198412 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 08:08:14 crc kubenswrapper[4795]: I1129 08:08:14.302975 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d10ef1f-3b22-4019-a9e6-5ae072648212" path="/var/lib/kubelet/pods/6d10ef1f-3b22-4019-a9e6-5ae072648212/volumes" Nov 29 08:08:14 crc kubenswrapper[4795]: I1129 08:08:14.304395 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92d4ea06-dad6-4b2a-ace8-b87dbe308e40" path="/var/lib/kubelet/pods/92d4ea06-dad6-4b2a-ace8-b87dbe308e40/volumes" Nov 29 08:08:14 crc kubenswrapper[4795]: I1129 08:08:14.320804 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:08:14 crc kubenswrapper[4795]: I1129 08:08:14.407362 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/feffda9c-4aff-4412-858f-6452ac468a93-sg-core-conf-yaml\") pod \"feffda9c-4aff-4412-858f-6452ac468a93\" (UID: \"feffda9c-4aff-4412-858f-6452ac468a93\") " Nov 29 08:08:14 crc kubenswrapper[4795]: I1129 08:08:14.407459 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feffda9c-4aff-4412-858f-6452ac468a93-combined-ca-bundle\") pod \"feffda9c-4aff-4412-858f-6452ac468a93\" (UID: \"feffda9c-4aff-4412-858f-6452ac468a93\") " Nov 29 08:08:14 crc kubenswrapper[4795]: I1129 08:08:14.407535 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/feffda9c-4aff-4412-858f-6452ac468a93-scripts\") pod 
\"feffda9c-4aff-4412-858f-6452ac468a93\" (UID: \"feffda9c-4aff-4412-858f-6452ac468a93\") " Nov 29 08:08:14 crc kubenswrapper[4795]: I1129 08:08:14.407680 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/feffda9c-4aff-4412-858f-6452ac468a93-run-httpd\") pod \"feffda9c-4aff-4412-858f-6452ac468a93\" (UID: \"feffda9c-4aff-4412-858f-6452ac468a93\") " Nov 29 08:08:14 crc kubenswrapper[4795]: I1129 08:08:14.407878 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/feffda9c-4aff-4412-858f-6452ac468a93-log-httpd\") pod \"feffda9c-4aff-4412-858f-6452ac468a93\" (UID: \"feffda9c-4aff-4412-858f-6452ac468a93\") " Nov 29 08:08:14 crc kubenswrapper[4795]: I1129 08:08:14.407950 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwrwd\" (UniqueName: \"kubernetes.io/projected/feffda9c-4aff-4412-858f-6452ac468a93-kube-api-access-gwrwd\") pod \"feffda9c-4aff-4412-858f-6452ac468a93\" (UID: \"feffda9c-4aff-4412-858f-6452ac468a93\") " Nov 29 08:08:14 crc kubenswrapper[4795]: I1129 08:08:14.408089 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feffda9c-4aff-4412-858f-6452ac468a93-config-data\") pod \"feffda9c-4aff-4412-858f-6452ac468a93\" (UID: \"feffda9c-4aff-4412-858f-6452ac468a93\") " Nov 29 08:08:14 crc kubenswrapper[4795]: I1129 08:08:14.408119 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/feffda9c-4aff-4412-858f-6452ac468a93-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "feffda9c-4aff-4412-858f-6452ac468a93" (UID: "feffda9c-4aff-4412-858f-6452ac468a93"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:08:14 crc kubenswrapper[4795]: I1129 08:08:14.408766 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/feffda9c-4aff-4412-858f-6452ac468a93-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "feffda9c-4aff-4412-858f-6452ac468a93" (UID: "feffda9c-4aff-4412-858f-6452ac468a93"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:08:14 crc kubenswrapper[4795]: I1129 08:08:14.411471 4795 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/feffda9c-4aff-4412-858f-6452ac468a93-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:14 crc kubenswrapper[4795]: I1129 08:08:14.411489 4795 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/feffda9c-4aff-4412-858f-6452ac468a93-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:14 crc kubenswrapper[4795]: I1129 08:08:14.415087 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/feffda9c-4aff-4412-858f-6452ac468a93-kube-api-access-gwrwd" (OuterVolumeSpecName: "kube-api-access-gwrwd") pod "feffda9c-4aff-4412-858f-6452ac468a93" (UID: "feffda9c-4aff-4412-858f-6452ac468a93"). InnerVolumeSpecName "kube-api-access-gwrwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:08:14 crc kubenswrapper[4795]: I1129 08:08:14.417393 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feffda9c-4aff-4412-858f-6452ac468a93-scripts" (OuterVolumeSpecName: "scripts") pod "feffda9c-4aff-4412-858f-6452ac468a93" (UID: "feffda9c-4aff-4412-858f-6452ac468a93"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:14 crc kubenswrapper[4795]: I1129 08:08:14.465500 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feffda9c-4aff-4412-858f-6452ac468a93-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "feffda9c-4aff-4412-858f-6452ac468a93" (UID: "feffda9c-4aff-4412-858f-6452ac468a93"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:14 crc kubenswrapper[4795]: I1129 08:08:14.513997 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwrwd\" (UniqueName: \"kubernetes.io/projected/feffda9c-4aff-4412-858f-6452ac468a93-kube-api-access-gwrwd\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:14 crc kubenswrapper[4795]: I1129 08:08:14.514038 4795 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/feffda9c-4aff-4412-858f-6452ac468a93-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:14 crc kubenswrapper[4795]: I1129 08:08:14.514047 4795 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/feffda9c-4aff-4412-858f-6452ac468a93-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:14 crc kubenswrapper[4795]: I1129 08:08:14.519840 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feffda9c-4aff-4412-858f-6452ac468a93-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "feffda9c-4aff-4412-858f-6452ac468a93" (UID: "feffda9c-4aff-4412-858f-6452ac468a93"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:14 crc kubenswrapper[4795]: I1129 08:08:14.559414 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feffda9c-4aff-4412-858f-6452ac468a93-config-data" (OuterVolumeSpecName: "config-data") pod "feffda9c-4aff-4412-858f-6452ac468a93" (UID: "feffda9c-4aff-4412-858f-6452ac468a93"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:14 crc kubenswrapper[4795]: I1129 08:08:14.616326 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feffda9c-4aff-4412-858f-6452ac468a93-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:14 crc kubenswrapper[4795]: I1129 08:08:14.616539 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feffda9c-4aff-4412-858f-6452ac468a93-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.119635 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b4e7a65c-2285-410a-8759-75a41c63d9b1","Type":"ContainerStarted","Data":"405089a7f64e11e1f9c1842dfadf678d9cf93b53969589b26bd04e0568184bc2"} Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.119991 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b4e7a65c-2285-410a-8759-75a41c63d9b1","Type":"ContainerStarted","Data":"a76156f16136e23a3359778dc457f70a0c6ccf09563dc20edd423b0cdc122bdf"} Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.120009 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b4e7a65c-2285-410a-8759-75a41c63d9b1","Type":"ContainerStarted","Data":"c557839e6f82748ad17f82ddf610b578aa4b955af831835612ce95a4ad3b9e31"} Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.121947 4795 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d","Type":"ContainerStarted","Data":"8b8df53bdfcffe0c5cb033cb2d4dff6b323c346e5043cf654875b0cc3bbe396a"} Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.121983 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d","Type":"ContainerStarted","Data":"49e0060a8c4a31d20924171938a82495896f080054a854e0e4837ae407bd3cd8"} Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.121956 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.154852 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.15209913 podStartE2EDuration="2.15209913s" podCreationTimestamp="2025-11-29 08:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:08:15.141684465 +0000 UTC m=+1741.117260255" watchObservedRunningTime="2025-11-29 08:08:15.15209913 +0000 UTC m=+1741.127674930" Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.170905 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.170888443 podStartE2EDuration="2.170888443s" podCreationTimestamp="2025-11-29 08:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:08:15.156878996 +0000 UTC m=+1741.132454786" watchObservedRunningTime="2025-11-29 08:08:15.170888443 +0000 UTC m=+1741.146464223" Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.203348 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 
08:08:15.223123 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.239487 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:08:15 crc kubenswrapper[4795]: E1129 08:08:15.240160 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feffda9c-4aff-4412-858f-6452ac468a93" containerName="ceilometer-central-agent" Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.240183 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="feffda9c-4aff-4412-858f-6452ac468a93" containerName="ceilometer-central-agent" Nov 29 08:08:15 crc kubenswrapper[4795]: E1129 08:08:15.240205 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feffda9c-4aff-4412-858f-6452ac468a93" containerName="proxy-httpd" Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.240212 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="feffda9c-4aff-4412-858f-6452ac468a93" containerName="proxy-httpd" Nov 29 08:08:15 crc kubenswrapper[4795]: E1129 08:08:15.240223 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feffda9c-4aff-4412-858f-6452ac468a93" containerName="ceilometer-notification-agent" Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.240230 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="feffda9c-4aff-4412-858f-6452ac468a93" containerName="ceilometer-notification-agent" Nov 29 08:08:15 crc kubenswrapper[4795]: E1129 08:08:15.240241 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feffda9c-4aff-4412-858f-6452ac468a93" containerName="sg-core" Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.240246 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="feffda9c-4aff-4412-858f-6452ac468a93" containerName="sg-core" Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.240473 4795 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="feffda9c-4aff-4412-858f-6452ac468a93" containerName="ceilometer-notification-agent" Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.244212 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="feffda9c-4aff-4412-858f-6452ac468a93" containerName="sg-core" Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.244270 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="feffda9c-4aff-4412-858f-6452ac468a93" containerName="ceilometer-central-agent" Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.244306 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="feffda9c-4aff-4412-858f-6452ac468a93" containerName="proxy-httpd" Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.248238 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.252280 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.254298 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.254489 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.336281 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dccb767f-ff08-4edf-aa79-f2a09633d95d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dccb767f-ff08-4edf-aa79-f2a09633d95d\") " pod="openstack/ceilometer-0" Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.337103 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dccb767f-ff08-4edf-aa79-f2a09633d95d-scripts\") pod 
\"ceilometer-0\" (UID: \"dccb767f-ff08-4edf-aa79-f2a09633d95d\") " pod="openstack/ceilometer-0" Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.338179 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dccb767f-ff08-4edf-aa79-f2a09633d95d-log-httpd\") pod \"ceilometer-0\" (UID: \"dccb767f-ff08-4edf-aa79-f2a09633d95d\") " pod="openstack/ceilometer-0" Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.338322 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dccb767f-ff08-4edf-aa79-f2a09633d95d-config-data\") pod \"ceilometer-0\" (UID: \"dccb767f-ff08-4edf-aa79-f2a09633d95d\") " pod="openstack/ceilometer-0" Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.338402 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dccb767f-ff08-4edf-aa79-f2a09633d95d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dccb767f-ff08-4edf-aa79-f2a09633d95d\") " pod="openstack/ceilometer-0" Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.338443 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dccb767f-ff08-4edf-aa79-f2a09633d95d-run-httpd\") pod \"ceilometer-0\" (UID: \"dccb767f-ff08-4edf-aa79-f2a09633d95d\") " pod="openstack/ceilometer-0" Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.338494 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzfxx\" (UniqueName: \"kubernetes.io/projected/dccb767f-ff08-4edf-aa79-f2a09633d95d-kube-api-access-lzfxx\") pod \"ceilometer-0\" (UID: \"dccb767f-ff08-4edf-aa79-f2a09633d95d\") " pod="openstack/ceilometer-0" Nov 29 08:08:15 crc 
kubenswrapper[4795]: I1129 08:08:15.440230 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dccb767f-ff08-4edf-aa79-f2a09633d95d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dccb767f-ff08-4edf-aa79-f2a09633d95d\") " pod="openstack/ceilometer-0" Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.440326 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dccb767f-ff08-4edf-aa79-f2a09633d95d-scripts\") pod \"ceilometer-0\" (UID: \"dccb767f-ff08-4edf-aa79-f2a09633d95d\") " pod="openstack/ceilometer-0" Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.440414 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dccb767f-ff08-4edf-aa79-f2a09633d95d-log-httpd\") pod \"ceilometer-0\" (UID: \"dccb767f-ff08-4edf-aa79-f2a09633d95d\") " pod="openstack/ceilometer-0" Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.440456 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dccb767f-ff08-4edf-aa79-f2a09633d95d-config-data\") pod \"ceilometer-0\" (UID: \"dccb767f-ff08-4edf-aa79-f2a09633d95d\") " pod="openstack/ceilometer-0" Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.440486 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dccb767f-ff08-4edf-aa79-f2a09633d95d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dccb767f-ff08-4edf-aa79-f2a09633d95d\") " pod="openstack/ceilometer-0" Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.440506 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dccb767f-ff08-4edf-aa79-f2a09633d95d-run-httpd\") pod 
\"ceilometer-0\" (UID: \"dccb767f-ff08-4edf-aa79-f2a09633d95d\") " pod="openstack/ceilometer-0" Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.440534 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzfxx\" (UniqueName: \"kubernetes.io/projected/dccb767f-ff08-4edf-aa79-f2a09633d95d-kube-api-access-lzfxx\") pod \"ceilometer-0\" (UID: \"dccb767f-ff08-4edf-aa79-f2a09633d95d\") " pod="openstack/ceilometer-0" Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.442286 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dccb767f-ff08-4edf-aa79-f2a09633d95d-log-httpd\") pod \"ceilometer-0\" (UID: \"dccb767f-ff08-4edf-aa79-f2a09633d95d\") " pod="openstack/ceilometer-0" Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.442312 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dccb767f-ff08-4edf-aa79-f2a09633d95d-run-httpd\") pod \"ceilometer-0\" (UID: \"dccb767f-ff08-4edf-aa79-f2a09633d95d\") " pod="openstack/ceilometer-0" Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.447037 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dccb767f-ff08-4edf-aa79-f2a09633d95d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dccb767f-ff08-4edf-aa79-f2a09633d95d\") " pod="openstack/ceilometer-0" Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.447738 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dccb767f-ff08-4edf-aa79-f2a09633d95d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dccb767f-ff08-4edf-aa79-f2a09633d95d\") " pod="openstack/ceilometer-0" Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.447746 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/dccb767f-ff08-4edf-aa79-f2a09633d95d-config-data\") pod \"ceilometer-0\" (UID: \"dccb767f-ff08-4edf-aa79-f2a09633d95d\") " pod="openstack/ceilometer-0" Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.448407 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dccb767f-ff08-4edf-aa79-f2a09633d95d-scripts\") pod \"ceilometer-0\" (UID: \"dccb767f-ff08-4edf-aa79-f2a09633d95d\") " pod="openstack/ceilometer-0" Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.457178 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzfxx\" (UniqueName: \"kubernetes.io/projected/dccb767f-ff08-4edf-aa79-f2a09633d95d-kube-api-access-lzfxx\") pod \"ceilometer-0\" (UID: \"dccb767f-ff08-4edf-aa79-f2a09633d95d\") " pod="openstack/ceilometer-0" Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.546088 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.546149 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 29 08:08:15 crc kubenswrapper[4795]: I1129 08:08:15.591673 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:08:16 crc kubenswrapper[4795]: I1129 08:08:16.110236 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:08:16 crc kubenswrapper[4795]: I1129 08:08:16.142578 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dccb767f-ff08-4edf-aa79-f2a09633d95d","Type":"ContainerStarted","Data":"aa1563d357872ebac5397cc9fa8bbac8cf46ce9ee7d1606a894cb3465d8a96b6"} Nov 29 08:08:16 crc kubenswrapper[4795]: I1129 08:08:16.290384 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="feffda9c-4aff-4412-858f-6452ac468a93" path="/var/lib/kubelet/pods/feffda9c-4aff-4412-858f-6452ac468a93/volumes" Nov 29 08:08:17 crc kubenswrapper[4795]: I1129 08:08:17.328694 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 29 08:08:18 crc kubenswrapper[4795]: I1129 08:08:18.557084 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 29 08:08:20 crc kubenswrapper[4795]: I1129 08:08:20.546179 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 29 08:08:20 crc kubenswrapper[4795]: I1129 08:08:20.546849 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 29 08:08:21 crc kubenswrapper[4795]: I1129 08:08:21.561007 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="53a9feb8-d22b-48bd-9ac5-4168d88c8d78" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.246:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 29 08:08:21 crc kubenswrapper[4795]: I1129 08:08:21.561032 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" 
podUID="53a9feb8-d22b-48bd-9ac5-4168d88c8d78" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.246:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 29 08:08:23 crc kubenswrapper[4795]: I1129 08:08:23.523836 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 29 08:08:23 crc kubenswrapper[4795]: I1129 08:08:23.524994 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 29 08:08:23 crc kubenswrapper[4795]: I1129 08:08:23.557366 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 29 08:08:23 crc kubenswrapper[4795]: I1129 08:08:23.597021 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 29 08:08:24 crc kubenswrapper[4795]: I1129 08:08:24.313020 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 29 08:08:24 crc kubenswrapper[4795]: I1129 08:08:24.606808 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b4e7a65c-2285-410a-8759-75a41c63d9b1" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.247:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 29 08:08:24 crc kubenswrapper[4795]: I1129 08:08:24.606819 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b4e7a65c-2285-410a-8759-75a41c63d9b1" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.247:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 29 08:08:27 crc kubenswrapper[4795]: I1129 08:08:27.272490 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"dccb767f-ff08-4edf-aa79-f2a09633d95d","Type":"ContainerStarted","Data":"741ceda2511c1b18cbb0c8e80cf44feda077b3ab8a1524435b0690333dd65bda"} Nov 29 08:08:28 crc kubenswrapper[4795]: I1129 08:08:28.830849 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 29 08:08:28 crc kubenswrapper[4795]: I1129 08:08:28.887772 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3085d3da-ff4f-4b37-be0a-3f1754a7fbf0-combined-ca-bundle\") pod \"3085d3da-ff4f-4b37-be0a-3f1754a7fbf0\" (UID: \"3085d3da-ff4f-4b37-be0a-3f1754a7fbf0\") " Nov 29 08:08:28 crc kubenswrapper[4795]: I1129 08:08:28.887913 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l67xl\" (UniqueName: \"kubernetes.io/projected/3085d3da-ff4f-4b37-be0a-3f1754a7fbf0-kube-api-access-l67xl\") pod \"3085d3da-ff4f-4b37-be0a-3f1754a7fbf0\" (UID: \"3085d3da-ff4f-4b37-be0a-3f1754a7fbf0\") " Nov 29 08:08:28 crc kubenswrapper[4795]: I1129 08:08:28.888073 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3085d3da-ff4f-4b37-be0a-3f1754a7fbf0-config-data\") pod \"3085d3da-ff4f-4b37-be0a-3f1754a7fbf0\" (UID: \"3085d3da-ff4f-4b37-be0a-3f1754a7fbf0\") " Nov 29 08:08:28 crc kubenswrapper[4795]: I1129 08:08:28.901223 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3085d3da-ff4f-4b37-be0a-3f1754a7fbf0-kube-api-access-l67xl" (OuterVolumeSpecName: "kube-api-access-l67xl") pod "3085d3da-ff4f-4b37-be0a-3f1754a7fbf0" (UID: "3085d3da-ff4f-4b37-be0a-3f1754a7fbf0"). InnerVolumeSpecName "kube-api-access-l67xl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:08:28 crc kubenswrapper[4795]: I1129 08:08:28.928344 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3085d3da-ff4f-4b37-be0a-3f1754a7fbf0-config-data" (OuterVolumeSpecName: "config-data") pod "3085d3da-ff4f-4b37-be0a-3f1754a7fbf0" (UID: "3085d3da-ff4f-4b37-be0a-3f1754a7fbf0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:28 crc kubenswrapper[4795]: I1129 08:08:28.954754 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3085d3da-ff4f-4b37-be0a-3f1754a7fbf0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3085d3da-ff4f-4b37-be0a-3f1754a7fbf0" (UID: "3085d3da-ff4f-4b37-be0a-3f1754a7fbf0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:28 crc kubenswrapper[4795]: I1129 08:08:28.994156 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3085d3da-ff4f-4b37-be0a-3f1754a7fbf0-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:28 crc kubenswrapper[4795]: I1129 08:08:28.994203 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3085d3da-ff4f-4b37-be0a-3f1754a7fbf0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:28 crc kubenswrapper[4795]: I1129 08:08:28.994220 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l67xl\" (UniqueName: \"kubernetes.io/projected/3085d3da-ff4f-4b37-be0a-3f1754a7fbf0-kube-api-access-l67xl\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:29 crc kubenswrapper[4795]: I1129 08:08:29.314038 4795 generic.go:334] "Generic (PLEG): container finished" podID="3085d3da-ff4f-4b37-be0a-3f1754a7fbf0" containerID="9438cceefedbe627af13cc105670b054eaa22dae99e7507c69bd095fc311c8ef" 
exitCode=137 Nov 29 08:08:29 crc kubenswrapper[4795]: I1129 08:08:29.314101 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3085d3da-ff4f-4b37-be0a-3f1754a7fbf0","Type":"ContainerDied","Data":"9438cceefedbe627af13cc105670b054eaa22dae99e7507c69bd095fc311c8ef"} Nov 29 08:08:29 crc kubenswrapper[4795]: I1129 08:08:29.314185 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 29 08:08:29 crc kubenswrapper[4795]: I1129 08:08:29.314455 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3085d3da-ff4f-4b37-be0a-3f1754a7fbf0","Type":"ContainerDied","Data":"12bbf9a73c120a50824091d91901d9ee07305141903bf390fd493e1ab71f4d1a"} Nov 29 08:08:29 crc kubenswrapper[4795]: I1129 08:08:29.314474 4795 scope.go:117] "RemoveContainer" containerID="9438cceefedbe627af13cc105670b054eaa22dae99e7507c69bd095fc311c8ef" Nov 29 08:08:29 crc kubenswrapper[4795]: I1129 08:08:29.351502 4795 scope.go:117] "RemoveContainer" containerID="9438cceefedbe627af13cc105670b054eaa22dae99e7507c69bd095fc311c8ef" Nov 29 08:08:29 crc kubenswrapper[4795]: E1129 08:08:29.356050 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9438cceefedbe627af13cc105670b054eaa22dae99e7507c69bd095fc311c8ef\": container with ID starting with 9438cceefedbe627af13cc105670b054eaa22dae99e7507c69bd095fc311c8ef not found: ID does not exist" containerID="9438cceefedbe627af13cc105670b054eaa22dae99e7507c69bd095fc311c8ef" Nov 29 08:08:29 crc kubenswrapper[4795]: I1129 08:08:29.356112 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9438cceefedbe627af13cc105670b054eaa22dae99e7507c69bd095fc311c8ef"} err="failed to get container status \"9438cceefedbe627af13cc105670b054eaa22dae99e7507c69bd095fc311c8ef\": rpc error: code = NotFound 
desc = could not find container \"9438cceefedbe627af13cc105670b054eaa22dae99e7507c69bd095fc311c8ef\": container with ID starting with 9438cceefedbe627af13cc105670b054eaa22dae99e7507c69bd095fc311c8ef not found: ID does not exist" Nov 29 08:08:29 crc kubenswrapper[4795]: I1129 08:08:29.364319 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 29 08:08:29 crc kubenswrapper[4795]: I1129 08:08:29.379341 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 29 08:08:29 crc kubenswrapper[4795]: I1129 08:08:29.400771 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 29 08:08:29 crc kubenswrapper[4795]: E1129 08:08:29.401839 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3085d3da-ff4f-4b37-be0a-3f1754a7fbf0" containerName="nova-cell1-novncproxy-novncproxy" Nov 29 08:08:29 crc kubenswrapper[4795]: I1129 08:08:29.403703 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="3085d3da-ff4f-4b37-be0a-3f1754a7fbf0" containerName="nova-cell1-novncproxy-novncproxy" Nov 29 08:08:29 crc kubenswrapper[4795]: I1129 08:08:29.404171 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="3085d3da-ff4f-4b37-be0a-3f1754a7fbf0" containerName="nova-cell1-novncproxy-novncproxy" Nov 29 08:08:29 crc kubenswrapper[4795]: I1129 08:08:29.405503 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 29 08:08:29 crc kubenswrapper[4795]: I1129 08:08:29.410188 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 29 08:08:29 crc kubenswrapper[4795]: I1129 08:08:29.410726 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 29 08:08:29 crc kubenswrapper[4795]: I1129 08:08:29.412365 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 29 08:08:29 crc kubenswrapper[4795]: I1129 08:08:29.435860 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 29 08:08:29 crc kubenswrapper[4795]: I1129 08:08:29.508829 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9e160dc-75ec-49d4-8145-76df59c61dda-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e9e160dc-75ec-49d4-8145-76df59c61dda\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 08:08:29 crc kubenswrapper[4795]: I1129 08:08:29.509159 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9e160dc-75ec-49d4-8145-76df59c61dda-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e9e160dc-75ec-49d4-8145-76df59c61dda\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 08:08:29 crc kubenswrapper[4795]: I1129 08:08:29.509310 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nktvz\" (UniqueName: \"kubernetes.io/projected/e9e160dc-75ec-49d4-8145-76df59c61dda-kube-api-access-nktvz\") pod \"nova-cell1-novncproxy-0\" (UID: \"e9e160dc-75ec-49d4-8145-76df59c61dda\") " pod="openstack/nova-cell1-novncproxy-0" Nov 
29 08:08:29 crc kubenswrapper[4795]: I1129 08:08:29.509433 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9e160dc-75ec-49d4-8145-76df59c61dda-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e9e160dc-75ec-49d4-8145-76df59c61dda\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 08:08:29 crc kubenswrapper[4795]: I1129 08:08:29.509616 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9e160dc-75ec-49d4-8145-76df59c61dda-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e9e160dc-75ec-49d4-8145-76df59c61dda\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 08:08:29 crc kubenswrapper[4795]: I1129 08:08:29.611870 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9e160dc-75ec-49d4-8145-76df59c61dda-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e9e160dc-75ec-49d4-8145-76df59c61dda\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 08:08:29 crc kubenswrapper[4795]: I1129 08:08:29.612272 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nktvz\" (UniqueName: \"kubernetes.io/projected/e9e160dc-75ec-49d4-8145-76df59c61dda-kube-api-access-nktvz\") pod \"nova-cell1-novncproxy-0\" (UID: \"e9e160dc-75ec-49d4-8145-76df59c61dda\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 08:08:29 crc kubenswrapper[4795]: I1129 08:08:29.612403 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9e160dc-75ec-49d4-8145-76df59c61dda-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e9e160dc-75ec-49d4-8145-76df59c61dda\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 08:08:29 crc kubenswrapper[4795]: I1129 
08:08:29.612789 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9e160dc-75ec-49d4-8145-76df59c61dda-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e9e160dc-75ec-49d4-8145-76df59c61dda\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 08:08:29 crc kubenswrapper[4795]: I1129 08:08:29.613293 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9e160dc-75ec-49d4-8145-76df59c61dda-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e9e160dc-75ec-49d4-8145-76df59c61dda\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 08:08:29 crc kubenswrapper[4795]: I1129 08:08:29.618635 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9e160dc-75ec-49d4-8145-76df59c61dda-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e9e160dc-75ec-49d4-8145-76df59c61dda\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 08:08:29 crc kubenswrapper[4795]: I1129 08:08:29.619084 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9e160dc-75ec-49d4-8145-76df59c61dda-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e9e160dc-75ec-49d4-8145-76df59c61dda\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 08:08:29 crc kubenswrapper[4795]: I1129 08:08:29.619203 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9e160dc-75ec-49d4-8145-76df59c61dda-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e9e160dc-75ec-49d4-8145-76df59c61dda\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 08:08:29 crc kubenswrapper[4795]: I1129 08:08:29.630840 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-nktvz\" (UniqueName: \"kubernetes.io/projected/e9e160dc-75ec-49d4-8145-76df59c61dda-kube-api-access-nktvz\") pod \"nova-cell1-novncproxy-0\" (UID: \"e9e160dc-75ec-49d4-8145-76df59c61dda\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 08:08:29 crc kubenswrapper[4795]: I1129 08:08:29.635253 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9e160dc-75ec-49d4-8145-76df59c61dda-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e9e160dc-75ec-49d4-8145-76df59c61dda\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 08:08:29 crc kubenswrapper[4795]: I1129 08:08:29.728284 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 29 08:08:30 crc kubenswrapper[4795]: I1129 08:08:30.206101 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 29 08:08:30 crc kubenswrapper[4795]: I1129 08:08:30.302201 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3085d3da-ff4f-4b37-be0a-3f1754a7fbf0" path="/var/lib/kubelet/pods/3085d3da-ff4f-4b37-be0a-3f1754a7fbf0/volumes" Nov 29 08:08:30 crc kubenswrapper[4795]: I1129 08:08:30.337433 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e9e160dc-75ec-49d4-8145-76df59c61dda","Type":"ContainerStarted","Data":"2c23888164cbe5256e5ca018755f86c29c656ccd039123dfc145758367a93f0d"} Nov 29 08:08:30 crc kubenswrapper[4795]: I1129 08:08:30.551488 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 29 08:08:30 crc kubenswrapper[4795]: I1129 08:08:30.551873 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 29 08:08:30 crc kubenswrapper[4795]: I1129 08:08:30.558780 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-metadata-0" Nov 29 08:08:30 crc kubenswrapper[4795]: I1129 08:08:30.559326 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 29 08:08:31 crc kubenswrapper[4795]: I1129 08:08:31.355960 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e9e160dc-75ec-49d4-8145-76df59c61dda","Type":"ContainerStarted","Data":"e7b967720535800b70f4c5a9421ed56e8282ea7313263eb737077e0d6097cc23"} Nov 29 08:08:31 crc kubenswrapper[4795]: I1129 08:08:31.381214 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.381195301 podStartE2EDuration="2.381195301s" podCreationTimestamp="2025-11-29 08:08:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:08:31.374940384 +0000 UTC m=+1757.350516174" watchObservedRunningTime="2025-11-29 08:08:31.381195301 +0000 UTC m=+1757.356771091" Nov 29 08:08:33 crc kubenswrapper[4795]: I1129 08:08:33.528292 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 29 08:08:33 crc kubenswrapper[4795]: I1129 08:08:33.528872 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 29 08:08:33 crc kubenswrapper[4795]: I1129 08:08:33.529181 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 29 08:08:33 crc kubenswrapper[4795]: I1129 08:08:33.529328 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 29 08:08:33 crc kubenswrapper[4795]: I1129 08:08:33.533066 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 29 08:08:33 crc kubenswrapper[4795]: I1129 08:08:33.533189 4795 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 29 08:08:33 crc kubenswrapper[4795]: I1129 08:08:33.750019 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-5xkn9"] Nov 29 08:08:33 crc kubenswrapper[4795]: I1129 08:08:33.754506 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-5xkn9" Nov 29 08:08:33 crc kubenswrapper[4795]: I1129 08:08:33.776470 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-5xkn9"] Nov 29 08:08:33 crc kubenswrapper[4795]: I1129 08:08:33.809669 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fd74474-a041-4d6f-84f9-90d8161e943e-config\") pod \"dnsmasq-dns-6d99f6bc7f-5xkn9\" (UID: \"3fd74474-a041-4d6f-84f9-90d8161e943e\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-5xkn9" Nov 29 08:08:33 crc kubenswrapper[4795]: I1129 08:08:33.809767 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lpzx\" (UniqueName: \"kubernetes.io/projected/3fd74474-a041-4d6f-84f9-90d8161e943e-kube-api-access-2lpzx\") pod \"dnsmasq-dns-6d99f6bc7f-5xkn9\" (UID: \"3fd74474-a041-4d6f-84f9-90d8161e943e\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-5xkn9" Nov 29 08:08:33 crc kubenswrapper[4795]: I1129 08:08:33.809821 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fd74474-a041-4d6f-84f9-90d8161e943e-dns-svc\") pod \"dnsmasq-dns-6d99f6bc7f-5xkn9\" (UID: \"3fd74474-a041-4d6f-84f9-90d8161e943e\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-5xkn9" Nov 29 08:08:33 crc kubenswrapper[4795]: I1129 08:08:33.809868 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/3fd74474-a041-4d6f-84f9-90d8161e943e-dns-swift-storage-0\") pod \"dnsmasq-dns-6d99f6bc7f-5xkn9\" (UID: \"3fd74474-a041-4d6f-84f9-90d8161e943e\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-5xkn9" Nov 29 08:08:33 crc kubenswrapper[4795]: I1129 08:08:33.809893 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fd74474-a041-4d6f-84f9-90d8161e943e-ovsdbserver-sb\") pod \"dnsmasq-dns-6d99f6bc7f-5xkn9\" (UID: \"3fd74474-a041-4d6f-84f9-90d8161e943e\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-5xkn9" Nov 29 08:08:33 crc kubenswrapper[4795]: I1129 08:08:33.809910 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fd74474-a041-4d6f-84f9-90d8161e943e-ovsdbserver-nb\") pod \"dnsmasq-dns-6d99f6bc7f-5xkn9\" (UID: \"3fd74474-a041-4d6f-84f9-90d8161e943e\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-5xkn9" Nov 29 08:08:33 crc kubenswrapper[4795]: I1129 08:08:33.912271 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fd74474-a041-4d6f-84f9-90d8161e943e-config\") pod \"dnsmasq-dns-6d99f6bc7f-5xkn9\" (UID: \"3fd74474-a041-4d6f-84f9-90d8161e943e\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-5xkn9" Nov 29 08:08:33 crc kubenswrapper[4795]: I1129 08:08:33.912383 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lpzx\" (UniqueName: \"kubernetes.io/projected/3fd74474-a041-4d6f-84f9-90d8161e943e-kube-api-access-2lpzx\") pod \"dnsmasq-dns-6d99f6bc7f-5xkn9\" (UID: \"3fd74474-a041-4d6f-84f9-90d8161e943e\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-5xkn9" Nov 29 08:08:33 crc kubenswrapper[4795]: I1129 08:08:33.912449 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/3fd74474-a041-4d6f-84f9-90d8161e943e-dns-svc\") pod \"dnsmasq-dns-6d99f6bc7f-5xkn9\" (UID: \"3fd74474-a041-4d6f-84f9-90d8161e943e\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-5xkn9" Nov 29 08:08:33 crc kubenswrapper[4795]: I1129 08:08:33.912495 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fd74474-a041-4d6f-84f9-90d8161e943e-dns-swift-storage-0\") pod \"dnsmasq-dns-6d99f6bc7f-5xkn9\" (UID: \"3fd74474-a041-4d6f-84f9-90d8161e943e\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-5xkn9" Nov 29 08:08:33 crc kubenswrapper[4795]: I1129 08:08:33.912517 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fd74474-a041-4d6f-84f9-90d8161e943e-ovsdbserver-sb\") pod \"dnsmasq-dns-6d99f6bc7f-5xkn9\" (UID: \"3fd74474-a041-4d6f-84f9-90d8161e943e\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-5xkn9" Nov 29 08:08:33 crc kubenswrapper[4795]: I1129 08:08:33.912537 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fd74474-a041-4d6f-84f9-90d8161e943e-ovsdbserver-nb\") pod \"dnsmasq-dns-6d99f6bc7f-5xkn9\" (UID: \"3fd74474-a041-4d6f-84f9-90d8161e943e\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-5xkn9" Nov 29 08:08:33 crc kubenswrapper[4795]: I1129 08:08:33.913264 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fd74474-a041-4d6f-84f9-90d8161e943e-ovsdbserver-sb\") pod \"dnsmasq-dns-6d99f6bc7f-5xkn9\" (UID: \"3fd74474-a041-4d6f-84f9-90d8161e943e\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-5xkn9" Nov 29 08:08:33 crc kubenswrapper[4795]: I1129 08:08:33.913271 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fd74474-a041-4d6f-84f9-90d8161e943e-config\") 
pod \"dnsmasq-dns-6d99f6bc7f-5xkn9\" (UID: \"3fd74474-a041-4d6f-84f9-90d8161e943e\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-5xkn9" Nov 29 08:08:33 crc kubenswrapper[4795]: I1129 08:08:33.913309 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fd74474-a041-4d6f-84f9-90d8161e943e-ovsdbserver-nb\") pod \"dnsmasq-dns-6d99f6bc7f-5xkn9\" (UID: \"3fd74474-a041-4d6f-84f9-90d8161e943e\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-5xkn9" Nov 29 08:08:33 crc kubenswrapper[4795]: I1129 08:08:33.913705 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fd74474-a041-4d6f-84f9-90d8161e943e-dns-svc\") pod \"dnsmasq-dns-6d99f6bc7f-5xkn9\" (UID: \"3fd74474-a041-4d6f-84f9-90d8161e943e\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-5xkn9" Nov 29 08:08:33 crc kubenswrapper[4795]: I1129 08:08:33.913864 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fd74474-a041-4d6f-84f9-90d8161e943e-dns-swift-storage-0\") pod \"dnsmasq-dns-6d99f6bc7f-5xkn9\" (UID: \"3fd74474-a041-4d6f-84f9-90d8161e943e\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-5xkn9" Nov 29 08:08:33 crc kubenswrapper[4795]: I1129 08:08:33.938459 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lpzx\" (UniqueName: \"kubernetes.io/projected/3fd74474-a041-4d6f-84f9-90d8161e943e-kube-api-access-2lpzx\") pod \"dnsmasq-dns-6d99f6bc7f-5xkn9\" (UID: \"3fd74474-a041-4d6f-84f9-90d8161e943e\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-5xkn9" Nov 29 08:08:34 crc kubenswrapper[4795]: I1129 08:08:34.084701 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-5xkn9" Nov 29 08:08:34 crc kubenswrapper[4795]: I1129 08:08:34.600456 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-5xkn9"] Nov 29 08:08:34 crc kubenswrapper[4795]: I1129 08:08:34.728925 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 29 08:08:35 crc kubenswrapper[4795]: I1129 08:08:35.400182 4795 generic.go:334] "Generic (PLEG): container finished" podID="3fd74474-a041-4d6f-84f9-90d8161e943e" containerID="8c97c16d2137ecd9b8901d0d08a51bab89eccaf6fa16c62308b0908b67fbd1e7" exitCode=0 Nov 29 08:08:35 crc kubenswrapper[4795]: I1129 08:08:35.402174 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-5xkn9" event={"ID":"3fd74474-a041-4d6f-84f9-90d8161e943e","Type":"ContainerDied","Data":"8c97c16d2137ecd9b8901d0d08a51bab89eccaf6fa16c62308b0908b67fbd1e7"} Nov 29 08:08:35 crc kubenswrapper[4795]: I1129 08:08:35.402242 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-5xkn9" event={"ID":"3fd74474-a041-4d6f-84f9-90d8161e943e","Type":"ContainerStarted","Data":"cf74017088d42cf3a2e23610a538b6fc218a2eb8eba9ed42c8e25ec1e35b93e8"} Nov 29 08:08:36 crc kubenswrapper[4795]: I1129 08:08:36.058504 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 29 08:08:36 crc kubenswrapper[4795]: I1129 08:08:36.163065 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:08:36 crc kubenswrapper[4795]: I1129 08:08:36.413019 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-5xkn9" event={"ID":"3fd74474-a041-4d6f-84f9-90d8161e943e","Type":"ContainerStarted","Data":"8b29db2ef12ff07b0cc2c1a6b7f9ca4ec5319a14eede4bcfcb67ed46af6cd8f1"} Nov 29 08:08:36 crc kubenswrapper[4795]: I1129 08:08:36.413483 4795 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/nova-api-0" podUID="b4e7a65c-2285-410a-8759-75a41c63d9b1" containerName="nova-api-log" containerID="cri-o://a76156f16136e23a3359778dc457f70a0c6ccf09563dc20edd423b0cdc122bdf" gracePeriod=30 Nov 29 08:08:36 crc kubenswrapper[4795]: I1129 08:08:36.413622 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b4e7a65c-2285-410a-8759-75a41c63d9b1" containerName="nova-api-api" containerID="cri-o://405089a7f64e11e1f9c1842dfadf678d9cf93b53969589b26bd04e0568184bc2" gracePeriod=30 Nov 29 08:08:36 crc kubenswrapper[4795]: I1129 08:08:36.447686 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d99f6bc7f-5xkn9" podStartSLOduration=3.447664428 podStartE2EDuration="3.447664428s" podCreationTimestamp="2025-11-29 08:08:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:08:36.43399166 +0000 UTC m=+1762.409567450" watchObservedRunningTime="2025-11-29 08:08:36.447664428 +0000 UTC m=+1762.423240218" Nov 29 08:08:37 crc kubenswrapper[4795]: I1129 08:08:37.434909 4795 generic.go:334] "Generic (PLEG): container finished" podID="b4e7a65c-2285-410a-8759-75a41c63d9b1" containerID="a76156f16136e23a3359778dc457f70a0c6ccf09563dc20edd423b0cdc122bdf" exitCode=143 Nov 29 08:08:37 crc kubenswrapper[4795]: I1129 08:08:37.434996 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b4e7a65c-2285-410a-8759-75a41c63d9b1","Type":"ContainerDied","Data":"a76156f16136e23a3359778dc457f70a0c6ccf09563dc20edd423b0cdc122bdf"} Nov 29 08:08:37 crc kubenswrapper[4795]: I1129 08:08:37.435556 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d99f6bc7f-5xkn9" Nov 29 08:08:39 crc kubenswrapper[4795]: E1129 08:08:39.108761 4795 cadvisor_stats_provider.go:516] "Partial 
failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3085d3da_ff4f_4b37_be0a_3f1754a7fbf0.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfeffda9c_4aff_4412_858f_6452ac468a93.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc33ae8e1_1fb9_4053_9dcf_76b7a5bb370d.slice/crio-conmon-8b8df53bdfcffe0c5cb033cb2d4dff6b323c346e5043cf654875b0cc3bbe396a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3085d3da_ff4f_4b37_be0a_3f1754a7fbf0.slice/crio-9438cceefedbe627af13cc105670b054eaa22dae99e7507c69bd095fc311c8ef.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfeffda9c_4aff_4412_858f_6452ac468a93.slice/crio-ad5efdca882cba70284de18cd3b1f2a77462a3b5b5c5f724f8d0ee6a7d268682\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4552f7de_b8a7_4456_81ce_1faa3a69d96b.slice/crio-842dd230f2b2c112606b5dd4de601db0d5e31ce4587739f4436acaae674454f7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3085d3da_ff4f_4b37_be0a_3f1754a7fbf0.slice/crio-conmon-9438cceefedbe627af13cc105670b054eaa22dae99e7507c69bd095fc311c8ef.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4552f7de_b8a7_4456_81ce_1faa3a69d96b.slice/crio-conmon-842dd230f2b2c112606b5dd4de601db0d5e31ce4587739f4436acaae674454f7.scope\": RecentStats: unable to find data in memory cache]" Nov 29 08:08:39 crc kubenswrapper[4795]: E1129 08:08:39.109032 4795 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfeffda9c_4aff_4412_858f_6452ac468a93.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3085d3da_ff4f_4b37_be0a_3f1754a7fbf0.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4552f7de_b8a7_4456_81ce_1faa3a69d96b.slice/crio-842dd230f2b2c112606b5dd4de601db0d5e31ce4587739f4436acaae674454f7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc33ae8e1_1fb9_4053_9dcf_76b7a5bb370d.slice/crio-conmon-8b8df53bdfcffe0c5cb033cb2d4dff6b323c346e5043cf654875b0cc3bbe396a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfeffda9c_4aff_4412_858f_6452ac468a93.slice/crio-ad5efdca882cba70284de18cd3b1f2a77462a3b5b5c5f724f8d0ee6a7d268682\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3085d3da_ff4f_4b37_be0a_3f1754a7fbf0.slice/crio-9438cceefedbe627af13cc105670b054eaa22dae99e7507c69bd095fc311c8ef.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3085d3da_ff4f_4b37_be0a_3f1754a7fbf0.slice/crio-conmon-9438cceefedbe627af13cc105670b054eaa22dae99e7507c69bd095fc311c8ef.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4552f7de_b8a7_4456_81ce_1faa3a69d96b.slice/crio-conmon-842dd230f2b2c112606b5dd4de601db0d5e31ce4587739f4436acaae674454f7.scope\": RecentStats: unable to find data in memory cache]" Nov 29 08:08:39 crc kubenswrapper[4795]: E1129 08:08:39.110011 4795 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" 
err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3085d3da_ff4f_4b37_be0a_3f1754a7fbf0.slice/crio-12bbf9a73c120a50824091d91901d9ee07305141903bf390fd493e1ab71f4d1a\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3085d3da_ff4f_4b37_be0a_3f1754a7fbf0.slice/crio-9438cceefedbe627af13cc105670b054eaa22dae99e7507c69bd095fc311c8ef.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4e7a65c_2285_410a_8759_75a41c63d9b1.slice/crio-conmon-a76156f16136e23a3359778dc457f70a0c6ccf09563dc20edd423b0cdc122bdf.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4552f7de_b8a7_4456_81ce_1faa3a69d96b.slice/crio-conmon-842dd230f2b2c112606b5dd4de601db0d5e31ce4587739f4436acaae674454f7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4552f7de_b8a7_4456_81ce_1faa3a69d96b.slice/crio-842dd230f2b2c112606b5dd4de601db0d5e31ce4587739f4436acaae674454f7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3085d3da_ff4f_4b37_be0a_3f1754a7fbf0.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3085d3da_ff4f_4b37_be0a_3f1754a7fbf0.slice/crio-conmon-9438cceefedbe627af13cc105670b054eaa22dae99e7507c69bd095fc311c8ef.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4e7a65c_2285_410a_8759_75a41c63d9b1.slice/crio-a76156f16136e23a3359778dc457f70a0c6ccf09563dc20edd423b0cdc122bdf.scope\": RecentStats: unable to find data in memory cache]" Nov 29 08:08:39 crc kubenswrapper[4795]: I1129 08:08:39.468505 4795 generic.go:334] "Generic (PLEG): 
container finished" podID="4552f7de-b8a7-4456-81ce-1faa3a69d96b" containerID="842dd230f2b2c112606b5dd4de601db0d5e31ce4587739f4436acaae674454f7" exitCode=137 Nov 29 08:08:39 crc kubenswrapper[4795]: I1129 08:08:39.468886 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4552f7de-b8a7-4456-81ce-1faa3a69d96b","Type":"ContainerDied","Data":"842dd230f2b2c112606b5dd4de601db0d5e31ce4587739f4436acaae674454f7"} Nov 29 08:08:39 crc kubenswrapper[4795]: I1129 08:08:39.729650 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 29 08:08:39 crc kubenswrapper[4795]: I1129 08:08:39.769263 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.075257 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.168984 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4552f7de-b8a7-4456-81ce-1faa3a69d96b-config-data\") pod \"4552f7de-b8a7-4456-81ce-1faa3a69d96b\" (UID: \"4552f7de-b8a7-4456-81ce-1faa3a69d96b\") " Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.169194 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4552f7de-b8a7-4456-81ce-1faa3a69d96b-combined-ca-bundle\") pod \"4552f7de-b8a7-4456-81ce-1faa3a69d96b\" (UID: \"4552f7de-b8a7-4456-81ce-1faa3a69d96b\") " Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.169301 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4552f7de-b8a7-4456-81ce-1faa3a69d96b-scripts\") pod \"4552f7de-b8a7-4456-81ce-1faa3a69d96b\" (UID: 
\"4552f7de-b8a7-4456-81ce-1faa3a69d96b\") " Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.169360 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dq47\" (UniqueName: \"kubernetes.io/projected/4552f7de-b8a7-4456-81ce-1faa3a69d96b-kube-api-access-2dq47\") pod \"4552f7de-b8a7-4456-81ce-1faa3a69d96b\" (UID: \"4552f7de-b8a7-4456-81ce-1faa3a69d96b\") " Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.180770 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4552f7de-b8a7-4456-81ce-1faa3a69d96b-scripts" (OuterVolumeSpecName: "scripts") pod "4552f7de-b8a7-4456-81ce-1faa3a69d96b" (UID: "4552f7de-b8a7-4456-81ce-1faa3a69d96b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.183863 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4552f7de-b8a7-4456-81ce-1faa3a69d96b-kube-api-access-2dq47" (OuterVolumeSpecName: "kube-api-access-2dq47") pod "4552f7de-b8a7-4456-81ce-1faa3a69d96b" (UID: "4552f7de-b8a7-4456-81ce-1faa3a69d96b"). InnerVolumeSpecName "kube-api-access-2dq47". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.274672 4795 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4552f7de-b8a7-4456-81ce-1faa3a69d96b-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.275194 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dq47\" (UniqueName: \"kubernetes.io/projected/4552f7de-b8a7-4456-81ce-1faa3a69d96b-kube-api-access-2dq47\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.421177 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4552f7de-b8a7-4456-81ce-1faa3a69d96b-config-data" (OuterVolumeSpecName: "config-data") pod "4552f7de-b8a7-4456-81ce-1faa3a69d96b" (UID: "4552f7de-b8a7-4456-81ce-1faa3a69d96b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.426241 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.459222 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4552f7de-b8a7-4456-81ce-1faa3a69d96b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4552f7de-b8a7-4456-81ce-1faa3a69d96b" (UID: "4552f7de-b8a7-4456-81ce-1faa3a69d96b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.478547 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4e7a65c-2285-410a-8759-75a41c63d9b1-combined-ca-bundle\") pod \"b4e7a65c-2285-410a-8759-75a41c63d9b1\" (UID: \"b4e7a65c-2285-410a-8759-75a41c63d9b1\") " Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.478741 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bblxf\" (UniqueName: \"kubernetes.io/projected/b4e7a65c-2285-410a-8759-75a41c63d9b1-kube-api-access-bblxf\") pod \"b4e7a65c-2285-410a-8759-75a41c63d9b1\" (UID: \"b4e7a65c-2285-410a-8759-75a41c63d9b1\") " Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.478840 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4e7a65c-2285-410a-8759-75a41c63d9b1-config-data\") pod \"b4e7a65c-2285-410a-8759-75a41c63d9b1\" (UID: \"b4e7a65c-2285-410a-8759-75a41c63d9b1\") " Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.478881 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4e7a65c-2285-410a-8759-75a41c63d9b1-logs\") pod \"b4e7a65c-2285-410a-8759-75a41c63d9b1\" (UID: \"b4e7a65c-2285-410a-8759-75a41c63d9b1\") " Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.479541 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4552f7de-b8a7-4456-81ce-1faa3a69d96b-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.479555 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4552f7de-b8a7-4456-81ce-1faa3a69d96b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 
08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.480090 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4e7a65c-2285-410a-8759-75a41c63d9b1-logs" (OuterVolumeSpecName: "logs") pod "b4e7a65c-2285-410a-8759-75a41c63d9b1" (UID: "b4e7a65c-2285-410a-8759-75a41c63d9b1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.483781 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4e7a65c-2285-410a-8759-75a41c63d9b1-kube-api-access-bblxf" (OuterVolumeSpecName: "kube-api-access-bblxf") pod "b4e7a65c-2285-410a-8759-75a41c63d9b1" (UID: "b4e7a65c-2285-410a-8759-75a41c63d9b1"). InnerVolumeSpecName "kube-api-access-bblxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.494757 4795 generic.go:334] "Generic (PLEG): container finished" podID="b4e7a65c-2285-410a-8759-75a41c63d9b1" containerID="405089a7f64e11e1f9c1842dfadf678d9cf93b53969589b26bd04e0568184bc2" exitCode=0 Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.494825 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b4e7a65c-2285-410a-8759-75a41c63d9b1","Type":"ContainerDied","Data":"405089a7f64e11e1f9c1842dfadf678d9cf93b53969589b26bd04e0568184bc2"} Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.494859 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b4e7a65c-2285-410a-8759-75a41c63d9b1","Type":"ContainerDied","Data":"c557839e6f82748ad17f82ddf610b578aa4b955af831835612ce95a4ad3b9e31"} Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.494879 4795 scope.go:117] "RemoveContainer" containerID="405089a7f64e11e1f9c1842dfadf678d9cf93b53969589b26bd04e0568184bc2" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.495084 4795 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.506748 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.506793 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4552f7de-b8a7-4456-81ce-1faa3a69d96b","Type":"ContainerDied","Data":"f42c05addda10b6c3da07f39657dd92cf0b5e56aed9b4d12fbe0f5efad8710b2"} Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.532386 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4e7a65c-2285-410a-8759-75a41c63d9b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4e7a65c-2285-410a-8759-75a41c63d9b1" (UID: "b4e7a65c-2285-410a-8759-75a41c63d9b1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.534215 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.540048 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4e7a65c-2285-410a-8759-75a41c63d9b1-config-data" (OuterVolumeSpecName: "config-data") pod "b4e7a65c-2285-410a-8759-75a41c63d9b1" (UID: "b4e7a65c-2285-410a-8759-75a41c63d9b1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.554513 4795 scope.go:117] "RemoveContainer" containerID="a76156f16136e23a3359778dc457f70a0c6ccf09563dc20edd423b0cdc122bdf" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.590933 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4e7a65c-2285-410a-8759-75a41c63d9b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.591315 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bblxf\" (UniqueName: \"kubernetes.io/projected/b4e7a65c-2285-410a-8759-75a41c63d9b1-kube-api-access-bblxf\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.591424 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4e7a65c-2285-410a-8759-75a41c63d9b1-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.591510 4795 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4e7a65c-2285-410a-8759-75a41c63d9b1-logs\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.617957 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.638157 4795 scope.go:117] "RemoveContainer" containerID="405089a7f64e11e1f9c1842dfadf678d9cf93b53969589b26bd04e0568184bc2" Nov 29 08:08:40 crc kubenswrapper[4795]: E1129 08:08:40.652811 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"405089a7f64e11e1f9c1842dfadf678d9cf93b53969589b26bd04e0568184bc2\": container with ID starting with 405089a7f64e11e1f9c1842dfadf678d9cf93b53969589b26bd04e0568184bc2 not found: ID does not 
exist" containerID="405089a7f64e11e1f9c1842dfadf678d9cf93b53969589b26bd04e0568184bc2" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.652886 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"405089a7f64e11e1f9c1842dfadf678d9cf93b53969589b26bd04e0568184bc2"} err="failed to get container status \"405089a7f64e11e1f9c1842dfadf678d9cf93b53969589b26bd04e0568184bc2\": rpc error: code = NotFound desc = could not find container \"405089a7f64e11e1f9c1842dfadf678d9cf93b53969589b26bd04e0568184bc2\": container with ID starting with 405089a7f64e11e1f9c1842dfadf678d9cf93b53969589b26bd04e0568184bc2 not found: ID does not exist" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.652935 4795 scope.go:117] "RemoveContainer" containerID="a76156f16136e23a3359778dc457f70a0c6ccf09563dc20edd423b0cdc122bdf" Nov 29 08:08:40 crc kubenswrapper[4795]: E1129 08:08:40.656483 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a76156f16136e23a3359778dc457f70a0c6ccf09563dc20edd423b0cdc122bdf\": container with ID starting with a76156f16136e23a3359778dc457f70a0c6ccf09563dc20edd423b0cdc122bdf not found: ID does not exist" containerID="a76156f16136e23a3359778dc457f70a0c6ccf09563dc20edd423b0cdc122bdf" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.661449 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a76156f16136e23a3359778dc457f70a0c6ccf09563dc20edd423b0cdc122bdf"} err="failed to get container status \"a76156f16136e23a3359778dc457f70a0c6ccf09563dc20edd423b0cdc122bdf\": rpc error: code = NotFound desc = could not find container \"a76156f16136e23a3359778dc457f70a0c6ccf09563dc20edd423b0cdc122bdf\": container with ID starting with a76156f16136e23a3359778dc457f70a0c6ccf09563dc20edd423b0cdc122bdf not found: ID does not exist" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.661637 4795 scope.go:117] 
"RemoveContainer" containerID="842dd230f2b2c112606b5dd4de601db0d5e31ce4587739f4436acaae674454f7" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.665207 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.686753 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 29 08:08:40 crc kubenswrapper[4795]: E1129 08:08:40.696490 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4e7a65c-2285-410a-8759-75a41c63d9b1" containerName="nova-api-log" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.696544 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4e7a65c-2285-410a-8759-75a41c63d9b1" containerName="nova-api-log" Nov 29 08:08:40 crc kubenswrapper[4795]: E1129 08:08:40.696629 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4552f7de-b8a7-4456-81ce-1faa3a69d96b" containerName="aodh-evaluator" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.696643 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="4552f7de-b8a7-4456-81ce-1faa3a69d96b" containerName="aodh-evaluator" Nov 29 08:08:40 crc kubenswrapper[4795]: E1129 08:08:40.696678 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4552f7de-b8a7-4456-81ce-1faa3a69d96b" containerName="aodh-listener" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.697774 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="4552f7de-b8a7-4456-81ce-1faa3a69d96b" containerName="aodh-listener" Nov 29 08:08:40 crc kubenswrapper[4795]: E1129 08:08:40.697819 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4e7a65c-2285-410a-8759-75a41c63d9b1" containerName="nova-api-api" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.697830 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4e7a65c-2285-410a-8759-75a41c63d9b1" containerName="nova-api-api" Nov 29 08:08:40 crc kubenswrapper[4795]: E1129 
08:08:40.697856 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4552f7de-b8a7-4456-81ce-1faa3a69d96b" containerName="aodh-api" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.697865 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="4552f7de-b8a7-4456-81ce-1faa3a69d96b" containerName="aodh-api" Nov 29 08:08:40 crc kubenswrapper[4795]: E1129 08:08:40.697891 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4552f7de-b8a7-4456-81ce-1faa3a69d96b" containerName="aodh-notifier" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.697899 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="4552f7de-b8a7-4456-81ce-1faa3a69d96b" containerName="aodh-notifier" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.698552 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="4552f7de-b8a7-4456-81ce-1faa3a69d96b" containerName="aodh-api" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.698585 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="4552f7de-b8a7-4456-81ce-1faa3a69d96b" containerName="aodh-notifier" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.698622 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="4552f7de-b8a7-4456-81ce-1faa3a69d96b" containerName="aodh-evaluator" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.698642 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4e7a65c-2285-410a-8759-75a41c63d9b1" containerName="nova-api-log" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.698670 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="4552f7de-b8a7-4456-81ce-1faa3a69d96b" containerName="aodh-listener" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.698727 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4e7a65c-2285-410a-8759-75a41c63d9b1" containerName="nova-api-api" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.708227 4795 
scope.go:117] "RemoveContainer" containerID="777f468745de242a89bb07d681c713941fc528d980006e966b7732f9ccd65536" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.710394 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.713526 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.713678 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.713737 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-xg4wp" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.713818 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.715909 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.732642 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.763525 4795 scope.go:117] "RemoveContainer" containerID="7cc1b3ae22d6b60f126316e366b143bf002a6d56d9d2b84dc6cdce4ebe4afb21" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.803529 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72d0f4d2-d953-4cfa-a24e-221e2cf4e994-config-data\") pod \"aodh-0\" (UID: \"72d0f4d2-d953-4cfa-a24e-221e2cf4e994\") " pod="openstack/aodh-0" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.803635 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/72d0f4d2-d953-4cfa-a24e-221e2cf4e994-public-tls-certs\") pod \"aodh-0\" (UID: \"72d0f4d2-d953-4cfa-a24e-221e2cf4e994\") " pod="openstack/aodh-0" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.803666 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72d0f4d2-d953-4cfa-a24e-221e2cf4e994-combined-ca-bundle\") pod \"aodh-0\" (UID: \"72d0f4d2-d953-4cfa-a24e-221e2cf4e994\") " pod="openstack/aodh-0" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.803763 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72d0f4d2-d953-4cfa-a24e-221e2cf4e994-scripts\") pod \"aodh-0\" (UID: \"72d0f4d2-d953-4cfa-a24e-221e2cf4e994\") " pod="openstack/aodh-0" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.803786 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/72d0f4d2-d953-4cfa-a24e-221e2cf4e994-internal-tls-certs\") pod \"aodh-0\" (UID: \"72d0f4d2-d953-4cfa-a24e-221e2cf4e994\") " pod="openstack/aodh-0" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.803816 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtv6q\" (UniqueName: \"kubernetes.io/projected/72d0f4d2-d953-4cfa-a24e-221e2cf4e994-kube-api-access-qtv6q\") pod \"aodh-0\" (UID: \"72d0f4d2-d953-4cfa-a24e-221e2cf4e994\") " pod="openstack/aodh-0" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.821778 4795 scope.go:117] "RemoveContainer" containerID="5139803ab35b32904c3a52717017f5244689ecab0487b355a9edc071e8312479" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.838316 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-sf8vb"] Nov 29 08:08:40 crc 
kubenswrapper[4795]: I1129 08:08:40.840870 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sf8vb" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.844526 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.845153 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.903059 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-sf8vb"] Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.917092 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce709568-6b90-4abf-96e3-bc9369ea9296-scripts\") pod \"nova-cell1-cell-mapping-sf8vb\" (UID: \"ce709568-6b90-4abf-96e3-bc9369ea9296\") " pod="openstack/nova-cell1-cell-mapping-sf8vb" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.917283 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72d0f4d2-d953-4cfa-a24e-221e2cf4e994-config-data\") pod \"aodh-0\" (UID: \"72d0f4d2-d953-4cfa-a24e-221e2cf4e994\") " pod="openstack/aodh-0" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.917446 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72d0f4d2-d953-4cfa-a24e-221e2cf4e994-combined-ca-bundle\") pod \"aodh-0\" (UID: \"72d0f4d2-d953-4cfa-a24e-221e2cf4e994\") " pod="openstack/aodh-0" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.917530 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/72d0f4d2-d953-4cfa-a24e-221e2cf4e994-public-tls-certs\") pod \"aodh-0\" (UID: \"72d0f4d2-d953-4cfa-a24e-221e2cf4e994\") " pod="openstack/aodh-0" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.917757 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbhml\" (UniqueName: \"kubernetes.io/projected/ce709568-6b90-4abf-96e3-bc9369ea9296-kube-api-access-rbhml\") pod \"nova-cell1-cell-mapping-sf8vb\" (UID: \"ce709568-6b90-4abf-96e3-bc9369ea9296\") " pod="openstack/nova-cell1-cell-mapping-sf8vb" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.917941 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72d0f4d2-d953-4cfa-a24e-221e2cf4e994-scripts\") pod \"aodh-0\" (UID: \"72d0f4d2-d953-4cfa-a24e-221e2cf4e994\") " pod="openstack/aodh-0" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.918019 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/72d0f4d2-d953-4cfa-a24e-221e2cf4e994-internal-tls-certs\") pod \"aodh-0\" (UID: \"72d0f4d2-d953-4cfa-a24e-221e2cf4e994\") " pod="openstack/aodh-0" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.918124 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtv6q\" (UniqueName: \"kubernetes.io/projected/72d0f4d2-d953-4cfa-a24e-221e2cf4e994-kube-api-access-qtv6q\") pod \"aodh-0\" (UID: \"72d0f4d2-d953-4cfa-a24e-221e2cf4e994\") " pod="openstack/aodh-0" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.918406 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce709568-6b90-4abf-96e3-bc9369ea9296-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-sf8vb\" (UID: \"ce709568-6b90-4abf-96e3-bc9369ea9296\") " 
pod="openstack/nova-cell1-cell-mapping-sf8vb" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.918528 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce709568-6b90-4abf-96e3-bc9369ea9296-config-data\") pod \"nova-cell1-cell-mapping-sf8vb\" (UID: \"ce709568-6b90-4abf-96e3-bc9369ea9296\") " pod="openstack/nova-cell1-cell-mapping-sf8vb" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.917318 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.929927 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72d0f4d2-d953-4cfa-a24e-221e2cf4e994-config-data\") pod \"aodh-0\" (UID: \"72d0f4d2-d953-4cfa-a24e-221e2cf4e994\") " pod="openstack/aodh-0" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.933108 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/72d0f4d2-d953-4cfa-a24e-221e2cf4e994-public-tls-certs\") pod \"aodh-0\" (UID: \"72d0f4d2-d953-4cfa-a24e-221e2cf4e994\") " pod="openstack/aodh-0" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.934401 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/72d0f4d2-d953-4cfa-a24e-221e2cf4e994-internal-tls-certs\") pod \"aodh-0\" (UID: \"72d0f4d2-d953-4cfa-a24e-221e2cf4e994\") " pod="openstack/aodh-0" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.935336 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72d0f4d2-d953-4cfa-a24e-221e2cf4e994-scripts\") pod \"aodh-0\" (UID: \"72d0f4d2-d953-4cfa-a24e-221e2cf4e994\") " pod="openstack/aodh-0" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.949143 4795 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtv6q\" (UniqueName: \"kubernetes.io/projected/72d0f4d2-d953-4cfa-a24e-221e2cf4e994-kube-api-access-qtv6q\") pod \"aodh-0\" (UID: \"72d0f4d2-d953-4cfa-a24e-221e2cf4e994\") " pod="openstack/aodh-0" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.951703 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.958660 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72d0f4d2-d953-4cfa-a24e-221e2cf4e994-combined-ca-bundle\") pod \"aodh-0\" (UID: \"72d0f4d2-d953-4cfa-a24e-221e2cf4e994\") " pod="openstack/aodh-0" Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.994654 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 29 08:08:40 crc kubenswrapper[4795]: I1129 08:08:40.996825 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 29 08:08:41 crc kubenswrapper[4795]: I1129 08:08:41.006504 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 29 08:08:41 crc kubenswrapper[4795]: I1129 08:08:41.029759 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 29 08:08:41 crc kubenswrapper[4795]: I1129 08:08:41.030229 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 29 08:08:41 crc kubenswrapper[4795]: I1129 08:08:41.030583 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce709568-6b90-4abf-96e3-bc9369ea9296-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-sf8vb\" (UID: \"ce709568-6b90-4abf-96e3-bc9369ea9296\") " pod="openstack/nova-cell1-cell-mapping-sf8vb" Nov 29 08:08:41 crc kubenswrapper[4795]: I1129 08:08:41.030767 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e4bde91-81cb-4873-a759-31f23757ad0d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6e4bde91-81cb-4873-a759-31f23757ad0d\") " pod="openstack/nova-api-0" Nov 29 08:08:41 crc kubenswrapper[4795]: I1129 08:08:41.030862 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e4bde91-81cb-4873-a759-31f23757ad0d-logs\") pod \"nova-api-0\" (UID: \"6e4bde91-81cb-4873-a759-31f23757ad0d\") " pod="openstack/nova-api-0" Nov 29 08:08:41 crc kubenswrapper[4795]: I1129 08:08:41.030920 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce709568-6b90-4abf-96e3-bc9369ea9296-config-data\") pod \"nova-cell1-cell-mapping-sf8vb\" (UID: \"ce709568-6b90-4abf-96e3-bc9369ea9296\") " 
pod="openstack/nova-cell1-cell-mapping-sf8vb" Nov 29 08:08:41 crc kubenswrapper[4795]: I1129 08:08:41.030989 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcff9\" (UniqueName: \"kubernetes.io/projected/6e4bde91-81cb-4873-a759-31f23757ad0d-kube-api-access-pcff9\") pod \"nova-api-0\" (UID: \"6e4bde91-81cb-4873-a759-31f23757ad0d\") " pod="openstack/nova-api-0" Nov 29 08:08:41 crc kubenswrapper[4795]: I1129 08:08:41.030999 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 29 08:08:41 crc kubenswrapper[4795]: I1129 08:08:41.031137 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce709568-6b90-4abf-96e3-bc9369ea9296-scripts\") pod \"nova-cell1-cell-mapping-sf8vb\" (UID: \"ce709568-6b90-4abf-96e3-bc9369ea9296\") " pod="openstack/nova-cell1-cell-mapping-sf8vb" Nov 29 08:08:41 crc kubenswrapper[4795]: I1129 08:08:41.031311 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e4bde91-81cb-4873-a759-31f23757ad0d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6e4bde91-81cb-4873-a759-31f23757ad0d\") " pod="openstack/nova-api-0" Nov 29 08:08:41 crc kubenswrapper[4795]: I1129 08:08:41.031371 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e4bde91-81cb-4873-a759-31f23757ad0d-config-data\") pod \"nova-api-0\" (UID: \"6e4bde91-81cb-4873-a759-31f23757ad0d\") " pod="openstack/nova-api-0" Nov 29 08:08:41 crc kubenswrapper[4795]: I1129 08:08:41.031453 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbhml\" (UniqueName: \"kubernetes.io/projected/ce709568-6b90-4abf-96e3-bc9369ea9296-kube-api-access-rbhml\") pod 
\"nova-cell1-cell-mapping-sf8vb\" (UID: \"ce709568-6b90-4abf-96e3-bc9369ea9296\") " pod="openstack/nova-cell1-cell-mapping-sf8vb" Nov 29 08:08:41 crc kubenswrapper[4795]: I1129 08:08:41.031572 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e4bde91-81cb-4873-a759-31f23757ad0d-public-tls-certs\") pod \"nova-api-0\" (UID: \"6e4bde91-81cb-4873-a759-31f23757ad0d\") " pod="openstack/nova-api-0" Nov 29 08:08:41 crc kubenswrapper[4795]: I1129 08:08:41.049215 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce709568-6b90-4abf-96e3-bc9369ea9296-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-sf8vb\" (UID: \"ce709568-6b90-4abf-96e3-bc9369ea9296\") " pod="openstack/nova-cell1-cell-mapping-sf8vb" Nov 29 08:08:41 crc kubenswrapper[4795]: I1129 08:08:41.049380 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce709568-6b90-4abf-96e3-bc9369ea9296-scripts\") pod \"nova-cell1-cell-mapping-sf8vb\" (UID: \"ce709568-6b90-4abf-96e3-bc9369ea9296\") " pod="openstack/nova-cell1-cell-mapping-sf8vb" Nov 29 08:08:41 crc kubenswrapper[4795]: I1129 08:08:41.066530 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 29 08:08:41 crc kubenswrapper[4795]: I1129 08:08:41.084221 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbhml\" (UniqueName: \"kubernetes.io/projected/ce709568-6b90-4abf-96e3-bc9369ea9296-kube-api-access-rbhml\") pod \"nova-cell1-cell-mapping-sf8vb\" (UID: \"ce709568-6b90-4abf-96e3-bc9369ea9296\") " pod="openstack/nova-cell1-cell-mapping-sf8vb" Nov 29 08:08:41 crc kubenswrapper[4795]: I1129 08:08:41.121551 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce709568-6b90-4abf-96e3-bc9369ea9296-config-data\") pod \"nova-cell1-cell-mapping-sf8vb\" (UID: \"ce709568-6b90-4abf-96e3-bc9369ea9296\") " pod="openstack/nova-cell1-cell-mapping-sf8vb" Nov 29 08:08:41 crc kubenswrapper[4795]: I1129 08:08:41.134426 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e4bde91-81cb-4873-a759-31f23757ad0d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6e4bde91-81cb-4873-a759-31f23757ad0d\") " pod="openstack/nova-api-0" Nov 29 08:08:41 crc kubenswrapper[4795]: I1129 08:08:41.134818 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e4bde91-81cb-4873-a759-31f23757ad0d-logs\") pod \"nova-api-0\" (UID: \"6e4bde91-81cb-4873-a759-31f23757ad0d\") " pod="openstack/nova-api-0" Nov 29 08:08:41 crc kubenswrapper[4795]: I1129 08:08:41.134859 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcff9\" (UniqueName: \"kubernetes.io/projected/6e4bde91-81cb-4873-a759-31f23757ad0d-kube-api-access-pcff9\") pod \"nova-api-0\" (UID: \"6e4bde91-81cb-4873-a759-31f23757ad0d\") " pod="openstack/nova-api-0" Nov 29 08:08:41 crc kubenswrapper[4795]: I1129 08:08:41.134992 4795 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e4bde91-81cb-4873-a759-31f23757ad0d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6e4bde91-81cb-4873-a759-31f23757ad0d\") " pod="openstack/nova-api-0" Nov 29 08:08:41 crc kubenswrapper[4795]: I1129 08:08:41.135018 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e4bde91-81cb-4873-a759-31f23757ad0d-config-data\") pod \"nova-api-0\" (UID: \"6e4bde91-81cb-4873-a759-31f23757ad0d\") " pod="openstack/nova-api-0" Nov 29 08:08:41 crc kubenswrapper[4795]: I1129 08:08:41.135089 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e4bde91-81cb-4873-a759-31f23757ad0d-public-tls-certs\") pod \"nova-api-0\" (UID: \"6e4bde91-81cb-4873-a759-31f23757ad0d\") " pod="openstack/nova-api-0" Nov 29 08:08:41 crc kubenswrapper[4795]: I1129 08:08:41.137878 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e4bde91-81cb-4873-a759-31f23757ad0d-logs\") pod \"nova-api-0\" (UID: \"6e4bde91-81cb-4873-a759-31f23757ad0d\") " pod="openstack/nova-api-0" Nov 29 08:08:41 crc kubenswrapper[4795]: I1129 08:08:41.138985 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e4bde91-81cb-4873-a759-31f23757ad0d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6e4bde91-81cb-4873-a759-31f23757ad0d\") " pod="openstack/nova-api-0" Nov 29 08:08:41 crc kubenswrapper[4795]: I1129 08:08:41.140057 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e4bde91-81cb-4873-a759-31f23757ad0d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6e4bde91-81cb-4873-a759-31f23757ad0d\") " pod="openstack/nova-api-0" Nov 29 
08:08:41 crc kubenswrapper[4795]: I1129 08:08:41.140519 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e4bde91-81cb-4873-a759-31f23757ad0d-config-data\") pod \"nova-api-0\" (UID: \"6e4bde91-81cb-4873-a759-31f23757ad0d\") " pod="openstack/nova-api-0" Nov 29 08:08:41 crc kubenswrapper[4795]: I1129 08:08:41.142840 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e4bde91-81cb-4873-a759-31f23757ad0d-public-tls-certs\") pod \"nova-api-0\" (UID: \"6e4bde91-81cb-4873-a759-31f23757ad0d\") " pod="openstack/nova-api-0" Nov 29 08:08:41 crc kubenswrapper[4795]: I1129 08:08:41.158189 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcff9\" (UniqueName: \"kubernetes.io/projected/6e4bde91-81cb-4873-a759-31f23757ad0d-kube-api-access-pcff9\") pod \"nova-api-0\" (UID: \"6e4bde91-81cb-4873-a759-31f23757ad0d\") " pod="openstack/nova-api-0" Nov 29 08:08:41 crc kubenswrapper[4795]: I1129 08:08:41.191270 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sf8vb" Nov 29 08:08:41 crc kubenswrapper[4795]: I1129 08:08:41.373298 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 29 08:08:41 crc kubenswrapper[4795]: I1129 08:08:41.715265 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 29 08:08:41 crc kubenswrapper[4795]: I1129 08:08:41.849735 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-sf8vb"] Nov 29 08:08:41 crc kubenswrapper[4795]: W1129 08:08:41.851515 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce709568_6b90_4abf_96e3_bc9369ea9296.slice/crio-e754f3fdcbd1fd213be890dd9db207f0abdf8c4e9c8189b7a1c2e8ff8357632a WatchSource:0}: Error finding container e754f3fdcbd1fd213be890dd9db207f0abdf8c4e9c8189b7a1c2e8ff8357632a: Status 404 returned error can't find the container with id e754f3fdcbd1fd213be890dd9db207f0abdf8c4e9c8189b7a1c2e8ff8357632a Nov 29 08:08:41 crc kubenswrapper[4795]: I1129 08:08:41.941716 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:08:41 crc kubenswrapper[4795]: I1129 08:08:41.941780 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:08:41 crc kubenswrapper[4795]: I1129 08:08:41.941823 4795 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" Nov 29 08:08:41 crc kubenswrapper[4795]: I1129 08:08:41.942626 4795 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cb788e622da5bdf804dcfa0c959dfb0158a848bc427ed90337dfd530ea78ea5d"} pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 08:08:41 crc kubenswrapper[4795]: I1129 08:08:41.942685 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" containerID="cri-o://cb788e622da5bdf804dcfa0c959dfb0158a848bc427ed90337dfd530ea78ea5d" gracePeriod=600 Nov 29 08:08:42 crc kubenswrapper[4795]: I1129 08:08:42.010028 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 29 08:08:42 crc kubenswrapper[4795]: E1129 08:08:42.079077 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:08:42 crc kubenswrapper[4795]: I1129 08:08:42.297988 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4552f7de-b8a7-4456-81ce-1faa3a69d96b" path="/var/lib/kubelet/pods/4552f7de-b8a7-4456-81ce-1faa3a69d96b/volumes" Nov 29 08:08:42 crc kubenswrapper[4795]: I1129 08:08:42.300236 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4e7a65c-2285-410a-8759-75a41c63d9b1" path="/var/lib/kubelet/pods/b4e7a65c-2285-410a-8759-75a41c63d9b1/volumes" Nov 29 08:08:42 crc kubenswrapper[4795]: I1129 08:08:42.535336 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-cell-mapping-sf8vb" event={"ID":"ce709568-6b90-4abf-96e3-bc9369ea9296","Type":"ContainerStarted","Data":"1cd9464ccea1d80c37c072f524a63a068d4673551e6e96f3a6303f30daad348a"} Nov 29 08:08:42 crc kubenswrapper[4795]: I1129 08:08:42.535396 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sf8vb" event={"ID":"ce709568-6b90-4abf-96e3-bc9369ea9296","Type":"ContainerStarted","Data":"e754f3fdcbd1fd213be890dd9db207f0abdf8c4e9c8189b7a1c2e8ff8357632a"} Nov 29 08:08:42 crc kubenswrapper[4795]: I1129 08:08:42.537726 4795 generic.go:334] "Generic (PLEG): container finished" podID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerID="cb788e622da5bdf804dcfa0c959dfb0158a848bc427ed90337dfd530ea78ea5d" exitCode=0 Nov 29 08:08:42 crc kubenswrapper[4795]: I1129 08:08:42.537779 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerDied","Data":"cb788e622da5bdf804dcfa0c959dfb0158a848bc427ed90337dfd530ea78ea5d"} Nov 29 08:08:42 crc kubenswrapper[4795]: I1129 08:08:42.537853 4795 scope.go:117] "RemoveContainer" containerID="92bdb17e0171829dd368ff834d02a1b920553ce70337271f54fb17757386f741" Nov 29 08:08:42 crc kubenswrapper[4795]: I1129 08:08:42.538252 4795 scope.go:117] "RemoveContainer" containerID="cb788e622da5bdf804dcfa0c959dfb0158a848bc427ed90337dfd530ea78ea5d" Nov 29 08:08:42 crc kubenswrapper[4795]: E1129 08:08:42.538610 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:08:42 crc kubenswrapper[4795]: 
I1129 08:08:42.541124 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"72d0f4d2-d953-4cfa-a24e-221e2cf4e994","Type":"ContainerStarted","Data":"3223a642dc38eb3ad653079297261bf76061fcc0bbd8360b54f0ebe5658dec00"} Nov 29 08:08:42 crc kubenswrapper[4795]: I1129 08:08:42.547823 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6e4bde91-81cb-4873-a759-31f23757ad0d","Type":"ContainerStarted","Data":"fc131ab9522b96a8110256851cf82244be5cf1f4727aac56d0140635286f2311"} Nov 29 08:08:42 crc kubenswrapper[4795]: I1129 08:08:42.547868 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6e4bde91-81cb-4873-a759-31f23757ad0d","Type":"ContainerStarted","Data":"759154865bc923070b9002eb87f9b806be752aa56e0f09b58d2f9a6af6b1f972"} Nov 29 08:08:42 crc kubenswrapper[4795]: I1129 08:08:42.568609 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-sf8vb" podStartSLOduration=2.568568746 podStartE2EDuration="2.568568746s" podCreationTimestamp="2025-11-29 08:08:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:08:42.551158113 +0000 UTC m=+1768.526733903" watchObservedRunningTime="2025-11-29 08:08:42.568568746 +0000 UTC m=+1768.544144536" Nov 29 08:08:43 crc kubenswrapper[4795]: I1129 08:08:43.569170 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6e4bde91-81cb-4873-a759-31f23757ad0d","Type":"ContainerStarted","Data":"e059c975ab302e3c827b6f673a648defcd8842386808630d67f8a6ba3a74afc9"} Nov 29 08:08:44 crc kubenswrapper[4795]: I1129 08:08:44.086836 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d99f6bc7f-5xkn9" Nov 29 08:08:44 crc kubenswrapper[4795]: I1129 08:08:44.118764 4795 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.118738736 podStartE2EDuration="4.118738736s" podCreationTimestamp="2025-11-29 08:08:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:08:43.591217717 +0000 UTC m=+1769.566793527" watchObservedRunningTime="2025-11-29 08:08:44.118738736 +0000 UTC m=+1770.094314526" Nov 29 08:08:44 crc kubenswrapper[4795]: I1129 08:08:44.145234 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-5t79v"] Nov 29 08:08:44 crc kubenswrapper[4795]: I1129 08:08:44.146911 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7877d89589-5t79v" podUID="8678c9b5-b7c3-4448-9051-17a60a1b92d6" containerName="dnsmasq-dns" containerID="cri-o://cd8ff6cafe85973f6f3fc5a92709a648dd2baf5503097b3eb1d1a06dded7a3fe" gracePeriod=10 Nov 29 08:08:44 crc kubenswrapper[4795]: I1129 08:08:44.583221 4795 generic.go:334] "Generic (PLEG): container finished" podID="8678c9b5-b7c3-4448-9051-17a60a1b92d6" containerID="cd8ff6cafe85973f6f3fc5a92709a648dd2baf5503097b3eb1d1a06dded7a3fe" exitCode=0 Nov 29 08:08:44 crc kubenswrapper[4795]: I1129 08:08:44.583344 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-5t79v" event={"ID":"8678c9b5-b7c3-4448-9051-17a60a1b92d6","Type":"ContainerDied","Data":"cd8ff6cafe85973f6f3fc5a92709a648dd2baf5503097b3eb1d1a06dded7a3fe"} Nov 29 08:08:44 crc kubenswrapper[4795]: I1129 08:08:44.820691 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-5t79v" Nov 29 08:08:44 crc kubenswrapper[4795]: I1129 08:08:44.958385 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2htx\" (UniqueName: \"kubernetes.io/projected/8678c9b5-b7c3-4448-9051-17a60a1b92d6-kube-api-access-x2htx\") pod \"8678c9b5-b7c3-4448-9051-17a60a1b92d6\" (UID: \"8678c9b5-b7c3-4448-9051-17a60a1b92d6\") " Nov 29 08:08:44 crc kubenswrapper[4795]: I1129 08:08:44.958868 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8678c9b5-b7c3-4448-9051-17a60a1b92d6-ovsdbserver-sb\") pod \"8678c9b5-b7c3-4448-9051-17a60a1b92d6\" (UID: \"8678c9b5-b7c3-4448-9051-17a60a1b92d6\") " Nov 29 08:08:44 crc kubenswrapper[4795]: I1129 08:08:44.958968 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8678c9b5-b7c3-4448-9051-17a60a1b92d6-ovsdbserver-nb\") pod \"8678c9b5-b7c3-4448-9051-17a60a1b92d6\" (UID: \"8678c9b5-b7c3-4448-9051-17a60a1b92d6\") " Nov 29 08:08:44 crc kubenswrapper[4795]: I1129 08:08:44.959121 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8678c9b5-b7c3-4448-9051-17a60a1b92d6-dns-svc\") pod \"8678c9b5-b7c3-4448-9051-17a60a1b92d6\" (UID: \"8678c9b5-b7c3-4448-9051-17a60a1b92d6\") " Nov 29 08:08:44 crc kubenswrapper[4795]: I1129 08:08:44.959250 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8678c9b5-b7c3-4448-9051-17a60a1b92d6-dns-swift-storage-0\") pod \"8678c9b5-b7c3-4448-9051-17a60a1b92d6\" (UID: \"8678c9b5-b7c3-4448-9051-17a60a1b92d6\") " Nov 29 08:08:44 crc kubenswrapper[4795]: I1129 08:08:44.959342 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/8678c9b5-b7c3-4448-9051-17a60a1b92d6-config\") pod \"8678c9b5-b7c3-4448-9051-17a60a1b92d6\" (UID: \"8678c9b5-b7c3-4448-9051-17a60a1b92d6\") " Nov 29 08:08:44 crc kubenswrapper[4795]: I1129 08:08:44.969053 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8678c9b5-b7c3-4448-9051-17a60a1b92d6-kube-api-access-x2htx" (OuterVolumeSpecName: "kube-api-access-x2htx") pod "8678c9b5-b7c3-4448-9051-17a60a1b92d6" (UID: "8678c9b5-b7c3-4448-9051-17a60a1b92d6"). InnerVolumeSpecName "kube-api-access-x2htx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:08:45 crc kubenswrapper[4795]: I1129 08:08:45.027312 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8678c9b5-b7c3-4448-9051-17a60a1b92d6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8678c9b5-b7c3-4448-9051-17a60a1b92d6" (UID: "8678c9b5-b7c3-4448-9051-17a60a1b92d6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:08:45 crc kubenswrapper[4795]: I1129 08:08:45.028446 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8678c9b5-b7c3-4448-9051-17a60a1b92d6-config" (OuterVolumeSpecName: "config") pod "8678c9b5-b7c3-4448-9051-17a60a1b92d6" (UID: "8678c9b5-b7c3-4448-9051-17a60a1b92d6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:08:45 crc kubenswrapper[4795]: I1129 08:08:45.038884 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8678c9b5-b7c3-4448-9051-17a60a1b92d6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8678c9b5-b7c3-4448-9051-17a60a1b92d6" (UID: "8678c9b5-b7c3-4448-9051-17a60a1b92d6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:08:45 crc kubenswrapper[4795]: I1129 08:08:45.046914 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8678c9b5-b7c3-4448-9051-17a60a1b92d6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8678c9b5-b7c3-4448-9051-17a60a1b92d6" (UID: "8678c9b5-b7c3-4448-9051-17a60a1b92d6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:08:45 crc kubenswrapper[4795]: I1129 08:08:45.053151 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8678c9b5-b7c3-4448-9051-17a60a1b92d6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8678c9b5-b7c3-4448-9051-17a60a1b92d6" (UID: "8678c9b5-b7c3-4448-9051-17a60a1b92d6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:08:45 crc kubenswrapper[4795]: I1129 08:08:45.062717 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2htx\" (UniqueName: \"kubernetes.io/projected/8678c9b5-b7c3-4448-9051-17a60a1b92d6-kube-api-access-x2htx\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:45 crc kubenswrapper[4795]: I1129 08:08:45.062767 4795 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8678c9b5-b7c3-4448-9051-17a60a1b92d6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:45 crc kubenswrapper[4795]: I1129 08:08:45.062778 4795 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8678c9b5-b7c3-4448-9051-17a60a1b92d6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:45 crc kubenswrapper[4795]: I1129 08:08:45.062788 4795 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8678c9b5-b7c3-4448-9051-17a60a1b92d6-dns-svc\") on node \"crc\" 
DevicePath \"\"" Nov 29 08:08:45 crc kubenswrapper[4795]: I1129 08:08:45.062798 4795 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8678c9b5-b7c3-4448-9051-17a60a1b92d6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:45 crc kubenswrapper[4795]: I1129 08:08:45.062808 4795 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8678c9b5-b7c3-4448-9051-17a60a1b92d6-config\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:45 crc kubenswrapper[4795]: I1129 08:08:45.595315 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-5t79v" event={"ID":"8678c9b5-b7c3-4448-9051-17a60a1b92d6","Type":"ContainerDied","Data":"ee064aaf1e2d359354ae7dbd1d26c05bb8bcbf09ad0f125cee313b73dfe5ef20"} Nov 29 08:08:45 crc kubenswrapper[4795]: I1129 08:08:45.595361 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-5t79v" Nov 29 08:08:45 crc kubenswrapper[4795]: I1129 08:08:45.595377 4795 scope.go:117] "RemoveContainer" containerID="cd8ff6cafe85973f6f3fc5a92709a648dd2baf5503097b3eb1d1a06dded7a3fe" Nov 29 08:08:45 crc kubenswrapper[4795]: I1129 08:08:45.631870 4795 scope.go:117] "RemoveContainer" containerID="25ddc0e31154f600166d574b3cdb585251e9beee99622657e991331a6b57b8c6" Nov 29 08:08:45 crc kubenswrapper[4795]: I1129 08:08:45.639288 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-5t79v"] Nov 29 08:08:45 crc kubenswrapper[4795]: I1129 08:08:45.650936 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-5t79v"] Nov 29 08:08:46 crc kubenswrapper[4795]: I1129 08:08:46.291494 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8678c9b5-b7c3-4448-9051-17a60a1b92d6" path="/var/lib/kubelet/pods/8678c9b5-b7c3-4448-9051-17a60a1b92d6/volumes" Nov 29 
08:08:47 crc kubenswrapper[4795]: I1129 08:08:47.625311 4795 generic.go:334] "Generic (PLEG): container finished" podID="ce709568-6b90-4abf-96e3-bc9369ea9296" containerID="1cd9464ccea1d80c37c072f524a63a068d4673551e6e96f3a6303f30daad348a" exitCode=0 Nov 29 08:08:47 crc kubenswrapper[4795]: I1129 08:08:47.625399 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sf8vb" event={"ID":"ce709568-6b90-4abf-96e3-bc9369ea9296","Type":"ContainerDied","Data":"1cd9464ccea1d80c37c072f524a63a068d4673551e6e96f3a6303f30daad348a"} Nov 29 08:08:49 crc kubenswrapper[4795]: I1129 08:08:49.073731 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sf8vb" Nov 29 08:08:49 crc kubenswrapper[4795]: I1129 08:08:49.195926 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce709568-6b90-4abf-96e3-bc9369ea9296-combined-ca-bundle\") pod \"ce709568-6b90-4abf-96e3-bc9369ea9296\" (UID: \"ce709568-6b90-4abf-96e3-bc9369ea9296\") " Nov 29 08:08:49 crc kubenswrapper[4795]: I1129 08:08:49.196192 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbhml\" (UniqueName: \"kubernetes.io/projected/ce709568-6b90-4abf-96e3-bc9369ea9296-kube-api-access-rbhml\") pod \"ce709568-6b90-4abf-96e3-bc9369ea9296\" (UID: \"ce709568-6b90-4abf-96e3-bc9369ea9296\") " Nov 29 08:08:49 crc kubenswrapper[4795]: I1129 08:08:49.196332 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce709568-6b90-4abf-96e3-bc9369ea9296-config-data\") pod \"ce709568-6b90-4abf-96e3-bc9369ea9296\" (UID: \"ce709568-6b90-4abf-96e3-bc9369ea9296\") " Nov 29 08:08:49 crc kubenswrapper[4795]: I1129 08:08:49.196385 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ce709568-6b90-4abf-96e3-bc9369ea9296-scripts\") pod \"ce709568-6b90-4abf-96e3-bc9369ea9296\" (UID: \"ce709568-6b90-4abf-96e3-bc9369ea9296\") " Nov 29 08:08:49 crc kubenswrapper[4795]: I1129 08:08:49.201373 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce709568-6b90-4abf-96e3-bc9369ea9296-kube-api-access-rbhml" (OuterVolumeSpecName: "kube-api-access-rbhml") pod "ce709568-6b90-4abf-96e3-bc9369ea9296" (UID: "ce709568-6b90-4abf-96e3-bc9369ea9296"). InnerVolumeSpecName "kube-api-access-rbhml". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:08:49 crc kubenswrapper[4795]: I1129 08:08:49.205804 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce709568-6b90-4abf-96e3-bc9369ea9296-scripts" (OuterVolumeSpecName: "scripts") pod "ce709568-6b90-4abf-96e3-bc9369ea9296" (UID: "ce709568-6b90-4abf-96e3-bc9369ea9296"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:49 crc kubenswrapper[4795]: I1129 08:08:49.233017 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce709568-6b90-4abf-96e3-bc9369ea9296-config-data" (OuterVolumeSpecName: "config-data") pod "ce709568-6b90-4abf-96e3-bc9369ea9296" (UID: "ce709568-6b90-4abf-96e3-bc9369ea9296"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:49 crc kubenswrapper[4795]: I1129 08:08:49.234184 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce709568-6b90-4abf-96e3-bc9369ea9296-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce709568-6b90-4abf-96e3-bc9369ea9296" (UID: "ce709568-6b90-4abf-96e3-bc9369ea9296"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:49 crc kubenswrapper[4795]: I1129 08:08:49.299452 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce709568-6b90-4abf-96e3-bc9369ea9296-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:49 crc kubenswrapper[4795]: I1129 08:08:49.299797 4795 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce709568-6b90-4abf-96e3-bc9369ea9296-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:49 crc kubenswrapper[4795]: I1129 08:08:49.299808 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce709568-6b90-4abf-96e3-bc9369ea9296-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:49 crc kubenswrapper[4795]: I1129 08:08:49.299819 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbhml\" (UniqueName: \"kubernetes.io/projected/ce709568-6b90-4abf-96e3-bc9369ea9296-kube-api-access-rbhml\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:49 crc kubenswrapper[4795]: I1129 08:08:49.663815 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sf8vb" event={"ID":"ce709568-6b90-4abf-96e3-bc9369ea9296","Type":"ContainerDied","Data":"e754f3fdcbd1fd213be890dd9db207f0abdf8c4e9c8189b7a1c2e8ff8357632a"} Nov 29 08:08:49 crc kubenswrapper[4795]: I1129 08:08:49.663867 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e754f3fdcbd1fd213be890dd9db207f0abdf8c4e9c8189b7a1c2e8ff8357632a" Nov 29 08:08:49 crc kubenswrapper[4795]: I1129 08:08:49.663964 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sf8vb" Nov 29 08:08:49 crc kubenswrapper[4795]: I1129 08:08:49.892877 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 29 08:08:49 crc kubenswrapper[4795]: I1129 08:08:49.893571 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6e4bde91-81cb-4873-a759-31f23757ad0d" containerName="nova-api-log" containerID="cri-o://fc131ab9522b96a8110256851cf82244be5cf1f4727aac56d0140635286f2311" gracePeriod=30 Nov 29 08:08:49 crc kubenswrapper[4795]: I1129 08:08:49.893983 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6e4bde91-81cb-4873-a759-31f23757ad0d" containerName="nova-api-api" containerID="cri-o://e059c975ab302e3c827b6f673a648defcd8842386808630d67f8a6ba3a74afc9" gracePeriod=30 Nov 29 08:08:49 crc kubenswrapper[4795]: I1129 08:08:49.931262 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 08:08:49 crc kubenswrapper[4795]: I1129 08:08:49.931533 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d" containerName="nova-scheduler-scheduler" containerID="cri-o://8b8df53bdfcffe0c5cb033cb2d4dff6b323c346e5043cf654875b0cc3bbe396a" gracePeriod=30 Nov 29 08:08:49 crc kubenswrapper[4795]: I1129 08:08:49.943775 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 08:08:49 crc kubenswrapper[4795]: I1129 08:08:49.944091 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="53a9feb8-d22b-48bd-9ac5-4168d88c8d78" containerName="nova-metadata-log" containerID="cri-o://125845c828337de0391d2b3bc6204768aa543b0da8e7bbf046b1716e10bf770c" gracePeriod=30 Nov 29 08:08:49 crc kubenswrapper[4795]: I1129 08:08:49.944303 4795 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="53a9feb8-d22b-48bd-9ac5-4168d88c8d78" containerName="nova-metadata-metadata" containerID="cri-o://edfe05456087ccca5e9569d1a1702b10b8f704808655ceabb74c5748607bc411" gracePeriod=30 Nov 29 08:08:50 crc kubenswrapper[4795]: I1129 08:08:50.485416 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 08:08:50 crc kubenswrapper[4795]: I1129 08:08:50.661692 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e4bde91-81cb-4873-a759-31f23757ad0d-combined-ca-bundle\") pod \"6e4bde91-81cb-4873-a759-31f23757ad0d\" (UID: \"6e4bde91-81cb-4873-a759-31f23757ad0d\") " Nov 29 08:08:50 crc kubenswrapper[4795]: I1129 08:08:50.661775 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e4bde91-81cb-4873-a759-31f23757ad0d-config-data\") pod \"6e4bde91-81cb-4873-a759-31f23757ad0d\" (UID: \"6e4bde91-81cb-4873-a759-31f23757ad0d\") " Nov 29 08:08:50 crc kubenswrapper[4795]: I1129 08:08:50.661820 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcff9\" (UniqueName: \"kubernetes.io/projected/6e4bde91-81cb-4873-a759-31f23757ad0d-kube-api-access-pcff9\") pod \"6e4bde91-81cb-4873-a759-31f23757ad0d\" (UID: \"6e4bde91-81cb-4873-a759-31f23757ad0d\") " Nov 29 08:08:50 crc kubenswrapper[4795]: I1129 08:08:50.662038 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e4bde91-81cb-4873-a759-31f23757ad0d-internal-tls-certs\") pod \"6e4bde91-81cb-4873-a759-31f23757ad0d\" (UID: \"6e4bde91-81cb-4873-a759-31f23757ad0d\") " Nov 29 08:08:50 crc kubenswrapper[4795]: I1129 08:08:50.662060 4795 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e4bde91-81cb-4873-a759-31f23757ad0d-public-tls-certs\") pod \"6e4bde91-81cb-4873-a759-31f23757ad0d\" (UID: \"6e4bde91-81cb-4873-a759-31f23757ad0d\") " Nov 29 08:08:50 crc kubenswrapper[4795]: I1129 08:08:50.662147 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e4bde91-81cb-4873-a759-31f23757ad0d-logs\") pod \"6e4bde91-81cb-4873-a759-31f23757ad0d\" (UID: \"6e4bde91-81cb-4873-a759-31f23757ad0d\") " Nov 29 08:08:50 crc kubenswrapper[4795]: I1129 08:08:50.663168 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e4bde91-81cb-4873-a759-31f23757ad0d-logs" (OuterVolumeSpecName: "logs") pod "6e4bde91-81cb-4873-a759-31f23757ad0d" (UID: "6e4bde91-81cb-4873-a759-31f23757ad0d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:08:50 crc kubenswrapper[4795]: I1129 08:08:50.670794 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e4bde91-81cb-4873-a759-31f23757ad0d-kube-api-access-pcff9" (OuterVolumeSpecName: "kube-api-access-pcff9") pod "6e4bde91-81cb-4873-a759-31f23757ad0d" (UID: "6e4bde91-81cb-4873-a759-31f23757ad0d"). InnerVolumeSpecName "kube-api-access-pcff9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:08:50 crc kubenswrapper[4795]: I1129 08:08:50.683625 4795 generic.go:334] "Generic (PLEG): container finished" podID="53a9feb8-d22b-48bd-9ac5-4168d88c8d78" containerID="125845c828337de0391d2b3bc6204768aa543b0da8e7bbf046b1716e10bf770c" exitCode=143 Nov 29 08:08:50 crc kubenswrapper[4795]: I1129 08:08:50.683729 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"53a9feb8-d22b-48bd-9ac5-4168d88c8d78","Type":"ContainerDied","Data":"125845c828337de0391d2b3bc6204768aa543b0da8e7bbf046b1716e10bf770c"} Nov 29 08:08:50 crc kubenswrapper[4795]: I1129 08:08:50.687266 4795 generic.go:334] "Generic (PLEG): container finished" podID="6e4bde91-81cb-4873-a759-31f23757ad0d" containerID="e059c975ab302e3c827b6f673a648defcd8842386808630d67f8a6ba3a74afc9" exitCode=0 Nov 29 08:08:50 crc kubenswrapper[4795]: I1129 08:08:50.687292 4795 generic.go:334] "Generic (PLEG): container finished" podID="6e4bde91-81cb-4873-a759-31f23757ad0d" containerID="fc131ab9522b96a8110256851cf82244be5cf1f4727aac56d0140635286f2311" exitCode=143 Nov 29 08:08:50 crc kubenswrapper[4795]: I1129 08:08:50.687308 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6e4bde91-81cb-4873-a759-31f23757ad0d","Type":"ContainerDied","Data":"e059c975ab302e3c827b6f673a648defcd8842386808630d67f8a6ba3a74afc9"} Nov 29 08:08:50 crc kubenswrapper[4795]: I1129 08:08:50.687325 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6e4bde91-81cb-4873-a759-31f23757ad0d","Type":"ContainerDied","Data":"fc131ab9522b96a8110256851cf82244be5cf1f4727aac56d0140635286f2311"} Nov 29 08:08:50 crc kubenswrapper[4795]: I1129 08:08:50.687336 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"6e4bde91-81cb-4873-a759-31f23757ad0d","Type":"ContainerDied","Data":"759154865bc923070b9002eb87f9b806be752aa56e0f09b58d2f9a6af6b1f972"} Nov 29 08:08:50 crc kubenswrapper[4795]: I1129 08:08:50.687351 4795 scope.go:117] "RemoveContainer" containerID="e059c975ab302e3c827b6f673a648defcd8842386808630d67f8a6ba3a74afc9" Nov 29 08:08:50 crc kubenswrapper[4795]: I1129 08:08:50.687355 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 08:08:50 crc kubenswrapper[4795]: I1129 08:08:50.706945 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e4bde91-81cb-4873-a759-31f23757ad0d-config-data" (OuterVolumeSpecName: "config-data") pod "6e4bde91-81cb-4873-a759-31f23757ad0d" (UID: "6e4bde91-81cb-4873-a759-31f23757ad0d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:50 crc kubenswrapper[4795]: I1129 08:08:50.726916 4795 scope.go:117] "RemoveContainer" containerID="fc131ab9522b96a8110256851cf82244be5cf1f4727aac56d0140635286f2311" Nov 29 08:08:50 crc kubenswrapper[4795]: I1129 08:08:50.736733 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e4bde91-81cb-4873-a759-31f23757ad0d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e4bde91-81cb-4873-a759-31f23757ad0d" (UID: "6e4bde91-81cb-4873-a759-31f23757ad0d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:50 crc kubenswrapper[4795]: I1129 08:08:50.756231 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e4bde91-81cb-4873-a759-31f23757ad0d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6e4bde91-81cb-4873-a759-31f23757ad0d" (UID: "6e4bde91-81cb-4873-a759-31f23757ad0d"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:50 crc kubenswrapper[4795]: I1129 08:08:50.757101 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e4bde91-81cb-4873-a759-31f23757ad0d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6e4bde91-81cb-4873-a759-31f23757ad0d" (UID: "6e4bde91-81cb-4873-a759-31f23757ad0d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:50 crc kubenswrapper[4795]: I1129 08:08:50.759273 4795 scope.go:117] "RemoveContainer" containerID="e059c975ab302e3c827b6f673a648defcd8842386808630d67f8a6ba3a74afc9" Nov 29 08:08:50 crc kubenswrapper[4795]: E1129 08:08:50.759930 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e059c975ab302e3c827b6f673a648defcd8842386808630d67f8a6ba3a74afc9\": container with ID starting with e059c975ab302e3c827b6f673a648defcd8842386808630d67f8a6ba3a74afc9 not found: ID does not exist" containerID="e059c975ab302e3c827b6f673a648defcd8842386808630d67f8a6ba3a74afc9" Nov 29 08:08:50 crc kubenswrapper[4795]: I1129 08:08:50.760001 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e059c975ab302e3c827b6f673a648defcd8842386808630d67f8a6ba3a74afc9"} err="failed to get container status \"e059c975ab302e3c827b6f673a648defcd8842386808630d67f8a6ba3a74afc9\": rpc error: code = NotFound desc = could not find container \"e059c975ab302e3c827b6f673a648defcd8842386808630d67f8a6ba3a74afc9\": container with ID starting with e059c975ab302e3c827b6f673a648defcd8842386808630d67f8a6ba3a74afc9 not found: ID does not exist" Nov 29 08:08:50 crc kubenswrapper[4795]: I1129 08:08:50.760038 4795 scope.go:117] "RemoveContainer" containerID="fc131ab9522b96a8110256851cf82244be5cf1f4727aac56d0140635286f2311" Nov 29 08:08:50 crc kubenswrapper[4795]: E1129 08:08:50.761487 4795 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc131ab9522b96a8110256851cf82244be5cf1f4727aac56d0140635286f2311\": container with ID starting with fc131ab9522b96a8110256851cf82244be5cf1f4727aac56d0140635286f2311 not found: ID does not exist" containerID="fc131ab9522b96a8110256851cf82244be5cf1f4727aac56d0140635286f2311" Nov 29 08:08:50 crc kubenswrapper[4795]: I1129 08:08:50.761527 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc131ab9522b96a8110256851cf82244be5cf1f4727aac56d0140635286f2311"} err="failed to get container status \"fc131ab9522b96a8110256851cf82244be5cf1f4727aac56d0140635286f2311\": rpc error: code = NotFound desc = could not find container \"fc131ab9522b96a8110256851cf82244be5cf1f4727aac56d0140635286f2311\": container with ID starting with fc131ab9522b96a8110256851cf82244be5cf1f4727aac56d0140635286f2311 not found: ID does not exist" Nov 29 08:08:50 crc kubenswrapper[4795]: I1129 08:08:50.761552 4795 scope.go:117] "RemoveContainer" containerID="e059c975ab302e3c827b6f673a648defcd8842386808630d67f8a6ba3a74afc9" Nov 29 08:08:50 crc kubenswrapper[4795]: I1129 08:08:50.762091 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e059c975ab302e3c827b6f673a648defcd8842386808630d67f8a6ba3a74afc9"} err="failed to get container status \"e059c975ab302e3c827b6f673a648defcd8842386808630d67f8a6ba3a74afc9\": rpc error: code = NotFound desc = could not find container \"e059c975ab302e3c827b6f673a648defcd8842386808630d67f8a6ba3a74afc9\": container with ID starting with e059c975ab302e3c827b6f673a648defcd8842386808630d67f8a6ba3a74afc9 not found: ID does not exist" Nov 29 08:08:50 crc kubenswrapper[4795]: I1129 08:08:50.762147 4795 scope.go:117] "RemoveContainer" containerID="fc131ab9522b96a8110256851cf82244be5cf1f4727aac56d0140635286f2311" Nov 29 08:08:50 crc kubenswrapper[4795]: I1129 08:08:50.762481 4795 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc131ab9522b96a8110256851cf82244be5cf1f4727aac56d0140635286f2311"} err="failed to get container status \"fc131ab9522b96a8110256851cf82244be5cf1f4727aac56d0140635286f2311\": rpc error: code = NotFound desc = could not find container \"fc131ab9522b96a8110256851cf82244be5cf1f4727aac56d0140635286f2311\": container with ID starting with fc131ab9522b96a8110256851cf82244be5cf1f4727aac56d0140635286f2311 not found: ID does not exist" Nov 29 08:08:50 crc kubenswrapper[4795]: I1129 08:08:50.765357 4795 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e4bde91-81cb-4873-a759-31f23757ad0d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:50 crc kubenswrapper[4795]: I1129 08:08:50.765397 4795 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e4bde91-81cb-4873-a759-31f23757ad0d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:50 crc kubenswrapper[4795]: I1129 08:08:50.765412 4795 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e4bde91-81cb-4873-a759-31f23757ad0d-logs\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:50 crc kubenswrapper[4795]: I1129 08:08:50.765426 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e4bde91-81cb-4873-a759-31f23757ad0d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:50 crc kubenswrapper[4795]: I1129 08:08:50.765436 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e4bde91-81cb-4873-a759-31f23757ad0d-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:50 crc kubenswrapper[4795]: I1129 08:08:50.765448 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcff9\" 
(UniqueName: \"kubernetes.io/projected/6e4bde91-81cb-4873-a759-31f23757ad0d-kube-api-access-pcff9\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:51 crc kubenswrapper[4795]: I1129 08:08:51.026877 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 29 08:08:51 crc kubenswrapper[4795]: I1129 08:08:51.040112 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 29 08:08:51 crc kubenswrapper[4795]: I1129 08:08:51.058960 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 29 08:08:51 crc kubenswrapper[4795]: E1129 08:08:51.059560 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8678c9b5-b7c3-4448-9051-17a60a1b92d6" containerName="dnsmasq-dns" Nov 29 08:08:51 crc kubenswrapper[4795]: I1129 08:08:51.059579 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="8678c9b5-b7c3-4448-9051-17a60a1b92d6" containerName="dnsmasq-dns" Nov 29 08:08:51 crc kubenswrapper[4795]: E1129 08:08:51.059609 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e4bde91-81cb-4873-a759-31f23757ad0d" containerName="nova-api-api" Nov 29 08:08:51 crc kubenswrapper[4795]: I1129 08:08:51.059616 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e4bde91-81cb-4873-a759-31f23757ad0d" containerName="nova-api-api" Nov 29 08:08:51 crc kubenswrapper[4795]: E1129 08:08:51.059635 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e4bde91-81cb-4873-a759-31f23757ad0d" containerName="nova-api-log" Nov 29 08:08:51 crc kubenswrapper[4795]: I1129 08:08:51.059641 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e4bde91-81cb-4873-a759-31f23757ad0d" containerName="nova-api-log" Nov 29 08:08:51 crc kubenswrapper[4795]: E1129 08:08:51.059672 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce709568-6b90-4abf-96e3-bc9369ea9296" containerName="nova-manage" Nov 29 08:08:51 crc kubenswrapper[4795]: I1129 
08:08:51.059678 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce709568-6b90-4abf-96e3-bc9369ea9296" containerName="nova-manage" Nov 29 08:08:51 crc kubenswrapper[4795]: E1129 08:08:51.059710 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8678c9b5-b7c3-4448-9051-17a60a1b92d6" containerName="init" Nov 29 08:08:51 crc kubenswrapper[4795]: I1129 08:08:51.059717 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="8678c9b5-b7c3-4448-9051-17a60a1b92d6" containerName="init" Nov 29 08:08:51 crc kubenswrapper[4795]: I1129 08:08:51.059950 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="8678c9b5-b7c3-4448-9051-17a60a1b92d6" containerName="dnsmasq-dns" Nov 29 08:08:51 crc kubenswrapper[4795]: I1129 08:08:51.059984 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce709568-6b90-4abf-96e3-bc9369ea9296" containerName="nova-manage" Nov 29 08:08:51 crc kubenswrapper[4795]: I1129 08:08:51.059996 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e4bde91-81cb-4873-a759-31f23757ad0d" containerName="nova-api-api" Nov 29 08:08:51 crc kubenswrapper[4795]: I1129 08:08:51.060010 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e4bde91-81cb-4873-a759-31f23757ad0d" containerName="nova-api-log" Nov 29 08:08:51 crc kubenswrapper[4795]: I1129 08:08:51.061353 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 29 08:08:51 crc kubenswrapper[4795]: I1129 08:08:51.063556 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 29 08:08:51 crc kubenswrapper[4795]: I1129 08:08:51.069131 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 29 08:08:51 crc kubenswrapper[4795]: I1129 08:08:51.069787 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 29 08:08:51 crc kubenswrapper[4795]: I1129 08:08:51.071525 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 29 08:08:51 crc kubenswrapper[4795]: I1129 08:08:51.174284 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f574bcc1-8e96-4c98-a600-1fcd846864d9-config-data\") pod \"nova-api-0\" (UID: \"f574bcc1-8e96-4c98-a600-1fcd846864d9\") " pod="openstack/nova-api-0" Nov 29 08:08:51 crc kubenswrapper[4795]: I1129 08:08:51.174967 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lvmg\" (UniqueName: \"kubernetes.io/projected/f574bcc1-8e96-4c98-a600-1fcd846864d9-kube-api-access-6lvmg\") pod \"nova-api-0\" (UID: \"f574bcc1-8e96-4c98-a600-1fcd846864d9\") " pod="openstack/nova-api-0" Nov 29 08:08:51 crc kubenswrapper[4795]: I1129 08:08:51.175223 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f574bcc1-8e96-4c98-a600-1fcd846864d9-logs\") pod \"nova-api-0\" (UID: \"f574bcc1-8e96-4c98-a600-1fcd846864d9\") " pod="openstack/nova-api-0" Nov 29 08:08:51 crc kubenswrapper[4795]: I1129 08:08:51.175461 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f574bcc1-8e96-4c98-a600-1fcd846864d9-public-tls-certs\") pod \"nova-api-0\" (UID: \"f574bcc1-8e96-4c98-a600-1fcd846864d9\") " pod="openstack/nova-api-0" Nov 29 08:08:51 crc kubenswrapper[4795]: I1129 08:08:51.175696 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f574bcc1-8e96-4c98-a600-1fcd846864d9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f574bcc1-8e96-4c98-a600-1fcd846864d9\") " pod="openstack/nova-api-0" Nov 29 08:08:51 crc kubenswrapper[4795]: I1129 08:08:51.175752 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f574bcc1-8e96-4c98-a600-1fcd846864d9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f574bcc1-8e96-4c98-a600-1fcd846864d9\") " pod="openstack/nova-api-0" Nov 29 08:08:51 crc kubenswrapper[4795]: I1129 08:08:51.277322 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f574bcc1-8e96-4c98-a600-1fcd846864d9-config-data\") pod \"nova-api-0\" (UID: \"f574bcc1-8e96-4c98-a600-1fcd846864d9\") " pod="openstack/nova-api-0" Nov 29 08:08:51 crc kubenswrapper[4795]: I1129 08:08:51.277451 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lvmg\" (UniqueName: \"kubernetes.io/projected/f574bcc1-8e96-4c98-a600-1fcd846864d9-kube-api-access-6lvmg\") pod \"nova-api-0\" (UID: \"f574bcc1-8e96-4c98-a600-1fcd846864d9\") " pod="openstack/nova-api-0" Nov 29 08:08:51 crc kubenswrapper[4795]: I1129 08:08:51.277485 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f574bcc1-8e96-4c98-a600-1fcd846864d9-logs\") pod \"nova-api-0\" (UID: \"f574bcc1-8e96-4c98-a600-1fcd846864d9\") " pod="openstack/nova-api-0" Nov 29 08:08:51 crc 
kubenswrapper[4795]: I1129 08:08:51.277525 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f574bcc1-8e96-4c98-a600-1fcd846864d9-public-tls-certs\") pod \"nova-api-0\" (UID: \"f574bcc1-8e96-4c98-a600-1fcd846864d9\") " pod="openstack/nova-api-0" Nov 29 08:08:51 crc kubenswrapper[4795]: I1129 08:08:51.277580 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f574bcc1-8e96-4c98-a600-1fcd846864d9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f574bcc1-8e96-4c98-a600-1fcd846864d9\") " pod="openstack/nova-api-0" Nov 29 08:08:51 crc kubenswrapper[4795]: I1129 08:08:51.277697 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f574bcc1-8e96-4c98-a600-1fcd846864d9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f574bcc1-8e96-4c98-a600-1fcd846864d9\") " pod="openstack/nova-api-0" Nov 29 08:08:51 crc kubenswrapper[4795]: I1129 08:08:51.278187 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f574bcc1-8e96-4c98-a600-1fcd846864d9-logs\") pod \"nova-api-0\" (UID: \"f574bcc1-8e96-4c98-a600-1fcd846864d9\") " pod="openstack/nova-api-0" Nov 29 08:08:51 crc kubenswrapper[4795]: I1129 08:08:51.285262 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f574bcc1-8e96-4c98-a600-1fcd846864d9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f574bcc1-8e96-4c98-a600-1fcd846864d9\") " pod="openstack/nova-api-0" Nov 29 08:08:51 crc kubenswrapper[4795]: I1129 08:08:51.285375 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f574bcc1-8e96-4c98-a600-1fcd846864d9-combined-ca-bundle\") pod \"nova-api-0\" 
(UID: \"f574bcc1-8e96-4c98-a600-1fcd846864d9\") " pod="openstack/nova-api-0" Nov 29 08:08:51 crc kubenswrapper[4795]: I1129 08:08:51.294014 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f574bcc1-8e96-4c98-a600-1fcd846864d9-config-data\") pod \"nova-api-0\" (UID: \"f574bcc1-8e96-4c98-a600-1fcd846864d9\") " pod="openstack/nova-api-0" Nov 29 08:08:51 crc kubenswrapper[4795]: I1129 08:08:51.294367 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f574bcc1-8e96-4c98-a600-1fcd846864d9-public-tls-certs\") pod \"nova-api-0\" (UID: \"f574bcc1-8e96-4c98-a600-1fcd846864d9\") " pod="openstack/nova-api-0" Nov 29 08:08:51 crc kubenswrapper[4795]: I1129 08:08:51.303358 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lvmg\" (UniqueName: \"kubernetes.io/projected/f574bcc1-8e96-4c98-a600-1fcd846864d9-kube-api-access-6lvmg\") pod \"nova-api-0\" (UID: \"f574bcc1-8e96-4c98-a600-1fcd846864d9\") " pod="openstack/nova-api-0" Nov 29 08:08:51 crc kubenswrapper[4795]: I1129 08:08:51.413222 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 29 08:08:51 crc kubenswrapper[4795]: I1129 08:08:51.913443 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 29 08:08:52 crc kubenswrapper[4795]: I1129 08:08:52.290827 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e4bde91-81cb-4873-a759-31f23757ad0d" path="/var/lib/kubelet/pods/6e4bde91-81cb-4873-a759-31f23757ad0d/volumes" Nov 29 08:08:52 crc kubenswrapper[4795]: I1129 08:08:52.715608 4795 generic.go:334] "Generic (PLEG): container finished" podID="c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d" containerID="8b8df53bdfcffe0c5cb033cb2d4dff6b323c346e5043cf654875b0cc3bbe396a" exitCode=0 Nov 29 08:08:52 crc kubenswrapper[4795]: I1129 08:08:52.715674 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d","Type":"ContainerDied","Data":"8b8df53bdfcffe0c5cb033cb2d4dff6b323c346e5043cf654875b0cc3bbe396a"} Nov 29 08:08:52 crc kubenswrapper[4795]: I1129 08:08:52.718194 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f574bcc1-8e96-4c98-a600-1fcd846864d9","Type":"ContainerStarted","Data":"cbadb7be34b28347ee6306f4256a67e1f3e817f55989d406288509205e927866"} Nov 29 08:08:52 crc kubenswrapper[4795]: I1129 08:08:52.718229 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f574bcc1-8e96-4c98-a600-1fcd846864d9","Type":"ContainerStarted","Data":"0cc03fe8b460274d24b746d3e4badb1b454a1dbbdcc273fa73ca5cdaa3802121"} Nov 29 08:08:52 crc kubenswrapper[4795]: I1129 08:08:52.718242 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f574bcc1-8e96-4c98-a600-1fcd846864d9","Type":"ContainerStarted","Data":"01f596b30d099bd3d3d142449095523da4bbf232870d2ee4ca1cdb8c11b3ad2c"} Nov 29 08:08:52 crc kubenswrapper[4795]: I1129 08:08:52.741881 4795 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.741862354 podStartE2EDuration="1.741862354s" podCreationTimestamp="2025-11-29 08:08:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:08:52.738968932 +0000 UTC m=+1778.714544722" watchObservedRunningTime="2025-11-29 08:08:52.741862354 +0000 UTC m=+1778.717438144" Nov 29 08:08:52 crc kubenswrapper[4795]: I1129 08:08:52.922128 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.022341 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drsfm\" (UniqueName: \"kubernetes.io/projected/c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d-kube-api-access-drsfm\") pod \"c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d\" (UID: \"c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d\") " Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.022435 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d-combined-ca-bundle\") pod \"c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d\" (UID: \"c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d\") " Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.022701 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d-config-data\") pod \"c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d\" (UID: \"c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d\") " Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.027306 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d-kube-api-access-drsfm" (OuterVolumeSpecName: "kube-api-access-drsfm") pod 
"c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d" (UID: "c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d"). InnerVolumeSpecName "kube-api-access-drsfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.064679 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d" (UID: "c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.078494 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d-config-data" (OuterVolumeSpecName: "config-data") pod "c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d" (UID: "c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.078804 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="53a9feb8-d22b-48bd-9ac5-4168d88c8d78" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.246:8775/\": read tcp 10.217.0.2:54580->10.217.0.246:8775: read: connection reset by peer" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.079105 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="53a9feb8-d22b-48bd-9ac5-4168d88c8d78" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.246:8775/\": read tcp 10.217.0.2:54596->10.217.0.246:8775: read: connection reset by peer" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.133120 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.133171 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drsfm\" (UniqueName: \"kubernetes.io/projected/c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d-kube-api-access-drsfm\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.133185 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.613922 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.646997 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53a9feb8-d22b-48bd-9ac5-4168d88c8d78-combined-ca-bundle\") pod \"53a9feb8-d22b-48bd-9ac5-4168d88c8d78\" (UID: \"53a9feb8-d22b-48bd-9ac5-4168d88c8d78\") " Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.647058 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/53a9feb8-d22b-48bd-9ac5-4168d88c8d78-nova-metadata-tls-certs\") pod \"53a9feb8-d22b-48bd-9ac5-4168d88c8d78\" (UID: \"53a9feb8-d22b-48bd-9ac5-4168d88c8d78\") " Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.647208 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53a9feb8-d22b-48bd-9ac5-4168d88c8d78-config-data\") pod \"53a9feb8-d22b-48bd-9ac5-4168d88c8d78\" (UID: \"53a9feb8-d22b-48bd-9ac5-4168d88c8d78\") " Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.647273 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53a9feb8-d22b-48bd-9ac5-4168d88c8d78-logs\") pod \"53a9feb8-d22b-48bd-9ac5-4168d88c8d78\" (UID: \"53a9feb8-d22b-48bd-9ac5-4168d88c8d78\") " Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.647346 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwq4k\" (UniqueName: \"kubernetes.io/projected/53a9feb8-d22b-48bd-9ac5-4168d88c8d78-kube-api-access-fwq4k\") pod \"53a9feb8-d22b-48bd-9ac5-4168d88c8d78\" (UID: \"53a9feb8-d22b-48bd-9ac5-4168d88c8d78\") " Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.647965 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/53a9feb8-d22b-48bd-9ac5-4168d88c8d78-logs" (OuterVolumeSpecName: "logs") pod "53a9feb8-d22b-48bd-9ac5-4168d88c8d78" (UID: "53a9feb8-d22b-48bd-9ac5-4168d88c8d78"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.648637 4795 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53a9feb8-d22b-48bd-9ac5-4168d88c8d78-logs\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.668415 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53a9feb8-d22b-48bd-9ac5-4168d88c8d78-kube-api-access-fwq4k" (OuterVolumeSpecName: "kube-api-access-fwq4k") pod "53a9feb8-d22b-48bd-9ac5-4168d88c8d78" (UID: "53a9feb8-d22b-48bd-9ac5-4168d88c8d78"). InnerVolumeSpecName "kube-api-access-fwq4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.688763 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53a9feb8-d22b-48bd-9ac5-4168d88c8d78-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53a9feb8-d22b-48bd-9ac5-4168d88c8d78" (UID: "53a9feb8-d22b-48bd-9ac5-4168d88c8d78"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.698945 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53a9feb8-d22b-48bd-9ac5-4168d88c8d78-config-data" (OuterVolumeSpecName: "config-data") pod "53a9feb8-d22b-48bd-9ac5-4168d88c8d78" (UID: "53a9feb8-d22b-48bd-9ac5-4168d88c8d78"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.726318 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53a9feb8-d22b-48bd-9ac5-4168d88c8d78-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "53a9feb8-d22b-48bd-9ac5-4168d88c8d78" (UID: "53a9feb8-d22b-48bd-9ac5-4168d88c8d78"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.732975 4795 generic.go:334] "Generic (PLEG): container finished" podID="53a9feb8-d22b-48bd-9ac5-4168d88c8d78" containerID="edfe05456087ccca5e9569d1a1702b10b8f704808655ceabb74c5748607bc411" exitCode=0 Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.733034 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.733046 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"53a9feb8-d22b-48bd-9ac5-4168d88c8d78","Type":"ContainerDied","Data":"edfe05456087ccca5e9569d1a1702b10b8f704808655ceabb74c5748607bc411"} Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.733135 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"53a9feb8-d22b-48bd-9ac5-4168d88c8d78","Type":"ContainerDied","Data":"f0091f01d9155a0a15c91a32b06ecdb7b4637c528dacaaf3a617bca27c8848d2"} Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.733160 4795 scope.go:117] "RemoveContainer" containerID="edfe05456087ccca5e9569d1a1702b10b8f704808655ceabb74c5748607bc411" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.736169 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d","Type":"ContainerDied","Data":"49e0060a8c4a31d20924171938a82495896f080054a854e0e4837ae407bd3cd8"} Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.737620 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.750996 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53a9feb8-d22b-48bd-9ac5-4168d88c8d78-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.751035 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwq4k\" (UniqueName: \"kubernetes.io/projected/53a9feb8-d22b-48bd-9ac5-4168d88c8d78-kube-api-access-fwq4k\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.751046 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53a9feb8-d22b-48bd-9ac5-4168d88c8d78-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.751056 4795 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/53a9feb8-d22b-48bd-9ac5-4168d88c8d78-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.761298 4795 scope.go:117] "RemoveContainer" containerID="125845c828337de0391d2b3bc6204768aa543b0da8e7bbf046b1716e10bf770c" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.788152 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.810085 4795 scope.go:117] "RemoveContainer" containerID="edfe05456087ccca5e9569d1a1702b10b8f704808655ceabb74c5748607bc411" Nov 29 08:08:53 crc kubenswrapper[4795]: E1129 08:08:53.812091 
4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edfe05456087ccca5e9569d1a1702b10b8f704808655ceabb74c5748607bc411\": container with ID starting with edfe05456087ccca5e9569d1a1702b10b8f704808655ceabb74c5748607bc411 not found: ID does not exist" containerID="edfe05456087ccca5e9569d1a1702b10b8f704808655ceabb74c5748607bc411" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.812276 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edfe05456087ccca5e9569d1a1702b10b8f704808655ceabb74c5748607bc411"} err="failed to get container status \"edfe05456087ccca5e9569d1a1702b10b8f704808655ceabb74c5748607bc411\": rpc error: code = NotFound desc = could not find container \"edfe05456087ccca5e9569d1a1702b10b8f704808655ceabb74c5748607bc411\": container with ID starting with edfe05456087ccca5e9569d1a1702b10b8f704808655ceabb74c5748607bc411 not found: ID does not exist" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.812463 4795 scope.go:117] "RemoveContainer" containerID="125845c828337de0391d2b3bc6204768aa543b0da8e7bbf046b1716e10bf770c" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.816118 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 08:08:53 crc kubenswrapper[4795]: E1129 08:08:53.816654 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"125845c828337de0391d2b3bc6204768aa543b0da8e7bbf046b1716e10bf770c\": container with ID starting with 125845c828337de0391d2b3bc6204768aa543b0da8e7bbf046b1716e10bf770c not found: ID does not exist" containerID="125845c828337de0391d2b3bc6204768aa543b0da8e7bbf046b1716e10bf770c" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.816693 4795 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"125845c828337de0391d2b3bc6204768aa543b0da8e7bbf046b1716e10bf770c"} err="failed to get container status \"125845c828337de0391d2b3bc6204768aa543b0da8e7bbf046b1716e10bf770c\": rpc error: code = NotFound desc = could not find container \"125845c828337de0391d2b3bc6204768aa543b0da8e7bbf046b1716e10bf770c\": container with ID starting with 125845c828337de0391d2b3bc6204768aa543b0da8e7bbf046b1716e10bf770c not found: ID does not exist" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.816718 4795 scope.go:117] "RemoveContainer" containerID="8b8df53bdfcffe0c5cb033cb2d4dff6b323c346e5043cf654875b0cc3bbe396a" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.836893 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.849909 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.868981 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 08:08:53 crc kubenswrapper[4795]: E1129 08:08:53.869724 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d" containerName="nova-scheduler-scheduler" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.869741 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d" containerName="nova-scheduler-scheduler" Nov 29 08:08:53 crc kubenswrapper[4795]: E1129 08:08:53.869756 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53a9feb8-d22b-48bd-9ac5-4168d88c8d78" containerName="nova-metadata-log" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.869790 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="53a9feb8-d22b-48bd-9ac5-4168d88c8d78" containerName="nova-metadata-log" Nov 29 08:08:53 crc kubenswrapper[4795]: E1129 08:08:53.869837 4795 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="53a9feb8-d22b-48bd-9ac5-4168d88c8d78" containerName="nova-metadata-metadata" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.869846 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="53a9feb8-d22b-48bd-9ac5-4168d88c8d78" containerName="nova-metadata-metadata" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.870436 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="53a9feb8-d22b-48bd-9ac5-4168d88c8d78" containerName="nova-metadata-metadata" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.870468 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d" containerName="nova-scheduler-scheduler" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.870487 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="53a9feb8-d22b-48bd-9ac5-4168d88c8d78" containerName="nova-metadata-log" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.871660 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.875194 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.883427 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.898452 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.900692 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.903025 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.903233 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.916464 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.955613 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e58a5e7-cc35-47ee-af21-e80500efd523-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7e58a5e7-cc35-47ee-af21-e80500efd523\") " pod="openstack/nova-scheduler-0" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.955712 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qg9l\" (UniqueName: \"kubernetes.io/projected/c23a0993-0a7b-4452-bdcc-a199abf1de88-kube-api-access-7qg9l\") pod \"nova-metadata-0\" (UID: \"c23a0993-0a7b-4452-bdcc-a199abf1de88\") " pod="openstack/nova-metadata-0" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.955775 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c23a0993-0a7b-4452-bdcc-a199abf1de88-config-data\") pod \"nova-metadata-0\" (UID: \"c23a0993-0a7b-4452-bdcc-a199abf1de88\") " pod="openstack/nova-metadata-0" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.955850 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e58a5e7-cc35-47ee-af21-e80500efd523-config-data\") pod 
\"nova-scheduler-0\" (UID: \"7e58a5e7-cc35-47ee-af21-e80500efd523\") " pod="openstack/nova-scheduler-0" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.955910 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c23a0993-0a7b-4452-bdcc-a199abf1de88-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c23a0993-0a7b-4452-bdcc-a199abf1de88\") " pod="openstack/nova-metadata-0" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.955993 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmr7g\" (UniqueName: \"kubernetes.io/projected/7e58a5e7-cc35-47ee-af21-e80500efd523-kube-api-access-zmr7g\") pod \"nova-scheduler-0\" (UID: \"7e58a5e7-cc35-47ee-af21-e80500efd523\") " pod="openstack/nova-scheduler-0" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.956028 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c23a0993-0a7b-4452-bdcc-a199abf1de88-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c23a0993-0a7b-4452-bdcc-a199abf1de88\") " pod="openstack/nova-metadata-0" Nov 29 08:08:53 crc kubenswrapper[4795]: I1129 08:08:53.956054 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c23a0993-0a7b-4452-bdcc-a199abf1de88-logs\") pod \"nova-metadata-0\" (UID: \"c23a0993-0a7b-4452-bdcc-a199abf1de88\") " pod="openstack/nova-metadata-0" Nov 29 08:08:54 crc kubenswrapper[4795]: I1129 08:08:54.058240 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c23a0993-0a7b-4452-bdcc-a199abf1de88-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c23a0993-0a7b-4452-bdcc-a199abf1de88\") " 
pod="openstack/nova-metadata-0" Nov 29 08:08:54 crc kubenswrapper[4795]: I1129 08:08:54.058368 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmr7g\" (UniqueName: \"kubernetes.io/projected/7e58a5e7-cc35-47ee-af21-e80500efd523-kube-api-access-zmr7g\") pod \"nova-scheduler-0\" (UID: \"7e58a5e7-cc35-47ee-af21-e80500efd523\") " pod="openstack/nova-scheduler-0" Nov 29 08:08:54 crc kubenswrapper[4795]: I1129 08:08:54.058400 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c23a0993-0a7b-4452-bdcc-a199abf1de88-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c23a0993-0a7b-4452-bdcc-a199abf1de88\") " pod="openstack/nova-metadata-0" Nov 29 08:08:54 crc kubenswrapper[4795]: I1129 08:08:54.058420 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c23a0993-0a7b-4452-bdcc-a199abf1de88-logs\") pod \"nova-metadata-0\" (UID: \"c23a0993-0a7b-4452-bdcc-a199abf1de88\") " pod="openstack/nova-metadata-0" Nov 29 08:08:54 crc kubenswrapper[4795]: I1129 08:08:54.058445 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e58a5e7-cc35-47ee-af21-e80500efd523-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7e58a5e7-cc35-47ee-af21-e80500efd523\") " pod="openstack/nova-scheduler-0" Nov 29 08:08:54 crc kubenswrapper[4795]: I1129 08:08:54.058493 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qg9l\" (UniqueName: \"kubernetes.io/projected/c23a0993-0a7b-4452-bdcc-a199abf1de88-kube-api-access-7qg9l\") pod \"nova-metadata-0\" (UID: \"c23a0993-0a7b-4452-bdcc-a199abf1de88\") " pod="openstack/nova-metadata-0" Nov 29 08:08:54 crc kubenswrapper[4795]: I1129 08:08:54.058528 4795 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c23a0993-0a7b-4452-bdcc-a199abf1de88-config-data\") pod \"nova-metadata-0\" (UID: \"c23a0993-0a7b-4452-bdcc-a199abf1de88\") " pod="openstack/nova-metadata-0" Nov 29 08:08:54 crc kubenswrapper[4795]: I1129 08:08:54.058606 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e58a5e7-cc35-47ee-af21-e80500efd523-config-data\") pod \"nova-scheduler-0\" (UID: \"7e58a5e7-cc35-47ee-af21-e80500efd523\") " pod="openstack/nova-scheduler-0" Nov 29 08:08:54 crc kubenswrapper[4795]: I1129 08:08:54.059337 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c23a0993-0a7b-4452-bdcc-a199abf1de88-logs\") pod \"nova-metadata-0\" (UID: \"c23a0993-0a7b-4452-bdcc-a199abf1de88\") " pod="openstack/nova-metadata-0" Nov 29 08:08:54 crc kubenswrapper[4795]: I1129 08:08:54.062529 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c23a0993-0a7b-4452-bdcc-a199abf1de88-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c23a0993-0a7b-4452-bdcc-a199abf1de88\") " pod="openstack/nova-metadata-0" Nov 29 08:08:54 crc kubenswrapper[4795]: I1129 08:08:54.062557 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c23a0993-0a7b-4452-bdcc-a199abf1de88-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c23a0993-0a7b-4452-bdcc-a199abf1de88\") " pod="openstack/nova-metadata-0" Nov 29 08:08:54 crc kubenswrapper[4795]: I1129 08:08:54.063543 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e58a5e7-cc35-47ee-af21-e80500efd523-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"7e58a5e7-cc35-47ee-af21-e80500efd523\") " pod="openstack/nova-scheduler-0" Nov 29 08:08:54 crc kubenswrapper[4795]: I1129 08:08:54.064359 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e58a5e7-cc35-47ee-af21-e80500efd523-config-data\") pod \"nova-scheduler-0\" (UID: \"7e58a5e7-cc35-47ee-af21-e80500efd523\") " pod="openstack/nova-scheduler-0" Nov 29 08:08:54 crc kubenswrapper[4795]: I1129 08:08:54.066089 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c23a0993-0a7b-4452-bdcc-a199abf1de88-config-data\") pod \"nova-metadata-0\" (UID: \"c23a0993-0a7b-4452-bdcc-a199abf1de88\") " pod="openstack/nova-metadata-0" Nov 29 08:08:54 crc kubenswrapper[4795]: I1129 08:08:54.077793 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qg9l\" (UniqueName: \"kubernetes.io/projected/c23a0993-0a7b-4452-bdcc-a199abf1de88-kube-api-access-7qg9l\") pod \"nova-metadata-0\" (UID: \"c23a0993-0a7b-4452-bdcc-a199abf1de88\") " pod="openstack/nova-metadata-0" Nov 29 08:08:54 crc kubenswrapper[4795]: I1129 08:08:54.081185 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmr7g\" (UniqueName: \"kubernetes.io/projected/7e58a5e7-cc35-47ee-af21-e80500efd523-kube-api-access-zmr7g\") pod \"nova-scheduler-0\" (UID: \"7e58a5e7-cc35-47ee-af21-e80500efd523\") " pod="openstack/nova-scheduler-0" Nov 29 08:08:54 crc kubenswrapper[4795]: I1129 08:08:54.217354 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 08:08:54 crc kubenswrapper[4795]: I1129 08:08:54.232004 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 08:08:54 crc kubenswrapper[4795]: I1129 08:08:54.292914 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53a9feb8-d22b-48bd-9ac5-4168d88c8d78" path="/var/lib/kubelet/pods/53a9feb8-d22b-48bd-9ac5-4168d88c8d78/volumes" Nov 29 08:08:54 crc kubenswrapper[4795]: I1129 08:08:54.293611 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d" path="/var/lib/kubelet/pods/c33ae8e1-1fb9-4053-9dcf-76b7a5bb370d/volumes" Nov 29 08:08:54 crc kubenswrapper[4795]: W1129 08:08:54.699398 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e58a5e7_cc35_47ee_af21_e80500efd523.slice/crio-1829bc3a0a62d0f1ef09f91a205a86ddaeea77b476f37603022dde6d1181e49f WatchSource:0}: Error finding container 1829bc3a0a62d0f1ef09f91a205a86ddaeea77b476f37603022dde6d1181e49f: Status 404 returned error can't find the container with id 1829bc3a0a62d0f1ef09f91a205a86ddaeea77b476f37603022dde6d1181e49f Nov 29 08:08:54 crc kubenswrapper[4795]: I1129 08:08:54.700503 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 08:08:54 crc kubenswrapper[4795]: I1129 08:08:54.748894 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7e58a5e7-cc35-47ee-af21-e80500efd523","Type":"ContainerStarted","Data":"1829bc3a0a62d0f1ef09f91a205a86ddaeea77b476f37603022dde6d1181e49f"} Nov 29 08:08:54 crc kubenswrapper[4795]: I1129 08:08:54.813852 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 08:08:54 crc kubenswrapper[4795]: W1129 08:08:54.821987 4795 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc23a0993_0a7b_4452_bdcc_a199abf1de88.slice/crio-09add6b72f4f1b1759aab596b2ffbd2352a67f463774a11db9171724817cf924 WatchSource:0}: Error finding container 09add6b72f4f1b1759aab596b2ffbd2352a67f463774a11db9171724817cf924: Status 404 returned error can't find the container with id 09add6b72f4f1b1759aab596b2ffbd2352a67f463774a11db9171724817cf924 Nov 29 08:08:55 crc kubenswrapper[4795]: I1129 08:08:55.761828 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7e58a5e7-cc35-47ee-af21-e80500efd523","Type":"ContainerStarted","Data":"9d6362b3f067381db753085804fa540e87a2bdcd66ddeaa7df15d3d778a9be74"} Nov 29 08:08:55 crc kubenswrapper[4795]: I1129 08:08:55.763276 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c23a0993-0a7b-4452-bdcc-a199abf1de88","Type":"ContainerStarted","Data":"228d24ce7f0faf617a75d2b153de0cdae5e109115b9c2f8d40ae6d48088a6e61"} Nov 29 08:08:55 crc kubenswrapper[4795]: I1129 08:08:55.763316 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c23a0993-0a7b-4452-bdcc-a199abf1de88","Type":"ContainerStarted","Data":"04c1eee8a74a6da59b7701da97c220a254dbd57791e12822b686674b80c944ad"} Nov 29 08:08:55 crc kubenswrapper[4795]: I1129 08:08:55.763326 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c23a0993-0a7b-4452-bdcc-a199abf1de88","Type":"ContainerStarted","Data":"09add6b72f4f1b1759aab596b2ffbd2352a67f463774a11db9171724817cf924"} Nov 29 08:08:55 crc kubenswrapper[4795]: I1129 08:08:55.794387 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.794357297 podStartE2EDuration="2.794357297s" podCreationTimestamp="2025-11-29 08:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:08:55.778160289 +0000 UTC m=+1781.753736079" watchObservedRunningTime="2025-11-29 08:08:55.794357297 +0000 UTC m=+1781.769933087" Nov 29 08:08:55 crc kubenswrapper[4795]: I1129 08:08:55.825297 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.825247593 podStartE2EDuration="2.825247593s" podCreationTimestamp="2025-11-29 08:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:08:55.810466154 +0000 UTC m=+1781.786041944" watchObservedRunningTime="2025-11-29 08:08:55.825247593 +0000 UTC m=+1781.800823383" Nov 29 08:08:56 crc kubenswrapper[4795]: I1129 08:08:56.276441 4795 scope.go:117] "RemoveContainer" containerID="cb788e622da5bdf804dcfa0c959dfb0158a848bc427ed90337dfd530ea78ea5d" Nov 29 08:08:56 crc kubenswrapper[4795]: E1129 08:08:56.276802 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:08:58 crc kubenswrapper[4795]: I1129 08:08:58.798670 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dccb767f-ff08-4edf-aa79-f2a09633d95d","Type":"ContainerStarted","Data":"98e445aa64d3dd40acddaaa2626d8cc5d63847edaaa9dc6bd496cc0a6f19c6d0"} Nov 29 08:08:59 crc kubenswrapper[4795]: I1129 08:08:59.218234 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 29 08:08:59 crc kubenswrapper[4795]: I1129 08:08:59.232129 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/nova-metadata-0" Nov 29 08:08:59 crc kubenswrapper[4795]: I1129 08:08:59.232166 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 29 08:09:01 crc kubenswrapper[4795]: I1129 08:09:01.414377 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 29 08:09:01 crc kubenswrapper[4795]: I1129 08:09:01.414944 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 29 08:09:02 crc kubenswrapper[4795]: I1129 08:09:02.428725 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f574bcc1-8e96-4c98-a600-1fcd846864d9" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.255:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 29 08:09:02 crc kubenswrapper[4795]: I1129 08:09:02.428762 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f574bcc1-8e96-4c98-a600-1fcd846864d9" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.255:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 29 08:09:04 crc kubenswrapper[4795]: I1129 08:09:04.218051 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 29 08:09:04 crc kubenswrapper[4795]: I1129 08:09:04.233386 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 29 08:09:04 crc kubenswrapper[4795]: I1129 08:09:04.233467 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 29 08:09:04 crc kubenswrapper[4795]: I1129 08:09:04.258055 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 29 08:09:04 crc kubenswrapper[4795]: I1129 
08:09:04.899989 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 29 08:09:05 crc kubenswrapper[4795]: I1129 08:09:05.244829 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c23a0993-0a7b-4452-bdcc-a199abf1de88" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.1:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 29 08:09:05 crc kubenswrapper[4795]: I1129 08:09:05.245126 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c23a0993-0a7b-4452-bdcc-a199abf1de88" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.1:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 29 08:09:07 crc kubenswrapper[4795]: I1129 08:09:07.904521 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"72d0f4d2-d953-4cfa-a24e-221e2cf4e994","Type":"ContainerStarted","Data":"ee97bc8feac99071eb68a042602a82d308a7e8e5198684037ba2618866cbcb5d"} Nov 29 08:09:08 crc kubenswrapper[4795]: I1129 08:09:08.280045 4795 scope.go:117] "RemoveContainer" containerID="cb788e622da5bdf804dcfa0c959dfb0158a848bc427ed90337dfd530ea78ea5d" Nov 29 08:09:08 crc kubenswrapper[4795]: E1129 08:09:08.280398 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:09:11 crc kubenswrapper[4795]: I1129 08:09:11.428821 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" 
Nov 29 08:09:11 crc kubenswrapper[4795]: I1129 08:09:11.429467 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 29 08:09:11 crc kubenswrapper[4795]: I1129 08:09:11.429922 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 29 08:09:11 crc kubenswrapper[4795]: I1129 08:09:11.429967 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 29 08:09:11 crc kubenswrapper[4795]: I1129 08:09:11.435636 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 29 08:09:11 crc kubenswrapper[4795]: I1129 08:09:11.437015 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 29 08:09:14 crc kubenswrapper[4795]: I1129 08:09:14.240490 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 29 08:09:14 crc kubenswrapper[4795]: I1129 08:09:14.252128 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 29 08:09:14 crc kubenswrapper[4795]: I1129 08:09:14.255850 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 29 08:09:15 crc kubenswrapper[4795]: I1129 08:09:15.099226 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 29 08:09:21 crc kubenswrapper[4795]: E1129 08:09:21.870351 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: Get 
\"https://cdn01.quay.io/quayio-production-s3/sha256/85/85550bcfa4cb05e41d3f6771548a6a2403918d88011d7d91cd157e9dad6777a6?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTGR23ZTE6%2F20251129%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20251129T080911Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=48e2e4958b071a700110f0ee8bc10cb9b7a306168e12797273d673b4d8d5ba64®ion=us-east-1&namespace=openstack-k8s-operators&username=openshift-release-dev+ocm_access_1b89217552bc42d1be3fb06a1aed001a&repo_name=sg-core&akamai_signature=exp=1764404651~hmac=7cdf1dfc47ed7502a2ec99a91c1f6b853f3d4327f4e13a731307ba218029a853\": net/http: TLS handshake timeout" image="quay.io/openstack-k8s-operators/sg-core:latest" Nov 29 08:09:21 crc kubenswrapper[4795]: E1129 08:09:21.871723 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:sg-core,Image:quay.io/openstack-k8s-operators/sg-core:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:sg-core-conf-yaml,ReadOnly:false,MountPath:/etc/sg-core.conf.yaml,SubPath:sg-core.conf.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lzfxx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(dccb767f-ff08-4edf-aa79-f2a09633d95d): ErrImagePull: parsing image configuration: Get \"https://cdn01.quay.io/quayio-production-s3/sha256/85/85550bcfa4cb05e41d3f6771548a6a2403918d88011d7d91cd157e9dad6777a6?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTGR23ZTE6%2F20251129%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20251129T080911Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=48e2e4958b071a700110f0ee8bc10cb9b7a306168e12797273d673b4d8d5ba64®ion=us-east-1&namespace=openstack-k8s-operators&username=openshift-release-dev+ocm_access_1b89217552bc42d1be3fb06a1aed001a&repo_name=sg-core&akamai_signature=exp=1764404651~hmac=7cdf1dfc47ed7502a2ec99a91c1f6b853f3d4327f4e13a731307ba218029a853\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 29 08:09:22 crc kubenswrapper[4795]: I1129 08:09:22.101022 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"72d0f4d2-d953-4cfa-a24e-221e2cf4e994","Type":"ContainerStarted","Data":"cafc02ffd2e71c98c39616ad76cdb228f7f2729d66f07556ce92fc337917f007"} Nov 29 08:09:22 crc kubenswrapper[4795]: I1129 08:09:22.276514 4795 scope.go:117] "RemoveContainer" containerID="cb788e622da5bdf804dcfa0c959dfb0158a848bc427ed90337dfd530ea78ea5d" Nov 29 08:09:22 crc kubenswrapper[4795]: E1129 08:09:22.276960 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:09:34 crc kubenswrapper[4795]: I1129 08:09:34.288189 4795 scope.go:117] "RemoveContainer" containerID="cb788e622da5bdf804dcfa0c959dfb0158a848bc427ed90337dfd530ea78ea5d" Nov 29 08:09:34 crc kubenswrapper[4795]: E1129 08:09:34.288967 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:09:43 crc kubenswrapper[4795]: I1129 08:09:43.344058 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"72d0f4d2-d953-4cfa-a24e-221e2cf4e994","Type":"ContainerStarted","Data":"5da0b91ad20b347b423d990141652916306ff6fda9c2ae32b780ef3cff470119"} Nov 29 08:09:44 crc kubenswrapper[4795]: E1129 08:09:44.628629 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"sg-core\" with ErrImagePull: \"parsing image configuration: Get 
\\\"https://cdn01.quay.io/quayio-production-s3/sha256/85/85550bcfa4cb05e41d3f6771548a6a2403918d88011d7d91cd157e9dad6777a6?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTGR23ZTE6%2F20251129%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20251129T080911Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=48e2e4958b071a700110f0ee8bc10cb9b7a306168e12797273d673b4d8d5ba64®ion=us-east-1&namespace=openstack-k8s-operators&username=openshift-release-dev+ocm_access_1b89217552bc42d1be3fb06a1aed001a&repo_name=sg-core&akamai_signature=exp=1764404651~hmac=7cdf1dfc47ed7502a2ec99a91c1f6b853f3d4327f4e13a731307ba218029a853\\\": net/http: TLS handshake timeout\"" pod="openstack/ceilometer-0" podUID="dccb767f-ff08-4edf-aa79-f2a09633d95d" Nov 29 08:09:45 crc kubenswrapper[4795]: I1129 08:09:45.374849 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dccb767f-ff08-4edf-aa79-f2a09633d95d","Type":"ContainerStarted","Data":"16b54466edaa31ec6ed7a5885fa76f789d137e57fbab9b266775334d23a4ee9a"} Nov 29 08:09:45 crc kubenswrapper[4795]: I1129 08:09:45.375327 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 29 08:09:45 crc kubenswrapper[4795]: I1129 08:09:45.375191 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dccb767f-ff08-4edf-aa79-f2a09633d95d" containerName="ceilometer-notification-agent" containerID="cri-o://98e445aa64d3dd40acddaaa2626d8cc5d63847edaaa9dc6bd496cc0a6f19c6d0" gracePeriod=30 Nov 29 08:09:45 crc kubenswrapper[4795]: I1129 08:09:45.375199 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dccb767f-ff08-4edf-aa79-f2a09633d95d" containerName="proxy-httpd" containerID="cri-o://16b54466edaa31ec6ed7a5885fa76f789d137e57fbab9b266775334d23a4ee9a" gracePeriod=30 Nov 29 08:09:45 crc kubenswrapper[4795]: I1129 08:09:45.374992 4795 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dccb767f-ff08-4edf-aa79-f2a09633d95d" containerName="ceilometer-central-agent" containerID="cri-o://741ceda2511c1b18cbb0c8e80cf44feda077b3ab8a1524435b0690333dd65bda" gracePeriod=30 Nov 29 08:09:46 crc kubenswrapper[4795]: I1129 08:09:46.471983 4795 generic.go:334] "Generic (PLEG): container finished" podID="dccb767f-ff08-4edf-aa79-f2a09633d95d" containerID="16b54466edaa31ec6ed7a5885fa76f789d137e57fbab9b266775334d23a4ee9a" exitCode=0 Nov 29 08:09:46 crc kubenswrapper[4795]: I1129 08:09:46.473086 4795 generic.go:334] "Generic (PLEG): container finished" podID="dccb767f-ff08-4edf-aa79-f2a09633d95d" containerID="98e445aa64d3dd40acddaaa2626d8cc5d63847edaaa9dc6bd496cc0a6f19c6d0" exitCode=0 Nov 29 08:09:46 crc kubenswrapper[4795]: I1129 08:09:46.473153 4795 generic.go:334] "Generic (PLEG): container finished" podID="dccb767f-ff08-4edf-aa79-f2a09633d95d" containerID="741ceda2511c1b18cbb0c8e80cf44feda077b3ab8a1524435b0690333dd65bda" exitCode=0 Nov 29 08:09:46 crc kubenswrapper[4795]: I1129 08:09:46.473222 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dccb767f-ff08-4edf-aa79-f2a09633d95d","Type":"ContainerDied","Data":"16b54466edaa31ec6ed7a5885fa76f789d137e57fbab9b266775334d23a4ee9a"} Nov 29 08:09:46 crc kubenswrapper[4795]: I1129 08:09:46.473295 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dccb767f-ff08-4edf-aa79-f2a09633d95d","Type":"ContainerDied","Data":"98e445aa64d3dd40acddaaa2626d8cc5d63847edaaa9dc6bd496cc0a6f19c6d0"} Nov 29 08:09:46 crc kubenswrapper[4795]: I1129 08:09:46.473353 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dccb767f-ff08-4edf-aa79-f2a09633d95d","Type":"ContainerDied","Data":"741ceda2511c1b18cbb0c8e80cf44feda077b3ab8a1524435b0690333dd65bda"} Nov 29 08:09:46 crc kubenswrapper[4795]: I1129 
08:09:46.689578 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:09:46 crc kubenswrapper[4795]: I1129 08:09:46.845269 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dccb767f-ff08-4edf-aa79-f2a09633d95d-run-httpd\") pod \"dccb767f-ff08-4edf-aa79-f2a09633d95d\" (UID: \"dccb767f-ff08-4edf-aa79-f2a09633d95d\") " Nov 29 08:09:46 crc kubenswrapper[4795]: I1129 08:09:46.845355 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzfxx\" (UniqueName: \"kubernetes.io/projected/dccb767f-ff08-4edf-aa79-f2a09633d95d-kube-api-access-lzfxx\") pod \"dccb767f-ff08-4edf-aa79-f2a09633d95d\" (UID: \"dccb767f-ff08-4edf-aa79-f2a09633d95d\") " Nov 29 08:09:46 crc kubenswrapper[4795]: I1129 08:09:46.845450 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dccb767f-ff08-4edf-aa79-f2a09633d95d-log-httpd\") pod \"dccb767f-ff08-4edf-aa79-f2a09633d95d\" (UID: \"dccb767f-ff08-4edf-aa79-f2a09633d95d\") " Nov 29 08:09:46 crc kubenswrapper[4795]: I1129 08:09:46.845641 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dccb767f-ff08-4edf-aa79-f2a09633d95d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "dccb767f-ff08-4edf-aa79-f2a09633d95d" (UID: "dccb767f-ff08-4edf-aa79-f2a09633d95d"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:09:46 crc kubenswrapper[4795]: I1129 08:09:46.845673 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dccb767f-ff08-4edf-aa79-f2a09633d95d-sg-core-conf-yaml\") pod \"dccb767f-ff08-4edf-aa79-f2a09633d95d\" (UID: \"dccb767f-ff08-4edf-aa79-f2a09633d95d\") " Nov 29 08:09:46 crc kubenswrapper[4795]: I1129 08:09:46.845810 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dccb767f-ff08-4edf-aa79-f2a09633d95d-config-data\") pod \"dccb767f-ff08-4edf-aa79-f2a09633d95d\" (UID: \"dccb767f-ff08-4edf-aa79-f2a09633d95d\") " Nov 29 08:09:46 crc kubenswrapper[4795]: I1129 08:09:46.845868 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dccb767f-ff08-4edf-aa79-f2a09633d95d-scripts\") pod \"dccb767f-ff08-4edf-aa79-f2a09633d95d\" (UID: \"dccb767f-ff08-4edf-aa79-f2a09633d95d\") " Nov 29 08:09:46 crc kubenswrapper[4795]: I1129 08:09:46.845925 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dccb767f-ff08-4edf-aa79-f2a09633d95d-combined-ca-bundle\") pod \"dccb767f-ff08-4edf-aa79-f2a09633d95d\" (UID: \"dccb767f-ff08-4edf-aa79-f2a09633d95d\") " Nov 29 08:09:46 crc kubenswrapper[4795]: I1129 08:09:46.846424 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dccb767f-ff08-4edf-aa79-f2a09633d95d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "dccb767f-ff08-4edf-aa79-f2a09633d95d" (UID: "dccb767f-ff08-4edf-aa79-f2a09633d95d"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:09:46 crc kubenswrapper[4795]: I1129 08:09:46.846987 4795 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dccb767f-ff08-4edf-aa79-f2a09633d95d-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 08:09:46 crc kubenswrapper[4795]: I1129 08:09:46.847092 4795 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dccb767f-ff08-4edf-aa79-f2a09633d95d-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 08:09:46 crc kubenswrapper[4795]: I1129 08:09:46.854176 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dccb767f-ff08-4edf-aa79-f2a09633d95d-kube-api-access-lzfxx" (OuterVolumeSpecName: "kube-api-access-lzfxx") pod "dccb767f-ff08-4edf-aa79-f2a09633d95d" (UID: "dccb767f-ff08-4edf-aa79-f2a09633d95d"). InnerVolumeSpecName "kube-api-access-lzfxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:09:46 crc kubenswrapper[4795]: I1129 08:09:46.855052 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dccb767f-ff08-4edf-aa79-f2a09633d95d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "dccb767f-ff08-4edf-aa79-f2a09633d95d" (UID: "dccb767f-ff08-4edf-aa79-f2a09633d95d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:09:46 crc kubenswrapper[4795]: I1129 08:09:46.858980 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dccb767f-ff08-4edf-aa79-f2a09633d95d-scripts" (OuterVolumeSpecName: "scripts") pod "dccb767f-ff08-4edf-aa79-f2a09633d95d" (UID: "dccb767f-ff08-4edf-aa79-f2a09633d95d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:09:46 crc kubenswrapper[4795]: I1129 08:09:46.946022 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dccb767f-ff08-4edf-aa79-f2a09633d95d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dccb767f-ff08-4edf-aa79-f2a09633d95d" (UID: "dccb767f-ff08-4edf-aa79-f2a09633d95d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:09:46 crc kubenswrapper[4795]: I1129 08:09:46.949734 4795 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dccb767f-ff08-4edf-aa79-f2a09633d95d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 29 08:09:46 crc kubenswrapper[4795]: I1129 08:09:46.949771 4795 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dccb767f-ff08-4edf-aa79-f2a09633d95d-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:09:46 crc kubenswrapper[4795]: I1129 08:09:46.949781 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dccb767f-ff08-4edf-aa79-f2a09633d95d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:09:46 crc kubenswrapper[4795]: I1129 08:09:46.949791 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzfxx\" (UniqueName: \"kubernetes.io/projected/dccb767f-ff08-4edf-aa79-f2a09633d95d-kube-api-access-lzfxx\") on node \"crc\" DevicePath \"\"" Nov 29 08:09:46 crc kubenswrapper[4795]: I1129 08:09:46.975642 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dccb767f-ff08-4edf-aa79-f2a09633d95d-config-data" (OuterVolumeSpecName: "config-data") pod "dccb767f-ff08-4edf-aa79-f2a09633d95d" (UID: "dccb767f-ff08-4edf-aa79-f2a09633d95d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:09:47 crc kubenswrapper[4795]: I1129 08:09:47.052357 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dccb767f-ff08-4edf-aa79-f2a09633d95d-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:09:47 crc kubenswrapper[4795]: I1129 08:09:47.486235 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dccb767f-ff08-4edf-aa79-f2a09633d95d","Type":"ContainerDied","Data":"aa1563d357872ebac5397cc9fa8bbac8cf46ce9ee7d1606a894cb3465d8a96b6"} Nov 29 08:09:47 crc kubenswrapper[4795]: I1129 08:09:47.486278 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:09:47 crc kubenswrapper[4795]: I1129 08:09:47.486283 4795 scope.go:117] "RemoveContainer" containerID="16b54466edaa31ec6ed7a5885fa76f789d137e57fbab9b266775334d23a4ee9a" Nov 29 08:09:47 crc kubenswrapper[4795]: I1129 08:09:47.511919 4795 scope.go:117] "RemoveContainer" containerID="98e445aa64d3dd40acddaaa2626d8cc5d63847edaaa9dc6bd496cc0a6f19c6d0" Nov 29 08:09:47 crc kubenswrapper[4795]: I1129 08:09:47.550029 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:09:47 crc kubenswrapper[4795]: I1129 08:09:47.557181 4795 scope.go:117] "RemoveContainer" containerID="741ceda2511c1b18cbb0c8e80cf44feda077b3ab8a1524435b0690333dd65bda" Nov 29 08:09:47 crc kubenswrapper[4795]: I1129 08:09:47.562720 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:09:47 crc kubenswrapper[4795]: I1129 08:09:47.585160 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:09:47 crc kubenswrapper[4795]: E1129 08:09:47.590375 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dccb767f-ff08-4edf-aa79-f2a09633d95d" containerName="proxy-httpd" Nov 29 08:09:47 crc 
kubenswrapper[4795]: I1129 08:09:47.590411 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="dccb767f-ff08-4edf-aa79-f2a09633d95d" containerName="proxy-httpd" Nov 29 08:09:47 crc kubenswrapper[4795]: E1129 08:09:47.590437 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dccb767f-ff08-4edf-aa79-f2a09633d95d" containerName="ceilometer-notification-agent" Nov 29 08:09:47 crc kubenswrapper[4795]: I1129 08:09:47.590444 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="dccb767f-ff08-4edf-aa79-f2a09633d95d" containerName="ceilometer-notification-agent" Nov 29 08:09:47 crc kubenswrapper[4795]: E1129 08:09:47.590465 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dccb767f-ff08-4edf-aa79-f2a09633d95d" containerName="ceilometer-central-agent" Nov 29 08:09:47 crc kubenswrapper[4795]: I1129 08:09:47.590471 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="dccb767f-ff08-4edf-aa79-f2a09633d95d" containerName="ceilometer-central-agent" Nov 29 08:09:47 crc kubenswrapper[4795]: I1129 08:09:47.590832 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="dccb767f-ff08-4edf-aa79-f2a09633d95d" containerName="ceilometer-notification-agent" Nov 29 08:09:47 crc kubenswrapper[4795]: I1129 08:09:47.590851 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="dccb767f-ff08-4edf-aa79-f2a09633d95d" containerName="proxy-httpd" Nov 29 08:09:47 crc kubenswrapper[4795]: I1129 08:09:47.590895 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="dccb767f-ff08-4edf-aa79-f2a09633d95d" containerName="ceilometer-central-agent" Nov 29 08:09:47 crc kubenswrapper[4795]: I1129 08:09:47.595937 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:09:47 crc kubenswrapper[4795]: I1129 08:09:47.599489 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 29 08:09:47 crc kubenswrapper[4795]: I1129 08:09:47.600194 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 29 08:09:47 crc kubenswrapper[4795]: I1129 08:09:47.620394 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:09:47 crc kubenswrapper[4795]: I1129 08:09:47.771220 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20caaf57-3b9d-4552-9aa7-6d401e00c146-run-httpd\") pod \"ceilometer-0\" (UID: \"20caaf57-3b9d-4552-9aa7-6d401e00c146\") " pod="openstack/ceilometer-0" Nov 29 08:09:47 crc kubenswrapper[4795]: I1129 08:09:47.771297 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20caaf57-3b9d-4552-9aa7-6d401e00c146-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20caaf57-3b9d-4552-9aa7-6d401e00c146\") " pod="openstack/ceilometer-0" Nov 29 08:09:47 crc kubenswrapper[4795]: I1129 08:09:47.771383 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9tz5\" (UniqueName: \"kubernetes.io/projected/20caaf57-3b9d-4552-9aa7-6d401e00c146-kube-api-access-k9tz5\") pod \"ceilometer-0\" (UID: \"20caaf57-3b9d-4552-9aa7-6d401e00c146\") " pod="openstack/ceilometer-0" Nov 29 08:09:47 crc kubenswrapper[4795]: I1129 08:09:47.771410 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20caaf57-3b9d-4552-9aa7-6d401e00c146-log-httpd\") pod \"ceilometer-0\" (UID: 
\"20caaf57-3b9d-4552-9aa7-6d401e00c146\") " pod="openstack/ceilometer-0" Nov 29 08:09:47 crc kubenswrapper[4795]: I1129 08:09:47.771444 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20caaf57-3b9d-4552-9aa7-6d401e00c146-scripts\") pod \"ceilometer-0\" (UID: \"20caaf57-3b9d-4552-9aa7-6d401e00c146\") " pod="openstack/ceilometer-0" Nov 29 08:09:47 crc kubenswrapper[4795]: I1129 08:09:47.771579 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20caaf57-3b9d-4552-9aa7-6d401e00c146-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20caaf57-3b9d-4552-9aa7-6d401e00c146\") " pod="openstack/ceilometer-0" Nov 29 08:09:47 crc kubenswrapper[4795]: I1129 08:09:47.771630 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20caaf57-3b9d-4552-9aa7-6d401e00c146-config-data\") pod \"ceilometer-0\" (UID: \"20caaf57-3b9d-4552-9aa7-6d401e00c146\") " pod="openstack/ceilometer-0" Nov 29 08:09:47 crc kubenswrapper[4795]: I1129 08:09:47.873312 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20caaf57-3b9d-4552-9aa7-6d401e00c146-run-httpd\") pod \"ceilometer-0\" (UID: \"20caaf57-3b9d-4552-9aa7-6d401e00c146\") " pod="openstack/ceilometer-0" Nov 29 08:09:47 crc kubenswrapper[4795]: I1129 08:09:47.874033 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20caaf57-3b9d-4552-9aa7-6d401e00c146-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20caaf57-3b9d-4552-9aa7-6d401e00c146\") " pod="openstack/ceilometer-0" Nov 29 08:09:47 crc kubenswrapper[4795]: I1129 08:09:47.874148 4795 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-k9tz5\" (UniqueName: \"kubernetes.io/projected/20caaf57-3b9d-4552-9aa7-6d401e00c146-kube-api-access-k9tz5\") pod \"ceilometer-0\" (UID: \"20caaf57-3b9d-4552-9aa7-6d401e00c146\") " pod="openstack/ceilometer-0" Nov 29 08:09:47 crc kubenswrapper[4795]: I1129 08:09:47.874175 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20caaf57-3b9d-4552-9aa7-6d401e00c146-log-httpd\") pod \"ceilometer-0\" (UID: \"20caaf57-3b9d-4552-9aa7-6d401e00c146\") " pod="openstack/ceilometer-0" Nov 29 08:09:47 crc kubenswrapper[4795]: I1129 08:09:47.874209 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20caaf57-3b9d-4552-9aa7-6d401e00c146-scripts\") pod \"ceilometer-0\" (UID: \"20caaf57-3b9d-4552-9aa7-6d401e00c146\") " pod="openstack/ceilometer-0" Nov 29 08:09:47 crc kubenswrapper[4795]: I1129 08:09:47.874357 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20caaf57-3b9d-4552-9aa7-6d401e00c146-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20caaf57-3b9d-4552-9aa7-6d401e00c146\") " pod="openstack/ceilometer-0" Nov 29 08:09:47 crc kubenswrapper[4795]: I1129 08:09:47.874386 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20caaf57-3b9d-4552-9aa7-6d401e00c146-config-data\") pod \"ceilometer-0\" (UID: \"20caaf57-3b9d-4552-9aa7-6d401e00c146\") " pod="openstack/ceilometer-0" Nov 29 08:09:47 crc kubenswrapper[4795]: I1129 08:09:47.873951 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20caaf57-3b9d-4552-9aa7-6d401e00c146-run-httpd\") pod \"ceilometer-0\" (UID: \"20caaf57-3b9d-4552-9aa7-6d401e00c146\") " 
pod="openstack/ceilometer-0" Nov 29 08:09:47 crc kubenswrapper[4795]: I1129 08:09:47.876077 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20caaf57-3b9d-4552-9aa7-6d401e00c146-log-httpd\") pod \"ceilometer-0\" (UID: \"20caaf57-3b9d-4552-9aa7-6d401e00c146\") " pod="openstack/ceilometer-0" Nov 29 08:09:47 crc kubenswrapper[4795]: I1129 08:09:47.879243 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20caaf57-3b9d-4552-9aa7-6d401e00c146-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20caaf57-3b9d-4552-9aa7-6d401e00c146\") " pod="openstack/ceilometer-0" Nov 29 08:09:47 crc kubenswrapper[4795]: I1129 08:09:47.879892 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20caaf57-3b9d-4552-9aa7-6d401e00c146-scripts\") pod \"ceilometer-0\" (UID: \"20caaf57-3b9d-4552-9aa7-6d401e00c146\") " pod="openstack/ceilometer-0" Nov 29 08:09:47 crc kubenswrapper[4795]: I1129 08:09:47.880378 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20caaf57-3b9d-4552-9aa7-6d401e00c146-config-data\") pod \"ceilometer-0\" (UID: \"20caaf57-3b9d-4552-9aa7-6d401e00c146\") " pod="openstack/ceilometer-0" Nov 29 08:09:47 crc kubenswrapper[4795]: I1129 08:09:47.887570 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20caaf57-3b9d-4552-9aa7-6d401e00c146-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20caaf57-3b9d-4552-9aa7-6d401e00c146\") " pod="openstack/ceilometer-0" Nov 29 08:09:47 crc kubenswrapper[4795]: I1129 08:09:47.892384 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9tz5\" (UniqueName: 
\"kubernetes.io/projected/20caaf57-3b9d-4552-9aa7-6d401e00c146-kube-api-access-k9tz5\") pod \"ceilometer-0\" (UID: \"20caaf57-3b9d-4552-9aa7-6d401e00c146\") " pod="openstack/ceilometer-0" Nov 29 08:09:47 crc kubenswrapper[4795]: I1129 08:09:47.920996 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:09:48 crc kubenswrapper[4795]: I1129 08:09:48.296055 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dccb767f-ff08-4edf-aa79-f2a09633d95d" path="/var/lib/kubelet/pods/dccb767f-ff08-4edf-aa79-f2a09633d95d/volumes" Nov 29 08:09:48 crc kubenswrapper[4795]: I1129 08:09:48.478960 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:09:48 crc kubenswrapper[4795]: I1129 08:09:48.517684 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20caaf57-3b9d-4552-9aa7-6d401e00c146","Type":"ContainerStarted","Data":"f3b25d8e925e625c2498adf4bf6dc2eae1f64e9ea9291d38b2e874035f437456"} Nov 29 08:09:49 crc kubenswrapper[4795]: I1129 08:09:49.275847 4795 scope.go:117] "RemoveContainer" containerID="cb788e622da5bdf804dcfa0c959dfb0158a848bc427ed90337dfd530ea78ea5d" Nov 29 08:09:49 crc kubenswrapper[4795]: E1129 08:09:49.276371 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:10:03 crc kubenswrapper[4795]: I1129 08:10:03.276551 4795 scope.go:117] "RemoveContainer" containerID="cb788e622da5bdf804dcfa0c959dfb0158a848bc427ed90337dfd530ea78ea5d" Nov 29 08:10:03 crc kubenswrapper[4795]: E1129 08:10:03.277352 4795 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:10:07 crc kubenswrapper[4795]: I1129 08:10:07.720821 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20caaf57-3b9d-4552-9aa7-6d401e00c146","Type":"ContainerStarted","Data":"820e9ecd95bb66dae9141cae12d913e5aa3164d569c5c9f9042dfcd45bba672d"} Nov 29 08:10:13 crc kubenswrapper[4795]: I1129 08:10:13.806549 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"72d0f4d2-d953-4cfa-a24e-221e2cf4e994","Type":"ContainerStarted","Data":"60eef7bffab2cda952c51080277affae2d2bfddacb15e17b60e8d551203f9d14"} Nov 29 08:10:13 crc kubenswrapper[4795]: I1129 08:10:13.846576 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.8230701480000002 podStartE2EDuration="1m33.846549633s" podCreationTimestamp="2025-11-29 08:08:40 +0000 UTC" firstStartedPulling="2025-11-29 08:08:41.713854314 +0000 UTC m=+1767.689430094" lastFinishedPulling="2025-11-29 08:10:12.737333789 +0000 UTC m=+1858.712909579" observedRunningTime="2025-11-29 08:10:13.825363923 +0000 UTC m=+1859.800939713" watchObservedRunningTime="2025-11-29 08:10:13.846549633 +0000 UTC m=+1859.822125433" Nov 29 08:10:16 crc kubenswrapper[4795]: I1129 08:10:16.276039 4795 scope.go:117] "RemoveContainer" containerID="cb788e622da5bdf804dcfa0c959dfb0158a848bc427ed90337dfd530ea78ea5d" Nov 29 08:10:16 crc kubenswrapper[4795]: E1129 08:10:16.276654 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:10:27 crc kubenswrapper[4795]: I1129 08:10:27.326201 4795 scope.go:117] "RemoveContainer" containerID="941249688badc0d381c329a17fc5b82e6614eecd7e35012a87826a56d60c35fa" Nov 29 08:10:27 crc kubenswrapper[4795]: I1129 08:10:27.379387 4795 scope.go:117] "RemoveContainer" containerID="9e057905edce3e13b8e32626995262dd69799bbb15d8787ce54a86cabae62b35" Nov 29 08:10:30 crc kubenswrapper[4795]: I1129 08:10:30.276023 4795 scope.go:117] "RemoveContainer" containerID="cb788e622da5bdf804dcfa0c959dfb0158a848bc427ed90337dfd530ea78ea5d" Nov 29 08:10:30 crc kubenswrapper[4795]: E1129 08:10:30.276843 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:10:36 crc kubenswrapper[4795]: I1129 08:10:36.042765 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20caaf57-3b9d-4552-9aa7-6d401e00c146","Type":"ContainerStarted","Data":"7c039fa858196ac5e45d74190c240b5125d0dc97f21d4b97dbc0ec490b32af02"} Nov 29 08:10:39 crc kubenswrapper[4795]: I1129 08:10:39.089148 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20caaf57-3b9d-4552-9aa7-6d401e00c146","Type":"ContainerStarted","Data":"7f2807e425d0554708013698a696ee0e4f440541aeeb36d4132c246f6301b4df"} Nov 29 08:10:42 crc kubenswrapper[4795]: I1129 08:10:42.277815 4795 scope.go:117] 
"RemoveContainer" containerID="cb788e622da5bdf804dcfa0c959dfb0158a848bc427ed90337dfd530ea78ea5d" Nov 29 08:10:42 crc kubenswrapper[4795]: E1129 08:10:42.278481 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:10:52 crc kubenswrapper[4795]: I1129 08:10:52.261733 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20caaf57-3b9d-4552-9aa7-6d401e00c146","Type":"ContainerStarted","Data":"76621532edcd7890cbdb6a00d6f65fd366d3d6d642855540f6af29a557db24e9"} Nov 29 08:10:52 crc kubenswrapper[4795]: I1129 08:10:52.262864 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 29 08:10:52 crc kubenswrapper[4795]: I1129 08:10:52.326450 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.37462086 podStartE2EDuration="1m5.326422861s" podCreationTimestamp="2025-11-29 08:09:47 +0000 UTC" firstStartedPulling="2025-11-29 08:09:48.468511062 +0000 UTC m=+1834.444086852" lastFinishedPulling="2025-11-29 08:10:51.420313063 +0000 UTC m=+1897.395888853" observedRunningTime="2025-11-29 08:10:52.283463964 +0000 UTC m=+1898.259039754" watchObservedRunningTime="2025-11-29 08:10:52.326422861 +0000 UTC m=+1898.301998651" Nov 29 08:10:56 crc kubenswrapper[4795]: I1129 08:10:56.416099 4795 scope.go:117] "RemoveContainer" containerID="cb788e622da5bdf804dcfa0c959dfb0158a848bc427ed90337dfd530ea78ea5d" Nov 29 08:10:56 crc kubenswrapper[4795]: E1129 08:10:56.417272 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:11:08 crc kubenswrapper[4795]: I1129 08:11:08.276606 4795 scope.go:117] "RemoveContainer" containerID="cb788e622da5bdf804dcfa0c959dfb0158a848bc427ed90337dfd530ea78ea5d" Nov 29 08:11:08 crc kubenswrapper[4795]: E1129 08:11:08.277736 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:11:17 crc kubenswrapper[4795]: I1129 08:11:17.993623 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 29 08:11:22 crc kubenswrapper[4795]: I1129 08:11:22.277633 4795 scope.go:117] "RemoveContainer" containerID="cb788e622da5bdf804dcfa0c959dfb0158a848bc427ed90337dfd530ea78ea5d" Nov 29 08:11:22 crc kubenswrapper[4795]: E1129 08:11:22.278686 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:11:22 crc kubenswrapper[4795]: I1129 08:11:22.699379 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/kube-state-metrics-0"] Nov 29 08:11:22 crc kubenswrapper[4795]: I1129 08:11:22.699611 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="1bbcaf05-b2f0-4958-b862-50bb9d1f62b5" containerName="kube-state-metrics" containerID="cri-o://168e5b9f85111702a49a66146f29355f7dbb91a35c317f7e56c85dff748d0bf6" gracePeriod=30 Nov 29 08:11:22 crc kubenswrapper[4795]: I1129 08:11:22.794525 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 29 08:11:22 crc kubenswrapper[4795]: I1129 08:11:22.794980 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="a1592ce2-fb7d-464f-b13d-09e287a45af6" containerName="mysqld-exporter" containerID="cri-o://3569a57a1c5e12b2ec026d71f27e5f63f1ca059c182a1ad8555946b9b4b42dd9" gracePeriod=30 Nov 29 08:11:22 crc kubenswrapper[4795]: I1129 08:11:22.929200 4795 generic.go:334] "Generic (PLEG): container finished" podID="1bbcaf05-b2f0-4958-b862-50bb9d1f62b5" containerID="168e5b9f85111702a49a66146f29355f7dbb91a35c317f7e56c85dff748d0bf6" exitCode=2 Nov 29 08:11:22 crc kubenswrapper[4795]: I1129 08:11:22.929248 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1bbcaf05-b2f0-4958-b862-50bb9d1f62b5","Type":"ContainerDied","Data":"168e5b9f85111702a49a66146f29355f7dbb91a35c317f7e56c85dff748d0bf6"} Nov 29 08:11:23 crc kubenswrapper[4795]: I1129 08:11:23.499559 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 29 08:11:23 crc kubenswrapper[4795]: I1129 08:11:23.597235 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 29 08:11:23 crc kubenswrapper[4795]: I1129 08:11:23.599600 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nv6vx\" (UniqueName: \"kubernetes.io/projected/1bbcaf05-b2f0-4958-b862-50bb9d1f62b5-kube-api-access-nv6vx\") pod \"1bbcaf05-b2f0-4958-b862-50bb9d1f62b5\" (UID: \"1bbcaf05-b2f0-4958-b862-50bb9d1f62b5\") " Nov 29 08:11:23 crc kubenswrapper[4795]: I1129 08:11:23.605131 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bbcaf05-b2f0-4958-b862-50bb9d1f62b5-kube-api-access-nv6vx" (OuterVolumeSpecName: "kube-api-access-nv6vx") pod "1bbcaf05-b2f0-4958-b862-50bb9d1f62b5" (UID: "1bbcaf05-b2f0-4958-b862-50bb9d1f62b5"). InnerVolumeSpecName "kube-api-access-nv6vx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:11:23 crc kubenswrapper[4795]: I1129 08:11:23.701642 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpbpt\" (UniqueName: \"kubernetes.io/projected/a1592ce2-fb7d-464f-b13d-09e287a45af6-kube-api-access-zpbpt\") pod \"a1592ce2-fb7d-464f-b13d-09e287a45af6\" (UID: \"a1592ce2-fb7d-464f-b13d-09e287a45af6\") " Nov 29 08:11:23 crc kubenswrapper[4795]: I1129 08:11:23.701926 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1592ce2-fb7d-464f-b13d-09e287a45af6-combined-ca-bundle\") pod \"a1592ce2-fb7d-464f-b13d-09e287a45af6\" (UID: \"a1592ce2-fb7d-464f-b13d-09e287a45af6\") " Nov 29 08:11:23 crc kubenswrapper[4795]: I1129 08:11:23.702035 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1592ce2-fb7d-464f-b13d-09e287a45af6-config-data\") pod \"a1592ce2-fb7d-464f-b13d-09e287a45af6\" (UID: \"a1592ce2-fb7d-464f-b13d-09e287a45af6\") " Nov 29 08:11:23 crc 
kubenswrapper[4795]: I1129 08:11:23.704289 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nv6vx\" (UniqueName: \"kubernetes.io/projected/1bbcaf05-b2f0-4958-b862-50bb9d1f62b5-kube-api-access-nv6vx\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:23 crc kubenswrapper[4795]: I1129 08:11:23.705838 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1592ce2-fb7d-464f-b13d-09e287a45af6-kube-api-access-zpbpt" (OuterVolumeSpecName: "kube-api-access-zpbpt") pod "a1592ce2-fb7d-464f-b13d-09e287a45af6" (UID: "a1592ce2-fb7d-464f-b13d-09e287a45af6"). InnerVolumeSpecName "kube-api-access-zpbpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:11:23 crc kubenswrapper[4795]: I1129 08:11:23.742364 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1592ce2-fb7d-464f-b13d-09e287a45af6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1592ce2-fb7d-464f-b13d-09e287a45af6" (UID: "a1592ce2-fb7d-464f-b13d-09e287a45af6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:11:23 crc kubenswrapper[4795]: I1129 08:11:23.777657 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1592ce2-fb7d-464f-b13d-09e287a45af6-config-data" (OuterVolumeSpecName: "config-data") pod "a1592ce2-fb7d-464f-b13d-09e287a45af6" (UID: "a1592ce2-fb7d-464f-b13d-09e287a45af6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:11:23 crc kubenswrapper[4795]: I1129 08:11:23.805933 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpbpt\" (UniqueName: \"kubernetes.io/projected/a1592ce2-fb7d-464f-b13d-09e287a45af6-kube-api-access-zpbpt\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:23 crc kubenswrapper[4795]: I1129 08:11:23.805975 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1592ce2-fb7d-464f-b13d-09e287a45af6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:23 crc kubenswrapper[4795]: I1129 08:11:23.805984 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1592ce2-fb7d-464f-b13d-09e287a45af6-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:23 crc kubenswrapper[4795]: I1129 08:11:23.941200 4795 generic.go:334] "Generic (PLEG): container finished" podID="a1592ce2-fb7d-464f-b13d-09e287a45af6" containerID="3569a57a1c5e12b2ec026d71f27e5f63f1ca059c182a1ad8555946b9b4b42dd9" exitCode=2 Nov 29 08:11:23 crc kubenswrapper[4795]: I1129 08:11:23.941271 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 29 08:11:23 crc kubenswrapper[4795]: I1129 08:11:23.941284 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"a1592ce2-fb7d-464f-b13d-09e287a45af6","Type":"ContainerDied","Data":"3569a57a1c5e12b2ec026d71f27e5f63f1ca059c182a1ad8555946b9b4b42dd9"} Nov 29 08:11:23 crc kubenswrapper[4795]: I1129 08:11:23.941334 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"a1592ce2-fb7d-464f-b13d-09e287a45af6","Type":"ContainerDied","Data":"b44c7cfc3b66e80f3820c397d817f0be2d6f243cb498317bafc51411c59480cd"} Nov 29 08:11:23 crc kubenswrapper[4795]: I1129 08:11:23.941357 4795 scope.go:117] "RemoveContainer" containerID="3569a57a1c5e12b2ec026d71f27e5f63f1ca059c182a1ad8555946b9b4b42dd9" Nov 29 08:11:23 crc kubenswrapper[4795]: I1129 08:11:23.943672 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1bbcaf05-b2f0-4958-b862-50bb9d1f62b5","Type":"ContainerDied","Data":"1e9b15ef8efeb88244e9fa6f20a0447fd7be575328209675f927989d98f26205"} Nov 29 08:11:23 crc kubenswrapper[4795]: I1129 08:11:23.943743 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 29 08:11:23 crc kubenswrapper[4795]: I1129 08:11:23.972436 4795 scope.go:117] "RemoveContainer" containerID="3569a57a1c5e12b2ec026d71f27e5f63f1ca059c182a1ad8555946b9b4b42dd9" Nov 29 08:11:23 crc kubenswrapper[4795]: E1129 08:11:23.972920 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3569a57a1c5e12b2ec026d71f27e5f63f1ca059c182a1ad8555946b9b4b42dd9\": container with ID starting with 3569a57a1c5e12b2ec026d71f27e5f63f1ca059c182a1ad8555946b9b4b42dd9 not found: ID does not exist" containerID="3569a57a1c5e12b2ec026d71f27e5f63f1ca059c182a1ad8555946b9b4b42dd9" Nov 29 08:11:23 crc kubenswrapper[4795]: I1129 08:11:23.972959 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3569a57a1c5e12b2ec026d71f27e5f63f1ca059c182a1ad8555946b9b4b42dd9"} err="failed to get container status \"3569a57a1c5e12b2ec026d71f27e5f63f1ca059c182a1ad8555946b9b4b42dd9\": rpc error: code = NotFound desc = could not find container \"3569a57a1c5e12b2ec026d71f27e5f63f1ca059c182a1ad8555946b9b4b42dd9\": container with ID starting with 3569a57a1c5e12b2ec026d71f27e5f63f1ca059c182a1ad8555946b9b4b42dd9 not found: ID does not exist" Nov 29 08:11:23 crc kubenswrapper[4795]: I1129 08:11:23.972988 4795 scope.go:117] "RemoveContainer" containerID="168e5b9f85111702a49a66146f29355f7dbb91a35c317f7e56c85dff748d0bf6" Nov 29 08:11:23 crc kubenswrapper[4795]: I1129 08:11:23.993469 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.032557 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.071388 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 
08:11:24.087058 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.099247 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 29 08:11:24 crc kubenswrapper[4795]: E1129 08:11:24.099853 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bbcaf05-b2f0-4958-b862-50bb9d1f62b5" containerName="kube-state-metrics" Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.099873 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bbcaf05-b2f0-4958-b862-50bb9d1f62b5" containerName="kube-state-metrics" Nov 29 08:11:24 crc kubenswrapper[4795]: E1129 08:11:24.099928 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1592ce2-fb7d-464f-b13d-09e287a45af6" containerName="mysqld-exporter" Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.099935 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1592ce2-fb7d-464f-b13d-09e287a45af6" containerName="mysqld-exporter" Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.100142 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1592ce2-fb7d-464f-b13d-09e287a45af6" containerName="mysqld-exporter" Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.100180 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bbcaf05-b2f0-4958-b862-50bb9d1f62b5" containerName="kube-state-metrics" Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.101754 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.106005 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.106203 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.117556 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.119365 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.121067 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.123950 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.136386 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.152704 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.219012 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/259b25c2-59e5-4ee8-bedc-23b7423bfae6-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"259b25c2-59e5-4ee8-bedc-23b7423bfae6\") " pod="openstack/kube-state-metrics-0" Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.219083 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/259b25c2-59e5-4ee8-bedc-23b7423bfae6-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"259b25c2-59e5-4ee8-bedc-23b7423bfae6\") " pod="openstack/kube-state-metrics-0" Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.219207 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8z98\" (UniqueName: \"kubernetes.io/projected/259b25c2-59e5-4ee8-bedc-23b7423bfae6-kube-api-access-h8z98\") pod \"kube-state-metrics-0\" (UID: \"259b25c2-59e5-4ee8-bedc-23b7423bfae6\") " pod="openstack/kube-state-metrics-0" Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.219269 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/259b25c2-59e5-4ee8-bedc-23b7423bfae6-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"259b25c2-59e5-4ee8-bedc-23b7423bfae6\") " pod="openstack/kube-state-metrics-0" Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.290033 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bbcaf05-b2f0-4958-b862-50bb9d1f62b5" path="/var/lib/kubelet/pods/1bbcaf05-b2f0-4958-b862-50bb9d1f62b5/volumes" Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.290674 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1592ce2-fb7d-464f-b13d-09e287a45af6" path="/var/lib/kubelet/pods/a1592ce2-fb7d-464f-b13d-09e287a45af6/volumes" Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.321330 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/259b25c2-59e5-4ee8-bedc-23b7423bfae6-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"259b25c2-59e5-4ee8-bedc-23b7423bfae6\") " pod="openstack/kube-state-metrics-0" Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.321412 4795 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/259b25c2-59e5-4ee8-bedc-23b7423bfae6-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"259b25c2-59e5-4ee8-bedc-23b7423bfae6\") " pod="openstack/kube-state-metrics-0" Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.321492 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9pmj\" (UniqueName: \"kubernetes.io/projected/1f634ca3-f70b-49c4-9cc4-54fc9a0f4cde-kube-api-access-v9pmj\") pod \"mysqld-exporter-0\" (UID: \"1f634ca3-f70b-49c4-9cc4-54fc9a0f4cde\") " pod="openstack/mysqld-exporter-0" Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.321551 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f634ca3-f70b-49c4-9cc4-54fc9a0f4cde-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"1f634ca3-f70b-49c4-9cc4-54fc9a0f4cde\") " pod="openstack/mysqld-exporter-0" Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.321944 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f634ca3-f70b-49c4-9cc4-54fc9a0f4cde-config-data\") pod \"mysqld-exporter-0\" (UID: \"1f634ca3-f70b-49c4-9cc4-54fc9a0f4cde\") " pod="openstack/mysqld-exporter-0" Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.322000 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8z98\" (UniqueName: \"kubernetes.io/projected/259b25c2-59e5-4ee8-bedc-23b7423bfae6-kube-api-access-h8z98\") pod \"kube-state-metrics-0\" (UID: \"259b25c2-59e5-4ee8-bedc-23b7423bfae6\") " pod="openstack/kube-state-metrics-0" Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.322100 4795 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/259b25c2-59e5-4ee8-bedc-23b7423bfae6-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"259b25c2-59e5-4ee8-bedc-23b7423bfae6\") " pod="openstack/kube-state-metrics-0" Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.322190 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f634ca3-f70b-49c4-9cc4-54fc9a0f4cde-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"1f634ca3-f70b-49c4-9cc4-54fc9a0f4cde\") " pod="openstack/mysqld-exporter-0" Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.334655 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/259b25c2-59e5-4ee8-bedc-23b7423bfae6-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"259b25c2-59e5-4ee8-bedc-23b7423bfae6\") " pod="openstack/kube-state-metrics-0" Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.334798 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/259b25c2-59e5-4ee8-bedc-23b7423bfae6-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"259b25c2-59e5-4ee8-bedc-23b7423bfae6\") " pod="openstack/kube-state-metrics-0" Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.335258 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/259b25c2-59e5-4ee8-bedc-23b7423bfae6-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"259b25c2-59e5-4ee8-bedc-23b7423bfae6\") " pod="openstack/kube-state-metrics-0" Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.338815 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-h8z98\" (UniqueName: \"kubernetes.io/projected/259b25c2-59e5-4ee8-bedc-23b7423bfae6-kube-api-access-h8z98\") pod \"kube-state-metrics-0\" (UID: \"259b25c2-59e5-4ee8-bedc-23b7423bfae6\") " pod="openstack/kube-state-metrics-0" Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.424611 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9pmj\" (UniqueName: \"kubernetes.io/projected/1f634ca3-f70b-49c4-9cc4-54fc9a0f4cde-kube-api-access-v9pmj\") pod \"mysqld-exporter-0\" (UID: \"1f634ca3-f70b-49c4-9cc4-54fc9a0f4cde\") " pod="openstack/mysqld-exporter-0" Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.425089 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f634ca3-f70b-49c4-9cc4-54fc9a0f4cde-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"1f634ca3-f70b-49c4-9cc4-54fc9a0f4cde\") " pod="openstack/mysqld-exporter-0" Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.425142 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f634ca3-f70b-49c4-9cc4-54fc9a0f4cde-config-data\") pod \"mysqld-exporter-0\" (UID: \"1f634ca3-f70b-49c4-9cc4-54fc9a0f4cde\") " pod="openstack/mysqld-exporter-0" Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.425225 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f634ca3-f70b-49c4-9cc4-54fc9a0f4cde-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"1f634ca3-f70b-49c4-9cc4-54fc9a0f4cde\") " pod="openstack/mysqld-exporter-0" Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.429619 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f634ca3-f70b-49c4-9cc4-54fc9a0f4cde-config-data\") pod 
\"mysqld-exporter-0\" (UID: \"1f634ca3-f70b-49c4-9cc4-54fc9a0f4cde\") " pod="openstack/mysqld-exporter-0" Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.429787 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f634ca3-f70b-49c4-9cc4-54fc9a0f4cde-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"1f634ca3-f70b-49c4-9cc4-54fc9a0f4cde\") " pod="openstack/mysqld-exporter-0" Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.430388 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f634ca3-f70b-49c4-9cc4-54fc9a0f4cde-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"1f634ca3-f70b-49c4-9cc4-54fc9a0f4cde\") " pod="openstack/mysqld-exporter-0" Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.435813 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.441400 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9pmj\" (UniqueName: \"kubernetes.io/projected/1f634ca3-f70b-49c4-9cc4-54fc9a0f4cde-kube-api-access-v9pmj\") pod \"mysqld-exporter-0\" (UID: \"1f634ca3-f70b-49c4-9cc4-54fc9a0f4cde\") " pod="openstack/mysqld-exporter-0" Nov 29 08:11:24 crc kubenswrapper[4795]: I1129 08:11:24.446635 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 29 08:11:25 crc kubenswrapper[4795]: I1129 08:11:25.024690 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:11:25 crc kubenswrapper[4795]: I1129 08:11:25.025298 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20caaf57-3b9d-4552-9aa7-6d401e00c146" containerName="ceilometer-central-agent" containerID="cri-o://820e9ecd95bb66dae9141cae12d913e5aa3164d569c5c9f9042dfcd45bba672d" gracePeriod=30 Nov 29 08:11:25 crc kubenswrapper[4795]: I1129 08:11:25.025857 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20caaf57-3b9d-4552-9aa7-6d401e00c146" containerName="proxy-httpd" containerID="cri-o://76621532edcd7890cbdb6a00d6f65fd366d3d6d642855540f6af29a557db24e9" gracePeriod=30 Nov 29 08:11:25 crc kubenswrapper[4795]: I1129 08:11:25.025929 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20caaf57-3b9d-4552-9aa7-6d401e00c146" containerName="ceilometer-notification-agent" containerID="cri-o://7c039fa858196ac5e45d74190c240b5125d0dc97f21d4b97dbc0ec490b32af02" gracePeriod=30 Nov 29 08:11:25 crc kubenswrapper[4795]: I1129 08:11:25.026067 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20caaf57-3b9d-4552-9aa7-6d401e00c146" containerName="sg-core" containerID="cri-o://7f2807e425d0554708013698a696ee0e4f440541aeeb36d4132c246f6301b4df" gracePeriod=30 Nov 29 08:11:25 crc kubenswrapper[4795]: I1129 08:11:25.055346 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 29 08:11:25 crc kubenswrapper[4795]: I1129 08:11:25.070127 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 29 08:11:25 crc kubenswrapper[4795]: I1129 08:11:25.969478 4795 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"1f634ca3-f70b-49c4-9cc4-54fc9a0f4cde","Type":"ContainerStarted","Data":"e8c7524852b2889da99a492865b1e1cb1efdaa1672d71ace4815f0e06cbf19b8"} Nov 29 08:11:25 crc kubenswrapper[4795]: I1129 08:11:25.974472 4795 generic.go:334] "Generic (PLEG): container finished" podID="20caaf57-3b9d-4552-9aa7-6d401e00c146" containerID="76621532edcd7890cbdb6a00d6f65fd366d3d6d642855540f6af29a557db24e9" exitCode=0 Nov 29 08:11:25 crc kubenswrapper[4795]: I1129 08:11:25.974505 4795 generic.go:334] "Generic (PLEG): container finished" podID="20caaf57-3b9d-4552-9aa7-6d401e00c146" containerID="7f2807e425d0554708013698a696ee0e4f440541aeeb36d4132c246f6301b4df" exitCode=2 Nov 29 08:11:25 crc kubenswrapper[4795]: I1129 08:11:25.974514 4795 generic.go:334] "Generic (PLEG): container finished" podID="20caaf57-3b9d-4552-9aa7-6d401e00c146" containerID="820e9ecd95bb66dae9141cae12d913e5aa3164d569c5c9f9042dfcd45bba672d" exitCode=0 Nov 29 08:11:25 crc kubenswrapper[4795]: I1129 08:11:25.974570 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20caaf57-3b9d-4552-9aa7-6d401e00c146","Type":"ContainerDied","Data":"76621532edcd7890cbdb6a00d6f65fd366d3d6d642855540f6af29a557db24e9"} Nov 29 08:11:25 crc kubenswrapper[4795]: I1129 08:11:25.974647 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20caaf57-3b9d-4552-9aa7-6d401e00c146","Type":"ContainerDied","Data":"7f2807e425d0554708013698a696ee0e4f440541aeeb36d4132c246f6301b4df"} Nov 29 08:11:25 crc kubenswrapper[4795]: I1129 08:11:25.974661 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20caaf57-3b9d-4552-9aa7-6d401e00c146","Type":"ContainerDied","Data":"820e9ecd95bb66dae9141cae12d913e5aa3164d569c5c9f9042dfcd45bba672d"} Nov 29 08:11:25 crc kubenswrapper[4795]: I1129 08:11:25.976989 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/kube-state-metrics-0" event={"ID":"259b25c2-59e5-4ee8-bedc-23b7423bfae6","Type":"ContainerStarted","Data":"4cc816e30b28da9646b767d6a14ebb3ba66644fa9896b4ada49a379d86122b00"} Nov 29 08:11:25 crc kubenswrapper[4795]: I1129 08:11:25.977034 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"259b25c2-59e5-4ee8-bedc-23b7423bfae6","Type":"ContainerStarted","Data":"f6a1e63d587af5f603ae02e814c06add80d561904f1394bf6d57c2bba640320d"} Nov 29 08:11:25 crc kubenswrapper[4795]: I1129 08:11:25.977147 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 29 08:11:26 crc kubenswrapper[4795]: I1129 08:11:26.000713 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.407497124 podStartE2EDuration="3.000688484s" podCreationTimestamp="2025-11-29 08:11:23 +0000 UTC" firstStartedPulling="2025-11-29 08:11:25.091058227 +0000 UTC m=+1931.066634017" lastFinishedPulling="2025-11-29 08:11:25.684249587 +0000 UTC m=+1931.659825377" observedRunningTime="2025-11-29 08:11:25.995647471 +0000 UTC m=+1931.971223261" watchObservedRunningTime="2025-11-29 08:11:26.000688484 +0000 UTC m=+1931.976264284" Nov 29 08:11:26 crc kubenswrapper[4795]: I1129 08:11:26.989543 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"1f634ca3-f70b-49c4-9cc4-54fc9a0f4cde","Type":"ContainerStarted","Data":"d125b6f2971e82c1b93fcfaed954387626740cc30d050f16a17c788e727b23e7"} Nov 29 08:11:27 crc kubenswrapper[4795]: I1129 08:11:27.014282 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.317619835 podStartE2EDuration="3.014256507s" podCreationTimestamp="2025-11-29 08:11:24 +0000 UTC" firstStartedPulling="2025-11-29 08:11:25.100630348 +0000 UTC m=+1931.076206138" lastFinishedPulling="2025-11-29 
08:11:25.79726702 +0000 UTC m=+1931.772842810" observedRunningTime="2025-11-29 08:11:27.007418164 +0000 UTC m=+1932.982993954" watchObservedRunningTime="2025-11-29 08:11:27.014256507 +0000 UTC m=+1932.989832297" Nov 29 08:11:27 crc kubenswrapper[4795]: I1129 08:11:27.628583 4795 scope.go:117] "RemoveContainer" containerID="a3eac150aab4f523499bb70dae16fc16fde8b4333aa5c440b79d3a4ff2e3923b" Nov 29 08:11:29 crc kubenswrapper[4795]: I1129 08:11:29.027574 4795 generic.go:334] "Generic (PLEG): container finished" podID="20caaf57-3b9d-4552-9aa7-6d401e00c146" containerID="7c039fa858196ac5e45d74190c240b5125d0dc97f21d4b97dbc0ec490b32af02" exitCode=0 Nov 29 08:11:29 crc kubenswrapper[4795]: I1129 08:11:29.027636 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20caaf57-3b9d-4552-9aa7-6d401e00c146","Type":"ContainerDied","Data":"7c039fa858196ac5e45d74190c240b5125d0dc97f21d4b97dbc0ec490b32af02"} Nov 29 08:11:29 crc kubenswrapper[4795]: I1129 08:11:29.168311 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:11:29 crc kubenswrapper[4795]: I1129 08:11:29.346534 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20caaf57-3b9d-4552-9aa7-6d401e00c146-config-data\") pod \"20caaf57-3b9d-4552-9aa7-6d401e00c146\" (UID: \"20caaf57-3b9d-4552-9aa7-6d401e00c146\") " Nov 29 08:11:29 crc kubenswrapper[4795]: I1129 08:11:29.347040 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20caaf57-3b9d-4552-9aa7-6d401e00c146-log-httpd\") pod \"20caaf57-3b9d-4552-9aa7-6d401e00c146\" (UID: \"20caaf57-3b9d-4552-9aa7-6d401e00c146\") " Nov 29 08:11:29 crc kubenswrapper[4795]: I1129 08:11:29.347096 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20caaf57-3b9d-4552-9aa7-6d401e00c146-combined-ca-bundle\") pod \"20caaf57-3b9d-4552-9aa7-6d401e00c146\" (UID: \"20caaf57-3b9d-4552-9aa7-6d401e00c146\") " Nov 29 08:11:29 crc kubenswrapper[4795]: I1129 08:11:29.347261 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9tz5\" (UniqueName: \"kubernetes.io/projected/20caaf57-3b9d-4552-9aa7-6d401e00c146-kube-api-access-k9tz5\") pod \"20caaf57-3b9d-4552-9aa7-6d401e00c146\" (UID: \"20caaf57-3b9d-4552-9aa7-6d401e00c146\") " Nov 29 08:11:29 crc kubenswrapper[4795]: I1129 08:11:29.347284 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20caaf57-3b9d-4552-9aa7-6d401e00c146-scripts\") pod \"20caaf57-3b9d-4552-9aa7-6d401e00c146\" (UID: \"20caaf57-3b9d-4552-9aa7-6d401e00c146\") " Nov 29 08:11:29 crc kubenswrapper[4795]: I1129 08:11:29.347311 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/20caaf57-3b9d-4552-9aa7-6d401e00c146-sg-core-conf-yaml\") pod \"20caaf57-3b9d-4552-9aa7-6d401e00c146\" (UID: \"20caaf57-3b9d-4552-9aa7-6d401e00c146\") " Nov 29 08:11:29 crc kubenswrapper[4795]: I1129 08:11:29.347356 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20caaf57-3b9d-4552-9aa7-6d401e00c146-run-httpd\") pod \"20caaf57-3b9d-4552-9aa7-6d401e00c146\" (UID: \"20caaf57-3b9d-4552-9aa7-6d401e00c146\") " Nov 29 08:11:29 crc kubenswrapper[4795]: I1129 08:11:29.348839 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20caaf57-3b9d-4552-9aa7-6d401e00c146-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "20caaf57-3b9d-4552-9aa7-6d401e00c146" (UID: "20caaf57-3b9d-4552-9aa7-6d401e00c146"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:11:29 crc kubenswrapper[4795]: I1129 08:11:29.353852 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20caaf57-3b9d-4552-9aa7-6d401e00c146-kube-api-access-k9tz5" (OuterVolumeSpecName: "kube-api-access-k9tz5") pod "20caaf57-3b9d-4552-9aa7-6d401e00c146" (UID: "20caaf57-3b9d-4552-9aa7-6d401e00c146"). InnerVolumeSpecName "kube-api-access-k9tz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:11:29 crc kubenswrapper[4795]: I1129 08:11:29.354138 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20caaf57-3b9d-4552-9aa7-6d401e00c146-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "20caaf57-3b9d-4552-9aa7-6d401e00c146" (UID: "20caaf57-3b9d-4552-9aa7-6d401e00c146"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:11:29 crc kubenswrapper[4795]: I1129 08:11:29.356135 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20caaf57-3b9d-4552-9aa7-6d401e00c146-scripts" (OuterVolumeSpecName: "scripts") pod "20caaf57-3b9d-4552-9aa7-6d401e00c146" (UID: "20caaf57-3b9d-4552-9aa7-6d401e00c146"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:11:29 crc kubenswrapper[4795]: I1129 08:11:29.404750 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20caaf57-3b9d-4552-9aa7-6d401e00c146-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "20caaf57-3b9d-4552-9aa7-6d401e00c146" (UID: "20caaf57-3b9d-4552-9aa7-6d401e00c146"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:11:29 crc kubenswrapper[4795]: I1129 08:11:29.451646 4795 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20caaf57-3b9d-4552-9aa7-6d401e00c146-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:29 crc kubenswrapper[4795]: I1129 08:11:29.451677 4795 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20caaf57-3b9d-4552-9aa7-6d401e00c146-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:29 crc kubenswrapper[4795]: I1129 08:11:29.451691 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9tz5\" (UniqueName: \"kubernetes.io/projected/20caaf57-3b9d-4552-9aa7-6d401e00c146-kube-api-access-k9tz5\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:29 crc kubenswrapper[4795]: I1129 08:11:29.451703 4795 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20caaf57-3b9d-4552-9aa7-6d401e00c146-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:29 crc 
kubenswrapper[4795]: I1129 08:11:29.451713 4795 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20caaf57-3b9d-4552-9aa7-6d401e00c146-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:29 crc kubenswrapper[4795]: I1129 08:11:29.455096 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20caaf57-3b9d-4552-9aa7-6d401e00c146-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20caaf57-3b9d-4552-9aa7-6d401e00c146" (UID: "20caaf57-3b9d-4552-9aa7-6d401e00c146"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:11:29 crc kubenswrapper[4795]: I1129 08:11:29.506756 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20caaf57-3b9d-4552-9aa7-6d401e00c146-config-data" (OuterVolumeSpecName: "config-data") pod "20caaf57-3b9d-4552-9aa7-6d401e00c146" (UID: "20caaf57-3b9d-4552-9aa7-6d401e00c146"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:11:29 crc kubenswrapper[4795]: I1129 08:11:29.553540 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20caaf57-3b9d-4552-9aa7-6d401e00c146-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:29 crc kubenswrapper[4795]: I1129 08:11:29.553578 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20caaf57-3b9d-4552-9aa7-6d401e00c146-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.042166 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20caaf57-3b9d-4552-9aa7-6d401e00c146","Type":"ContainerDied","Data":"f3b25d8e925e625c2498adf4bf6dc2eae1f64e9ea9291d38b2e874035f437456"} Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.042317 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.042568 4795 scope.go:117] "RemoveContainer" containerID="76621532edcd7890cbdb6a00d6f65fd366d3d6d642855540f6af29a557db24e9" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.078062 4795 scope.go:117] "RemoveContainer" containerID="7f2807e425d0554708013698a696ee0e4f440541aeeb36d4132c246f6301b4df" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.081381 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.091436 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.105426 4795 scope.go:117] "RemoveContainer" containerID="7c039fa858196ac5e45d74190c240b5125d0dc97f21d4b97dbc0ec490b32af02" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.112231 4795 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/ceilometer-0"] Nov 29 08:11:30 crc kubenswrapper[4795]: E1129 08:11:30.112808 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20caaf57-3b9d-4552-9aa7-6d401e00c146" containerName="sg-core" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.112828 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="20caaf57-3b9d-4552-9aa7-6d401e00c146" containerName="sg-core" Nov 29 08:11:30 crc kubenswrapper[4795]: E1129 08:11:30.112858 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20caaf57-3b9d-4552-9aa7-6d401e00c146" containerName="ceilometer-notification-agent" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.112869 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="20caaf57-3b9d-4552-9aa7-6d401e00c146" containerName="ceilometer-notification-agent" Nov 29 08:11:30 crc kubenswrapper[4795]: E1129 08:11:30.112884 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20caaf57-3b9d-4552-9aa7-6d401e00c146" containerName="proxy-httpd" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.112892 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="20caaf57-3b9d-4552-9aa7-6d401e00c146" containerName="proxy-httpd" Nov 29 08:11:30 crc kubenswrapper[4795]: E1129 08:11:30.112917 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20caaf57-3b9d-4552-9aa7-6d401e00c146" containerName="ceilometer-central-agent" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.112924 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="20caaf57-3b9d-4552-9aa7-6d401e00c146" containerName="ceilometer-central-agent" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.113166 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="20caaf57-3b9d-4552-9aa7-6d401e00c146" containerName="proxy-httpd" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.113185 4795 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="20caaf57-3b9d-4552-9aa7-6d401e00c146" containerName="ceilometer-central-agent" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.113202 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="20caaf57-3b9d-4552-9aa7-6d401e00c146" containerName="sg-core" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.113219 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="20caaf57-3b9d-4552-9aa7-6d401e00c146" containerName="ceilometer-notification-agent" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.115549 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.120512 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.120630 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.129072 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.141823 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.158202 4795 scope.go:117] "RemoveContainer" containerID="820e9ecd95bb66dae9141cae12d913e5aa3164d569c5c9f9042dfcd45bba672d" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.268899 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efbd2908-7755-45b1-8624-63c8b7a26704-run-httpd\") pod \"ceilometer-0\" (UID: \"efbd2908-7755-45b1-8624-63c8b7a26704\") " pod="openstack/ceilometer-0" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.268951 4795 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efbd2908-7755-45b1-8624-63c8b7a26704-scripts\") pod \"ceilometer-0\" (UID: \"efbd2908-7755-45b1-8624-63c8b7a26704\") " pod="openstack/ceilometer-0" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.268985 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efbd2908-7755-45b1-8624-63c8b7a26704-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"efbd2908-7755-45b1-8624-63c8b7a26704\") " pod="openstack/ceilometer-0" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.269067 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxppb\" (UniqueName: \"kubernetes.io/projected/efbd2908-7755-45b1-8624-63c8b7a26704-kube-api-access-xxppb\") pod \"ceilometer-0\" (UID: \"efbd2908-7755-45b1-8624-63c8b7a26704\") " pod="openstack/ceilometer-0" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.269137 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/efbd2908-7755-45b1-8624-63c8b7a26704-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"efbd2908-7755-45b1-8624-63c8b7a26704\") " pod="openstack/ceilometer-0" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.269246 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efbd2908-7755-45b1-8624-63c8b7a26704-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"efbd2908-7755-45b1-8624-63c8b7a26704\") " pod="openstack/ceilometer-0" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.269542 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/efbd2908-7755-45b1-8624-63c8b7a26704-log-httpd\") pod \"ceilometer-0\" (UID: \"efbd2908-7755-45b1-8624-63c8b7a26704\") " pod="openstack/ceilometer-0" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.269740 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efbd2908-7755-45b1-8624-63c8b7a26704-config-data\") pod \"ceilometer-0\" (UID: \"efbd2908-7755-45b1-8624-63c8b7a26704\") " pod="openstack/ceilometer-0" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.288240 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20caaf57-3b9d-4552-9aa7-6d401e00c146" path="/var/lib/kubelet/pods/20caaf57-3b9d-4552-9aa7-6d401e00c146/volumes" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.371412 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efbd2908-7755-45b1-8624-63c8b7a26704-log-httpd\") pod \"ceilometer-0\" (UID: \"efbd2908-7755-45b1-8624-63c8b7a26704\") " pod="openstack/ceilometer-0" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.371522 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efbd2908-7755-45b1-8624-63c8b7a26704-config-data\") pod \"ceilometer-0\" (UID: \"efbd2908-7755-45b1-8624-63c8b7a26704\") " pod="openstack/ceilometer-0" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.371573 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efbd2908-7755-45b1-8624-63c8b7a26704-run-httpd\") pod \"ceilometer-0\" (UID: \"efbd2908-7755-45b1-8624-63c8b7a26704\") " pod="openstack/ceilometer-0" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.371616 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/efbd2908-7755-45b1-8624-63c8b7a26704-scripts\") pod \"ceilometer-0\" (UID: \"efbd2908-7755-45b1-8624-63c8b7a26704\") " pod="openstack/ceilometer-0" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.371645 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efbd2908-7755-45b1-8624-63c8b7a26704-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"efbd2908-7755-45b1-8624-63c8b7a26704\") " pod="openstack/ceilometer-0" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.371695 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxppb\" (UniqueName: \"kubernetes.io/projected/efbd2908-7755-45b1-8624-63c8b7a26704-kube-api-access-xxppb\") pod \"ceilometer-0\" (UID: \"efbd2908-7755-45b1-8624-63c8b7a26704\") " pod="openstack/ceilometer-0" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.371732 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/efbd2908-7755-45b1-8624-63c8b7a26704-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"efbd2908-7755-45b1-8624-63c8b7a26704\") " pod="openstack/ceilometer-0" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.371794 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efbd2908-7755-45b1-8624-63c8b7a26704-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"efbd2908-7755-45b1-8624-63c8b7a26704\") " pod="openstack/ceilometer-0" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.372113 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efbd2908-7755-45b1-8624-63c8b7a26704-run-httpd\") pod \"ceilometer-0\" (UID: \"efbd2908-7755-45b1-8624-63c8b7a26704\") " pod="openstack/ceilometer-0" Nov 29 08:11:30 
crc kubenswrapper[4795]: I1129 08:11:30.372113 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efbd2908-7755-45b1-8624-63c8b7a26704-log-httpd\") pod \"ceilometer-0\" (UID: \"efbd2908-7755-45b1-8624-63c8b7a26704\") " pod="openstack/ceilometer-0" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.377968 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efbd2908-7755-45b1-8624-63c8b7a26704-config-data\") pod \"ceilometer-0\" (UID: \"efbd2908-7755-45b1-8624-63c8b7a26704\") " pod="openstack/ceilometer-0" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.378781 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efbd2908-7755-45b1-8624-63c8b7a26704-scripts\") pod \"ceilometer-0\" (UID: \"efbd2908-7755-45b1-8624-63c8b7a26704\") " pod="openstack/ceilometer-0" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.379765 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/efbd2908-7755-45b1-8624-63c8b7a26704-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"efbd2908-7755-45b1-8624-63c8b7a26704\") " pod="openstack/ceilometer-0" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.386497 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efbd2908-7755-45b1-8624-63c8b7a26704-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"efbd2908-7755-45b1-8624-63c8b7a26704\") " pod="openstack/ceilometer-0" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.387502 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efbd2908-7755-45b1-8624-63c8b7a26704-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"efbd2908-7755-45b1-8624-63c8b7a26704\") " pod="openstack/ceilometer-0" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.393396 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxppb\" (UniqueName: \"kubernetes.io/projected/efbd2908-7755-45b1-8624-63c8b7a26704-kube-api-access-xxppb\") pod \"ceilometer-0\" (UID: \"efbd2908-7755-45b1-8624-63c8b7a26704\") " pod="openstack/ceilometer-0" Nov 29 08:11:30 crc kubenswrapper[4795]: I1129 08:11:30.455248 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:11:31 crc kubenswrapper[4795]: I1129 08:11:31.033148 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:11:31 crc kubenswrapper[4795]: I1129 08:11:31.090810 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efbd2908-7755-45b1-8624-63c8b7a26704","Type":"ContainerStarted","Data":"f511c849a1b7ca19d04858c1de76a9f4b15f4ffca32d8e5d0a07049ae45debae"} Nov 29 08:11:32 crc kubenswrapper[4795]: I1129 08:11:32.120174 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efbd2908-7755-45b1-8624-63c8b7a26704","Type":"ContainerStarted","Data":"8538c632fd28235313631975f494bd7dc7544f2654ad4385b476cc40f667de98"} Nov 29 08:11:32 crc kubenswrapper[4795]: I1129 08:11:32.486910 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-85xp6"] Nov 29 08:11:32 crc kubenswrapper[4795]: I1129 08:11:32.502007 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-85xp6"] Nov 29 08:11:32 crc kubenswrapper[4795]: I1129 08:11:32.555805 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-whxgt"] Nov 29 08:11:32 crc kubenswrapper[4795]: I1129 08:11:32.557375 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-whxgt" Nov 29 08:11:32 crc kubenswrapper[4795]: I1129 08:11:32.590741 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-whxgt"] Nov 29 08:11:32 crc kubenswrapper[4795]: I1129 08:11:32.641116 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd7nl\" (UniqueName: \"kubernetes.io/projected/7981887b-f33c-421f-aac5-520c03b7a48a-kube-api-access-zd7nl\") pod \"heat-db-sync-whxgt\" (UID: \"7981887b-f33c-421f-aac5-520c03b7a48a\") " pod="openstack/heat-db-sync-whxgt" Nov 29 08:11:32 crc kubenswrapper[4795]: I1129 08:11:32.641324 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7981887b-f33c-421f-aac5-520c03b7a48a-combined-ca-bundle\") pod \"heat-db-sync-whxgt\" (UID: \"7981887b-f33c-421f-aac5-520c03b7a48a\") " pod="openstack/heat-db-sync-whxgt" Nov 29 08:11:32 crc kubenswrapper[4795]: I1129 08:11:32.641423 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7981887b-f33c-421f-aac5-520c03b7a48a-config-data\") pod \"heat-db-sync-whxgt\" (UID: \"7981887b-f33c-421f-aac5-520c03b7a48a\") " pod="openstack/heat-db-sync-whxgt" Nov 29 08:11:32 crc kubenswrapper[4795]: I1129 08:11:32.743780 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7981887b-f33c-421f-aac5-520c03b7a48a-combined-ca-bundle\") pod \"heat-db-sync-whxgt\" (UID: \"7981887b-f33c-421f-aac5-520c03b7a48a\") " pod="openstack/heat-db-sync-whxgt" Nov 29 08:11:32 crc kubenswrapper[4795]: I1129 08:11:32.744431 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7981887b-f33c-421f-aac5-520c03b7a48a-config-data\") pod \"heat-db-sync-whxgt\" (UID: \"7981887b-f33c-421f-aac5-520c03b7a48a\") " pod="openstack/heat-db-sync-whxgt" Nov 29 08:11:32 crc kubenswrapper[4795]: I1129 08:11:32.744886 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd7nl\" (UniqueName: \"kubernetes.io/projected/7981887b-f33c-421f-aac5-520c03b7a48a-kube-api-access-zd7nl\") pod \"heat-db-sync-whxgt\" (UID: \"7981887b-f33c-421f-aac5-520c03b7a48a\") " pod="openstack/heat-db-sync-whxgt" Nov 29 08:11:32 crc kubenswrapper[4795]: I1129 08:11:32.750714 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7981887b-f33c-421f-aac5-520c03b7a48a-combined-ca-bundle\") pod \"heat-db-sync-whxgt\" (UID: \"7981887b-f33c-421f-aac5-520c03b7a48a\") " pod="openstack/heat-db-sync-whxgt" Nov 29 08:11:32 crc kubenswrapper[4795]: I1129 08:11:32.753272 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7981887b-f33c-421f-aac5-520c03b7a48a-config-data\") pod \"heat-db-sync-whxgt\" (UID: \"7981887b-f33c-421f-aac5-520c03b7a48a\") " pod="openstack/heat-db-sync-whxgt" Nov 29 08:11:32 crc kubenswrapper[4795]: I1129 08:11:32.765108 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd7nl\" (UniqueName: \"kubernetes.io/projected/7981887b-f33c-421f-aac5-520c03b7a48a-kube-api-access-zd7nl\") pod \"heat-db-sync-whxgt\" (UID: \"7981887b-f33c-421f-aac5-520c03b7a48a\") " pod="openstack/heat-db-sync-whxgt" Nov 29 08:11:32 crc kubenswrapper[4795]: I1129 08:11:32.986489 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-whxgt" Nov 29 08:11:33 crc kubenswrapper[4795]: I1129 08:11:33.156662 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efbd2908-7755-45b1-8624-63c8b7a26704","Type":"ContainerStarted","Data":"1699fcf66fbf7f2ba77a21508b21b24ae73ce4fd2c05fd3ffe30b6c2764be760"} Nov 29 08:11:33 crc kubenswrapper[4795]: I1129 08:11:33.512247 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-whxgt"] Nov 29 08:11:33 crc kubenswrapper[4795]: W1129 08:11:33.525873 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7981887b_f33c_421f_aac5_520c03b7a48a.slice/crio-7f7545ec8106390706d66eed52e97a7228d067dfcaf015ee9e891a447dd6c13b WatchSource:0}: Error finding container 7f7545ec8106390706d66eed52e97a7228d067dfcaf015ee9e891a447dd6c13b: Status 404 returned error can't find the container with id 7f7545ec8106390706d66eed52e97a7228d067dfcaf015ee9e891a447dd6c13b Nov 29 08:11:34 crc kubenswrapper[4795]: I1129 08:11:34.177269 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-whxgt" event={"ID":"7981887b-f33c-421f-aac5-520c03b7a48a","Type":"ContainerStarted","Data":"7f7545ec8106390706d66eed52e97a7228d067dfcaf015ee9e891a447dd6c13b"} Nov 29 08:11:34 crc kubenswrapper[4795]: I1129 08:11:34.183309 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efbd2908-7755-45b1-8624-63c8b7a26704","Type":"ContainerStarted","Data":"cc7f9147c3264c7e01e4cf0536092f6ba44d439d1abc73f86856275959712599"} Nov 29 08:11:34 crc kubenswrapper[4795]: I1129 08:11:34.295768 4795 scope.go:117] "RemoveContainer" containerID="cb788e622da5bdf804dcfa0c959dfb0158a848bc427ed90337dfd530ea78ea5d" Nov 29 08:11:34 crc kubenswrapper[4795]: E1129 08:11:34.296148 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:11:34 crc kubenswrapper[4795]: I1129 08:11:34.322748 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59666d8f-35e8-4c8a-887f-0c23881547ec" path="/var/lib/kubelet/pods/59666d8f-35e8-4c8a-887f-0c23881547ec/volumes" Nov 29 08:11:34 crc kubenswrapper[4795]: I1129 08:11:34.562727 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 29 08:11:34 crc kubenswrapper[4795]: I1129 08:11:34.727875 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 29 08:11:35 crc kubenswrapper[4795]: I1129 08:11:35.216481 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 29 08:11:35 crc kubenswrapper[4795]: I1129 08:11:35.243920 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.506514063 podStartE2EDuration="5.243892225s" podCreationTimestamp="2025-11-29 08:11:30 +0000 UTC" firstStartedPulling="2025-11-29 08:11:31.045060826 +0000 UTC m=+1937.020636616" lastFinishedPulling="2025-11-29 08:11:34.782438988 +0000 UTC m=+1940.758014778" observedRunningTime="2025-11-29 08:11:35.240290253 +0000 UTC m=+1941.215866043" watchObservedRunningTime="2025-11-29 08:11:35.243892225 +0000 UTC m=+1941.219468015" Nov 29 08:11:36 crc kubenswrapper[4795]: I1129 08:11:36.119026 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 08:11:36 crc kubenswrapper[4795]: I1129 08:11:36.232159 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"efbd2908-7755-45b1-8624-63c8b7a26704","Type":"ContainerStarted","Data":"df71dc900f51af0a977018d884a53296a8c0f29eae5b954c0a30510ca57f7867"} Nov 29 08:11:37 crc kubenswrapper[4795]: I1129 08:11:37.773691 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:11:38 crc kubenswrapper[4795]: I1129 08:11:38.327085 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="efbd2908-7755-45b1-8624-63c8b7a26704" containerName="ceilometer-central-agent" containerID="cri-o://8538c632fd28235313631975f494bd7dc7544f2654ad4385b476cc40f667de98" gracePeriod=30 Nov 29 08:11:38 crc kubenswrapper[4795]: I1129 08:11:38.327605 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="efbd2908-7755-45b1-8624-63c8b7a26704" containerName="proxy-httpd" containerID="cri-o://df71dc900f51af0a977018d884a53296a8c0f29eae5b954c0a30510ca57f7867" gracePeriod=30 Nov 29 08:11:38 crc kubenswrapper[4795]: I1129 08:11:38.327669 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="efbd2908-7755-45b1-8624-63c8b7a26704" containerName="sg-core" containerID="cri-o://cc7f9147c3264c7e01e4cf0536092f6ba44d439d1abc73f86856275959712599" gracePeriod=30 Nov 29 08:11:38 crc kubenswrapper[4795]: I1129 08:11:38.327711 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="efbd2908-7755-45b1-8624-63c8b7a26704" containerName="ceilometer-notification-agent" containerID="cri-o://1699fcf66fbf7f2ba77a21508b21b24ae73ce4fd2c05fd3ffe30b6c2764be760" gracePeriod=30 Nov 29 08:11:39 crc kubenswrapper[4795]: I1129 08:11:39.405091 4795 generic.go:334] "Generic (PLEG): container finished" podID="efbd2908-7755-45b1-8624-63c8b7a26704" containerID="df71dc900f51af0a977018d884a53296a8c0f29eae5b954c0a30510ca57f7867" exitCode=0 Nov 29 08:11:39 crc kubenswrapper[4795]: 
I1129 08:11:39.405337 4795 generic.go:334] "Generic (PLEG): container finished" podID="efbd2908-7755-45b1-8624-63c8b7a26704" containerID="cc7f9147c3264c7e01e4cf0536092f6ba44d439d1abc73f86856275959712599" exitCode=2 Nov 29 08:11:39 crc kubenswrapper[4795]: I1129 08:11:39.405348 4795 generic.go:334] "Generic (PLEG): container finished" podID="efbd2908-7755-45b1-8624-63c8b7a26704" containerID="1699fcf66fbf7f2ba77a21508b21b24ae73ce4fd2c05fd3ffe30b6c2764be760" exitCode=0 Nov 29 08:11:39 crc kubenswrapper[4795]: I1129 08:11:39.405168 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efbd2908-7755-45b1-8624-63c8b7a26704","Type":"ContainerDied","Data":"df71dc900f51af0a977018d884a53296a8c0f29eae5b954c0a30510ca57f7867"} Nov 29 08:11:39 crc kubenswrapper[4795]: I1129 08:11:39.405420 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efbd2908-7755-45b1-8624-63c8b7a26704","Type":"ContainerDied","Data":"cc7f9147c3264c7e01e4cf0536092f6ba44d439d1abc73f86856275959712599"} Nov 29 08:11:39 crc kubenswrapper[4795]: I1129 08:11:39.405436 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efbd2908-7755-45b1-8624-63c8b7a26704","Type":"ContainerDied","Data":"1699fcf66fbf7f2ba77a21508b21b24ae73ce4fd2c05fd3ffe30b6c2764be760"} Nov 29 08:11:40 crc kubenswrapper[4795]: I1129 08:11:40.170390 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="74169c45-99e0-4179-a18b-07a1c2cade8b" containerName="rabbitmq" containerID="cri-o://223146acec54e1f27f623b4d80745a10b0a3e4ff9128f8cc93d3e916a4e18752" gracePeriod=604795 Nov 29 08:11:40 crc kubenswrapper[4795]: I1129 08:11:40.424561 4795 generic.go:334] "Generic (PLEG): container finished" podID="efbd2908-7755-45b1-8624-63c8b7a26704" containerID="8538c632fd28235313631975f494bd7dc7544f2654ad4385b476cc40f667de98" exitCode=0 Nov 29 08:11:40 crc 
kubenswrapper[4795]: I1129 08:11:40.424636 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efbd2908-7755-45b1-8624-63c8b7a26704","Type":"ContainerDied","Data":"8538c632fd28235313631975f494bd7dc7544f2654ad4385b476cc40f667de98"} Nov 29 08:11:41 crc kubenswrapper[4795]: I1129 08:11:41.007662 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="28c13f65-78c4-4d4a-8960-7ef17a4c93e8" containerName="rabbitmq" containerID="cri-o://5113ad63b22ce09343c250010d6d6471a16602dc3aa39754aa991fb1ca8b6f22" gracePeriod=604796 Nov 29 08:11:41 crc kubenswrapper[4795]: I1129 08:11:41.375651 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="74169c45-99e0-4179-a18b-07a1c2cade8b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Nov 29 08:11:41 crc kubenswrapper[4795]: I1129 08:11:41.832479 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="28c13f65-78c4-4d4a-8960-7ef17a4c93e8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Nov 29 08:11:47 crc kubenswrapper[4795]: I1129 08:11:47.276212 4795 scope.go:117] "RemoveContainer" containerID="cb788e622da5bdf804dcfa0c959dfb0158a848bc427ed90337dfd530ea78ea5d" Nov 29 08:11:47 crc kubenswrapper[4795]: E1129 08:11:47.277033 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:11:47 crc kubenswrapper[4795]: I1129 08:11:47.531285 
4795 generic.go:334] "Generic (PLEG): container finished" podID="74169c45-99e0-4179-a18b-07a1c2cade8b" containerID="223146acec54e1f27f623b4d80745a10b0a3e4ff9128f8cc93d3e916a4e18752" exitCode=0 Nov 29 08:11:47 crc kubenswrapper[4795]: I1129 08:11:47.531377 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"74169c45-99e0-4179-a18b-07a1c2cade8b","Type":"ContainerDied","Data":"223146acec54e1f27f623b4d80745a10b0a3e4ff9128f8cc93d3e916a4e18752"} Nov 29 08:11:47 crc kubenswrapper[4795]: I1129 08:11:47.534217 4795 generic.go:334] "Generic (PLEG): container finished" podID="28c13f65-78c4-4d4a-8960-7ef17a4c93e8" containerID="5113ad63b22ce09343c250010d6d6471a16602dc3aa39754aa991fb1ca8b6f22" exitCode=0 Nov 29 08:11:47 crc kubenswrapper[4795]: I1129 08:11:47.534276 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"28c13f65-78c4-4d4a-8960-7ef17a4c93e8","Type":"ContainerDied","Data":"5113ad63b22ce09343c250010d6d6471a16602dc3aa39754aa991fb1ca8b6f22"} Nov 29 08:11:48 crc kubenswrapper[4795]: I1129 08:11:48.636973 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-x8bmt"] Nov 29 08:11:48 crc kubenswrapper[4795]: I1129 08:11:48.639796 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-x8bmt" Nov 29 08:11:48 crc kubenswrapper[4795]: I1129 08:11:48.641862 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 29 08:11:48 crc kubenswrapper[4795]: I1129 08:11:48.664972 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-x8bmt"] Nov 29 08:11:48 crc kubenswrapper[4795]: I1129 08:11:48.737090 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/bb9518bd-e4c7-49ee-9f39-f20800f6813c-openstack-edpm-ipam\") pod \"dnsmasq-dns-594cb89c79-x8bmt\" (UID: \"bb9518bd-e4c7-49ee-9f39-f20800f6813c\") " pod="openstack/dnsmasq-dns-594cb89c79-x8bmt" Nov 29 08:11:48 crc kubenswrapper[4795]: I1129 08:11:48.737192 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bb9518bd-e4c7-49ee-9f39-f20800f6813c-ovsdbserver-nb\") pod \"dnsmasq-dns-594cb89c79-x8bmt\" (UID: \"bb9518bd-e4c7-49ee-9f39-f20800f6813c\") " pod="openstack/dnsmasq-dns-594cb89c79-x8bmt" Nov 29 08:11:48 crc kubenswrapper[4795]: I1129 08:11:48.737230 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb9518bd-e4c7-49ee-9f39-f20800f6813c-config\") pod \"dnsmasq-dns-594cb89c79-x8bmt\" (UID: \"bb9518bd-e4c7-49ee-9f39-f20800f6813c\") " pod="openstack/dnsmasq-dns-594cb89c79-x8bmt" Nov 29 08:11:48 crc kubenswrapper[4795]: I1129 08:11:48.737276 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wtsl\" (UniqueName: \"kubernetes.io/projected/bb9518bd-e4c7-49ee-9f39-f20800f6813c-kube-api-access-5wtsl\") pod \"dnsmasq-dns-594cb89c79-x8bmt\" (UID: \"bb9518bd-e4c7-49ee-9f39-f20800f6813c\") " 
pod="openstack/dnsmasq-dns-594cb89c79-x8bmt" Nov 29 08:11:48 crc kubenswrapper[4795]: I1129 08:11:48.737465 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bb9518bd-e4c7-49ee-9f39-f20800f6813c-dns-swift-storage-0\") pod \"dnsmasq-dns-594cb89c79-x8bmt\" (UID: \"bb9518bd-e4c7-49ee-9f39-f20800f6813c\") " pod="openstack/dnsmasq-dns-594cb89c79-x8bmt" Nov 29 08:11:48 crc kubenswrapper[4795]: I1129 08:11:48.737697 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb9518bd-e4c7-49ee-9f39-f20800f6813c-dns-svc\") pod \"dnsmasq-dns-594cb89c79-x8bmt\" (UID: \"bb9518bd-e4c7-49ee-9f39-f20800f6813c\") " pod="openstack/dnsmasq-dns-594cb89c79-x8bmt" Nov 29 08:11:48 crc kubenswrapper[4795]: I1129 08:11:48.737847 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb9518bd-e4c7-49ee-9f39-f20800f6813c-ovsdbserver-sb\") pod \"dnsmasq-dns-594cb89c79-x8bmt\" (UID: \"bb9518bd-e4c7-49ee-9f39-f20800f6813c\") " pod="openstack/dnsmasq-dns-594cb89c79-x8bmt" Nov 29 08:11:48 crc kubenswrapper[4795]: I1129 08:11:48.840065 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wtsl\" (UniqueName: \"kubernetes.io/projected/bb9518bd-e4c7-49ee-9f39-f20800f6813c-kube-api-access-5wtsl\") pod \"dnsmasq-dns-594cb89c79-x8bmt\" (UID: \"bb9518bd-e4c7-49ee-9f39-f20800f6813c\") " pod="openstack/dnsmasq-dns-594cb89c79-x8bmt" Nov 29 08:11:48 crc kubenswrapper[4795]: I1129 08:11:48.840561 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bb9518bd-e4c7-49ee-9f39-f20800f6813c-dns-swift-storage-0\") pod \"dnsmasq-dns-594cb89c79-x8bmt\" (UID: 
\"bb9518bd-e4c7-49ee-9f39-f20800f6813c\") " pod="openstack/dnsmasq-dns-594cb89c79-x8bmt" Nov 29 08:11:48 crc kubenswrapper[4795]: I1129 08:11:48.840653 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb9518bd-e4c7-49ee-9f39-f20800f6813c-dns-svc\") pod \"dnsmasq-dns-594cb89c79-x8bmt\" (UID: \"bb9518bd-e4c7-49ee-9f39-f20800f6813c\") " pod="openstack/dnsmasq-dns-594cb89c79-x8bmt" Nov 29 08:11:48 crc kubenswrapper[4795]: I1129 08:11:48.840702 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb9518bd-e4c7-49ee-9f39-f20800f6813c-ovsdbserver-sb\") pod \"dnsmasq-dns-594cb89c79-x8bmt\" (UID: \"bb9518bd-e4c7-49ee-9f39-f20800f6813c\") " pod="openstack/dnsmasq-dns-594cb89c79-x8bmt" Nov 29 08:11:48 crc kubenswrapper[4795]: I1129 08:11:48.840775 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/bb9518bd-e4c7-49ee-9f39-f20800f6813c-openstack-edpm-ipam\") pod \"dnsmasq-dns-594cb89c79-x8bmt\" (UID: \"bb9518bd-e4c7-49ee-9f39-f20800f6813c\") " pod="openstack/dnsmasq-dns-594cb89c79-x8bmt" Nov 29 08:11:48 crc kubenswrapper[4795]: I1129 08:11:48.840878 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bb9518bd-e4c7-49ee-9f39-f20800f6813c-ovsdbserver-nb\") pod \"dnsmasq-dns-594cb89c79-x8bmt\" (UID: \"bb9518bd-e4c7-49ee-9f39-f20800f6813c\") " pod="openstack/dnsmasq-dns-594cb89c79-x8bmt" Nov 29 08:11:48 crc kubenswrapper[4795]: I1129 08:11:48.840927 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb9518bd-e4c7-49ee-9f39-f20800f6813c-config\") pod \"dnsmasq-dns-594cb89c79-x8bmt\" (UID: \"bb9518bd-e4c7-49ee-9f39-f20800f6813c\") " 
pod="openstack/dnsmasq-dns-594cb89c79-x8bmt" Nov 29 08:11:48 crc kubenswrapper[4795]: I1129 08:11:48.841765 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb9518bd-e4c7-49ee-9f39-f20800f6813c-ovsdbserver-sb\") pod \"dnsmasq-dns-594cb89c79-x8bmt\" (UID: \"bb9518bd-e4c7-49ee-9f39-f20800f6813c\") " pod="openstack/dnsmasq-dns-594cb89c79-x8bmt" Nov 29 08:11:48 crc kubenswrapper[4795]: I1129 08:11:48.844134 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bb9518bd-e4c7-49ee-9f39-f20800f6813c-dns-swift-storage-0\") pod \"dnsmasq-dns-594cb89c79-x8bmt\" (UID: \"bb9518bd-e4c7-49ee-9f39-f20800f6813c\") " pod="openstack/dnsmasq-dns-594cb89c79-x8bmt" Nov 29 08:11:48 crc kubenswrapper[4795]: I1129 08:11:48.847682 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/bb9518bd-e4c7-49ee-9f39-f20800f6813c-openstack-edpm-ipam\") pod \"dnsmasq-dns-594cb89c79-x8bmt\" (UID: \"bb9518bd-e4c7-49ee-9f39-f20800f6813c\") " pod="openstack/dnsmasq-dns-594cb89c79-x8bmt" Nov 29 08:11:48 crc kubenswrapper[4795]: I1129 08:11:48.848885 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bb9518bd-e4c7-49ee-9f39-f20800f6813c-ovsdbserver-nb\") pod \"dnsmasq-dns-594cb89c79-x8bmt\" (UID: \"bb9518bd-e4c7-49ee-9f39-f20800f6813c\") " pod="openstack/dnsmasq-dns-594cb89c79-x8bmt" Nov 29 08:11:48 crc kubenswrapper[4795]: I1129 08:11:48.851300 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb9518bd-e4c7-49ee-9f39-f20800f6813c-config\") pod \"dnsmasq-dns-594cb89c79-x8bmt\" (UID: \"bb9518bd-e4c7-49ee-9f39-f20800f6813c\") " pod="openstack/dnsmasq-dns-594cb89c79-x8bmt" Nov 29 08:11:48 crc kubenswrapper[4795]: 
I1129 08:11:48.851481 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb9518bd-e4c7-49ee-9f39-f20800f6813c-dns-svc\") pod \"dnsmasq-dns-594cb89c79-x8bmt\" (UID: \"bb9518bd-e4c7-49ee-9f39-f20800f6813c\") " pod="openstack/dnsmasq-dns-594cb89c79-x8bmt" Nov 29 08:11:48 crc kubenswrapper[4795]: I1129 08:11:48.865559 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wtsl\" (UniqueName: \"kubernetes.io/projected/bb9518bd-e4c7-49ee-9f39-f20800f6813c-kube-api-access-5wtsl\") pod \"dnsmasq-dns-594cb89c79-x8bmt\" (UID: \"bb9518bd-e4c7-49ee-9f39-f20800f6813c\") " pod="openstack/dnsmasq-dns-594cb89c79-x8bmt" Nov 29 08:11:48 crc kubenswrapper[4795]: I1129 08:11:48.971263 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-x8bmt" Nov 29 08:11:51 crc kubenswrapper[4795]: I1129 08:11:51.374758 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="74169c45-99e0-4179-a18b-07a1c2cade8b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Nov 29 08:11:51 crc kubenswrapper[4795]: I1129 08:11:51.832750 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="28c13f65-78c4-4d4a-8960-7ef17a4c93e8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Nov 29 08:11:52 crc kubenswrapper[4795]: I1129 08:11:52.453910 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:11:52 crc kubenswrapper[4795]: I1129 08:11:52.533902 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxppb\" (UniqueName: \"kubernetes.io/projected/efbd2908-7755-45b1-8624-63c8b7a26704-kube-api-access-xxppb\") pod \"efbd2908-7755-45b1-8624-63c8b7a26704\" (UID: \"efbd2908-7755-45b1-8624-63c8b7a26704\") " Nov 29 08:11:52 crc kubenswrapper[4795]: I1129 08:11:52.533967 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/efbd2908-7755-45b1-8624-63c8b7a26704-ceilometer-tls-certs\") pod \"efbd2908-7755-45b1-8624-63c8b7a26704\" (UID: \"efbd2908-7755-45b1-8624-63c8b7a26704\") " Nov 29 08:11:52 crc kubenswrapper[4795]: I1129 08:11:52.534114 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efbd2908-7755-45b1-8624-63c8b7a26704-combined-ca-bundle\") pod \"efbd2908-7755-45b1-8624-63c8b7a26704\" (UID: \"efbd2908-7755-45b1-8624-63c8b7a26704\") " Nov 29 08:11:52 crc kubenswrapper[4795]: I1129 08:11:52.534205 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efbd2908-7755-45b1-8624-63c8b7a26704-run-httpd\") pod \"efbd2908-7755-45b1-8624-63c8b7a26704\" (UID: \"efbd2908-7755-45b1-8624-63c8b7a26704\") " Nov 29 08:11:52 crc kubenswrapper[4795]: I1129 08:11:52.534346 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efbd2908-7755-45b1-8624-63c8b7a26704-sg-core-conf-yaml\") pod \"efbd2908-7755-45b1-8624-63c8b7a26704\" (UID: \"efbd2908-7755-45b1-8624-63c8b7a26704\") " Nov 29 08:11:52 crc kubenswrapper[4795]: I1129 08:11:52.534398 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/efbd2908-7755-45b1-8624-63c8b7a26704-config-data\") pod \"efbd2908-7755-45b1-8624-63c8b7a26704\" (UID: \"efbd2908-7755-45b1-8624-63c8b7a26704\") " Nov 29 08:11:52 crc kubenswrapper[4795]: I1129 08:11:52.534484 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efbd2908-7755-45b1-8624-63c8b7a26704-scripts\") pod \"efbd2908-7755-45b1-8624-63c8b7a26704\" (UID: \"efbd2908-7755-45b1-8624-63c8b7a26704\") " Nov 29 08:11:52 crc kubenswrapper[4795]: I1129 08:11:52.534508 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efbd2908-7755-45b1-8624-63c8b7a26704-log-httpd\") pod \"efbd2908-7755-45b1-8624-63c8b7a26704\" (UID: \"efbd2908-7755-45b1-8624-63c8b7a26704\") " Nov 29 08:11:52 crc kubenswrapper[4795]: I1129 08:11:52.534732 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efbd2908-7755-45b1-8624-63c8b7a26704-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "efbd2908-7755-45b1-8624-63c8b7a26704" (UID: "efbd2908-7755-45b1-8624-63c8b7a26704"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:11:52 crc kubenswrapper[4795]: I1129 08:11:52.535030 4795 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efbd2908-7755-45b1-8624-63c8b7a26704-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:52 crc kubenswrapper[4795]: I1129 08:11:52.536964 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efbd2908-7755-45b1-8624-63c8b7a26704-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "efbd2908-7755-45b1-8624-63c8b7a26704" (UID: "efbd2908-7755-45b1-8624-63c8b7a26704"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:11:52 crc kubenswrapper[4795]: I1129 08:11:52.555504 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efbd2908-7755-45b1-8624-63c8b7a26704-scripts" (OuterVolumeSpecName: "scripts") pod "efbd2908-7755-45b1-8624-63c8b7a26704" (UID: "efbd2908-7755-45b1-8624-63c8b7a26704"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:11:52 crc kubenswrapper[4795]: I1129 08:11:52.583372 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efbd2908-7755-45b1-8624-63c8b7a26704-kube-api-access-xxppb" (OuterVolumeSpecName: "kube-api-access-xxppb") pod "efbd2908-7755-45b1-8624-63c8b7a26704" (UID: "efbd2908-7755-45b1-8624-63c8b7a26704"). InnerVolumeSpecName "kube-api-access-xxppb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:11:52 crc kubenswrapper[4795]: I1129 08:11:52.583581 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efbd2908-7755-45b1-8624-63c8b7a26704-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "efbd2908-7755-45b1-8624-63c8b7a26704" (UID: "efbd2908-7755-45b1-8624-63c8b7a26704"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:11:52 crc kubenswrapper[4795]: I1129 08:11:52.637747 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxppb\" (UniqueName: \"kubernetes.io/projected/efbd2908-7755-45b1-8624-63c8b7a26704-kube-api-access-xxppb\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:52 crc kubenswrapper[4795]: I1129 08:11:52.638043 4795 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efbd2908-7755-45b1-8624-63c8b7a26704-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:52 crc kubenswrapper[4795]: I1129 08:11:52.638163 4795 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efbd2908-7755-45b1-8624-63c8b7a26704-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:52 crc kubenswrapper[4795]: I1129 08:11:52.638224 4795 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efbd2908-7755-45b1-8624-63c8b7a26704-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:52 crc kubenswrapper[4795]: I1129 08:11:52.653976 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efbd2908-7755-45b1-8624-63c8b7a26704","Type":"ContainerDied","Data":"f511c849a1b7ca19d04858c1de76a9f4b15f4ffca32d8e5d0a07049ae45debae"} Nov 29 08:11:52 crc kubenswrapper[4795]: I1129 08:11:52.654026 4795 scope.go:117] "RemoveContainer" containerID="df71dc900f51af0a977018d884a53296a8c0f29eae5b954c0a30510ca57f7867" Nov 29 08:11:52 crc kubenswrapper[4795]: I1129 08:11:52.654227 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:11:52 crc kubenswrapper[4795]: I1129 08:11:52.752691 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efbd2908-7755-45b1-8624-63c8b7a26704-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "efbd2908-7755-45b1-8624-63c8b7a26704" (UID: "efbd2908-7755-45b1-8624-63c8b7a26704"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:11:52 crc kubenswrapper[4795]: I1129 08:11:52.753629 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efbd2908-7755-45b1-8624-63c8b7a26704-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "efbd2908-7755-45b1-8624-63c8b7a26704" (UID: "efbd2908-7755-45b1-8624-63c8b7a26704"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:11:52 crc kubenswrapper[4795]: I1129 08:11:52.785495 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efbd2908-7755-45b1-8624-63c8b7a26704-config-data" (OuterVolumeSpecName: "config-data") pod "efbd2908-7755-45b1-8624-63c8b7a26704" (UID: "efbd2908-7755-45b1-8624-63c8b7a26704"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:11:52 crc kubenswrapper[4795]: I1129 08:11:52.842963 4795 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/efbd2908-7755-45b1-8624-63c8b7a26704-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:52 crc kubenswrapper[4795]: I1129 08:11:52.843559 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efbd2908-7755-45b1-8624-63c8b7a26704-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:52 crc kubenswrapper[4795]: I1129 08:11:52.843583 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efbd2908-7755-45b1-8624-63c8b7a26704-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.008674 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.026163 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.040121 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:11:53 crc kubenswrapper[4795]: E1129 08:11:53.040644 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efbd2908-7755-45b1-8624-63c8b7a26704" containerName="sg-core" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.040660 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="efbd2908-7755-45b1-8624-63c8b7a26704" containerName="sg-core" Nov 29 08:11:53 crc kubenswrapper[4795]: E1129 08:11:53.040670 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efbd2908-7755-45b1-8624-63c8b7a26704" containerName="ceilometer-notification-agent" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.040677 4795 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="efbd2908-7755-45b1-8624-63c8b7a26704" containerName="ceilometer-notification-agent" Nov 29 08:11:53 crc kubenswrapper[4795]: E1129 08:11:53.040692 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efbd2908-7755-45b1-8624-63c8b7a26704" containerName="proxy-httpd" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.040698 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="efbd2908-7755-45b1-8624-63c8b7a26704" containerName="proxy-httpd" Nov 29 08:11:53 crc kubenswrapper[4795]: E1129 08:11:53.040717 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efbd2908-7755-45b1-8624-63c8b7a26704" containerName="ceilometer-central-agent" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.040723 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="efbd2908-7755-45b1-8624-63c8b7a26704" containerName="ceilometer-central-agent" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.040928 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="efbd2908-7755-45b1-8624-63c8b7a26704" containerName="ceilometer-central-agent" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.040945 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="efbd2908-7755-45b1-8624-63c8b7a26704" containerName="proxy-httpd" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.040955 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="efbd2908-7755-45b1-8624-63c8b7a26704" containerName="sg-core" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.040970 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="efbd2908-7755-45b1-8624-63c8b7a26704" containerName="ceilometer-notification-agent" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.043575 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.045891 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1187372d-c24e-4ca1-b985-64ba7bb4df2b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1187372d-c24e-4ca1-b985-64ba7bb4df2b\") " pod="openstack/ceilometer-0" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.045939 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1187372d-c24e-4ca1-b985-64ba7bb4df2b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1187372d-c24e-4ca1-b985-64ba7bb4df2b\") " pod="openstack/ceilometer-0" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.045957 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1187372d-c24e-4ca1-b985-64ba7bb4df2b-run-httpd\") pod \"ceilometer-0\" (UID: \"1187372d-c24e-4ca1-b985-64ba7bb4df2b\") " pod="openstack/ceilometer-0" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.046103 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1187372d-c24e-4ca1-b985-64ba7bb4df2b-config-data\") pod \"ceilometer-0\" (UID: \"1187372d-c24e-4ca1-b985-64ba7bb4df2b\") " pod="openstack/ceilometer-0" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.046211 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5hdv\" (UniqueName: \"kubernetes.io/projected/1187372d-c24e-4ca1-b985-64ba7bb4df2b-kube-api-access-h5hdv\") pod \"ceilometer-0\" (UID: \"1187372d-c24e-4ca1-b985-64ba7bb4df2b\") " pod="openstack/ceilometer-0" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 
08:11:53.046268 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1187372d-c24e-4ca1-b985-64ba7bb4df2b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1187372d-c24e-4ca1-b985-64ba7bb4df2b\") " pod="openstack/ceilometer-0" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.046337 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1187372d-c24e-4ca1-b985-64ba7bb4df2b-log-httpd\") pod \"ceilometer-0\" (UID: \"1187372d-c24e-4ca1-b985-64ba7bb4df2b\") " pod="openstack/ceilometer-0" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.046718 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1187372d-c24e-4ca1-b985-64ba7bb4df2b-scripts\") pod \"ceilometer-0\" (UID: \"1187372d-c24e-4ca1-b985-64ba7bb4df2b\") " pod="openstack/ceilometer-0" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.058422 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.063256 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.064370 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.079466 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.148455 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1187372d-c24e-4ca1-b985-64ba7bb4df2b-scripts\") pod \"ceilometer-0\" (UID: 
\"1187372d-c24e-4ca1-b985-64ba7bb4df2b\") " pod="openstack/ceilometer-0" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.148572 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1187372d-c24e-4ca1-b985-64ba7bb4df2b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1187372d-c24e-4ca1-b985-64ba7bb4df2b\") " pod="openstack/ceilometer-0" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.148613 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1187372d-c24e-4ca1-b985-64ba7bb4df2b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1187372d-c24e-4ca1-b985-64ba7bb4df2b\") " pod="openstack/ceilometer-0" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.148630 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1187372d-c24e-4ca1-b985-64ba7bb4df2b-run-httpd\") pod \"ceilometer-0\" (UID: \"1187372d-c24e-4ca1-b985-64ba7bb4df2b\") " pod="openstack/ceilometer-0" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.148665 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1187372d-c24e-4ca1-b985-64ba7bb4df2b-config-data\") pod \"ceilometer-0\" (UID: \"1187372d-c24e-4ca1-b985-64ba7bb4df2b\") " pod="openstack/ceilometer-0" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.148686 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5hdv\" (UniqueName: \"kubernetes.io/projected/1187372d-c24e-4ca1-b985-64ba7bb4df2b-kube-api-access-h5hdv\") pod \"ceilometer-0\" (UID: \"1187372d-c24e-4ca1-b985-64ba7bb4df2b\") " pod="openstack/ceilometer-0" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.148703 4795 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1187372d-c24e-4ca1-b985-64ba7bb4df2b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1187372d-c24e-4ca1-b985-64ba7bb4df2b\") " pod="openstack/ceilometer-0" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.148729 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1187372d-c24e-4ca1-b985-64ba7bb4df2b-log-httpd\") pod \"ceilometer-0\" (UID: \"1187372d-c24e-4ca1-b985-64ba7bb4df2b\") " pod="openstack/ceilometer-0" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.149313 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1187372d-c24e-4ca1-b985-64ba7bb4df2b-log-httpd\") pod \"ceilometer-0\" (UID: \"1187372d-c24e-4ca1-b985-64ba7bb4df2b\") " pod="openstack/ceilometer-0" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.149510 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1187372d-c24e-4ca1-b985-64ba7bb4df2b-run-httpd\") pod \"ceilometer-0\" (UID: \"1187372d-c24e-4ca1-b985-64ba7bb4df2b\") " pod="openstack/ceilometer-0" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.161465 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1187372d-c24e-4ca1-b985-64ba7bb4df2b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1187372d-c24e-4ca1-b985-64ba7bb4df2b\") " pod="openstack/ceilometer-0" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.163499 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1187372d-c24e-4ca1-b985-64ba7bb4df2b-scripts\") pod \"ceilometer-0\" (UID: \"1187372d-c24e-4ca1-b985-64ba7bb4df2b\") " pod="openstack/ceilometer-0" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 
08:11:53.164299 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1187372d-c24e-4ca1-b985-64ba7bb4df2b-config-data\") pod \"ceilometer-0\" (UID: \"1187372d-c24e-4ca1-b985-64ba7bb4df2b\") " pod="openstack/ceilometer-0" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.164519 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1187372d-c24e-4ca1-b985-64ba7bb4df2b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1187372d-c24e-4ca1-b985-64ba7bb4df2b\") " pod="openstack/ceilometer-0" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.174945 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1187372d-c24e-4ca1-b985-64ba7bb4df2b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1187372d-c24e-4ca1-b985-64ba7bb4df2b\") " pod="openstack/ceilometer-0" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.175792 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5hdv\" (UniqueName: \"kubernetes.io/projected/1187372d-c24e-4ca1-b985-64ba7bb4df2b-kube-api-access-h5hdv\") pod \"ceilometer-0\" (UID: \"1187372d-c24e-4ca1-b985-64ba7bb4df2b\") " pod="openstack/ceilometer-0" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.253021 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.263601 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 29 08:11:53 crc kubenswrapper[4795]: E1129 08:11:53.269825 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Nov 29 08:11:53 crc kubenswrapper[4795]: E1129 08:11:53.269884 4795 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Nov 29 08:11:53 crc kubenswrapper[4795]: E1129 08:11:53.270001 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zd7nl,ReadOnly:true,MountP
ath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-whxgt_openstack(7981887b-f33c-421f-aac5-520c03b7a48a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.270980 4795 scope.go:117] "RemoveContainer" containerID="cc7f9147c3264c7e01e4cf0536092f6ba44d439d1abc73f86856275959712599" Nov 29 08:11:53 crc kubenswrapper[4795]: E1129 08:11:53.271197 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-whxgt" podUID="7981887b-f33c-421f-aac5-520c03b7a48a" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.353812 4795 scope.go:117] "RemoveContainer" containerID="1699fcf66fbf7f2ba77a21508b21b24ae73ce4fd2c05fd3ffe30b6c2764be760" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.356427 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/74169c45-99e0-4179-a18b-07a1c2cade8b-rabbitmq-plugins\") pod \"74169c45-99e0-4179-a18b-07a1c2cade8b\" (UID: 
\"74169c45-99e0-4179-a18b-07a1c2cade8b\") " Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.356480 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/74169c45-99e0-4179-a18b-07a1c2cade8b-pod-info\") pod \"74169c45-99e0-4179-a18b-07a1c2cade8b\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.356513 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlllp\" (UniqueName: \"kubernetes.io/projected/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-kube-api-access-mlllp\") pod \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.356564 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-erlang-cookie-secret\") pod \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.356616 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"74169c45-99e0-4179-a18b-07a1c2cade8b\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.356651 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-server-conf\") pod \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.356684 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-plugins-conf\") pod \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.358974 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74169c45-99e0-4179-a18b-07a1c2cade8b-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "74169c45-99e0-4179-a18b-07a1c2cade8b" (UID: "74169c45-99e0-4179-a18b-07a1c2cade8b"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.359271 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "28c13f65-78c4-4d4a-8960-7ef17a4c93e8" (UID: "28c13f65-78c4-4d4a-8960-7ef17a4c93e8"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.362325 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "28c13f65-78c4-4d4a-8960-7ef17a4c93e8" (UID: "28c13f65-78c4-4d4a-8960-7ef17a4c93e8"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.364255 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "74169c45-99e0-4179-a18b-07a1c2cade8b" (UID: "74169c45-99e0-4179-a18b-07a1c2cade8b"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.364624 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-kube-api-access-mlllp" (OuterVolumeSpecName: "kube-api-access-mlllp") pod "28c13f65-78c4-4d4a-8960-7ef17a4c93e8" (UID: "28c13f65-78c4-4d4a-8960-7ef17a4c93e8"). InnerVolumeSpecName "kube-api-access-mlllp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.366237 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/74169c45-99e0-4179-a18b-07a1c2cade8b-pod-info" (OuterVolumeSpecName: "pod-info") pod "74169c45-99e0-4179-a18b-07a1c2cade8b" (UID: "74169c45-99e0-4179-a18b-07a1c2cade8b"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.368980 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.459736 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkc5t\" (UniqueName: \"kubernetes.io/projected/74169c45-99e0-4179-a18b-07a1c2cade8b-kube-api-access-tkc5t\") pod \"74169c45-99e0-4179-a18b-07a1c2cade8b\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.460056 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/74169c45-99e0-4179-a18b-07a1c2cade8b-server-conf\") pod \"74169c45-99e0-4179-a18b-07a1c2cade8b\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.460084 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/74169c45-99e0-4179-a18b-07a1c2cade8b-rabbitmq-tls\") pod \"74169c45-99e0-4179-a18b-07a1c2cade8b\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.460103 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/74169c45-99e0-4179-a18b-07a1c2cade8b-plugins-conf\") pod \"74169c45-99e0-4179-a18b-07a1c2cade8b\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.460148 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-config-data\") pod \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.460200 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" 
(UniqueName: \"kubernetes.io/empty-dir/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-rabbitmq-erlang-cookie\") pod \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.462387 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74169c45-99e0-4179-a18b-07a1c2cade8b-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "74169c45-99e0-4179-a18b-07a1c2cade8b" (UID: "74169c45-99e0-4179-a18b-07a1c2cade8b"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.463551 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "28c13f65-78c4-4d4a-8960-7ef17a4c93e8" (UID: "28c13f65-78c4-4d4a-8960-7ef17a4c93e8"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.463581 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/74169c45-99e0-4179-a18b-07a1c2cade8b-rabbitmq-erlang-cookie\") pod \"74169c45-99e0-4179-a18b-07a1c2cade8b\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.463987 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-rabbitmq-tls\") pod \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.464476 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/74169c45-99e0-4179-a18b-07a1c2cade8b-rabbitmq-confd\") pod \"74169c45-99e0-4179-a18b-07a1c2cade8b\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.465794 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74169c45-99e0-4179-a18b-07a1c2cade8b-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "74169c45-99e0-4179-a18b-07a1c2cade8b" (UID: "74169c45-99e0-4179-a18b-07a1c2cade8b"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.465925 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/74169c45-99e0-4179-a18b-07a1c2cade8b-erlang-cookie-secret\") pod \"74169c45-99e0-4179-a18b-07a1c2cade8b\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.466277 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-rabbitmq-plugins\") pod \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.466319 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.466360 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-rabbitmq-confd\") pod \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.466452 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-pod-info\") pod \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\" (UID: \"28c13f65-78c4-4d4a-8960-7ef17a4c93e8\") " Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.466535 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/74169c45-99e0-4179-a18b-07a1c2cade8b-config-data\") pod \"74169c45-99e0-4179-a18b-07a1c2cade8b\" (UID: \"74169c45-99e0-4179-a18b-07a1c2cade8b\") " Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.467737 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74169c45-99e0-4179-a18b-07a1c2cade8b-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "74169c45-99e0-4179-a18b-07a1c2cade8b" (UID: "74169c45-99e0-4179-a18b-07a1c2cade8b"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.471988 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74169c45-99e0-4179-a18b-07a1c2cade8b-kube-api-access-tkc5t" (OuterVolumeSpecName: "kube-api-access-tkc5t") pod "74169c45-99e0-4179-a18b-07a1c2cade8b" (UID: "74169c45-99e0-4179-a18b-07a1c2cade8b"). InnerVolumeSpecName "kube-api-access-tkc5t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.472647 4795 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/74169c45-99e0-4179-a18b-07a1c2cade8b-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.472675 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkc5t\" (UniqueName: \"kubernetes.io/projected/74169c45-99e0-4179-a18b-07a1c2cade8b-kube-api-access-tkc5t\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.472692 4795 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/74169c45-99e0-4179-a18b-07a1c2cade8b-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.472706 4795 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/74169c45-99e0-4179-a18b-07a1c2cade8b-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.472719 4795 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.472731 4795 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/74169c45-99e0-4179-a18b-07a1c2cade8b-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.472744 4795 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/74169c45-99e0-4179-a18b-07a1c2cade8b-pod-info\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:53 crc 
kubenswrapper[4795]: I1129 08:11:53.472756 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlllp\" (UniqueName: \"kubernetes.io/projected/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-kube-api-access-mlllp\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.472768 4795 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.472799 4795 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.472812 4795 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.473103 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "28c13f65-78c4-4d4a-8960-7ef17a4c93e8" (UID: "28c13f65-78c4-4d4a-8960-7ef17a4c93e8"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.477361 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "28c13f65-78c4-4d4a-8960-7ef17a4c93e8" (UID: "28c13f65-78c4-4d4a-8960-7ef17a4c93e8"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.490713 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74169c45-99e0-4179-a18b-07a1c2cade8b-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "74169c45-99e0-4179-a18b-07a1c2cade8b" (UID: "74169c45-99e0-4179-a18b-07a1c2cade8b"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.496753 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-server-conf" (OuterVolumeSpecName: "server-conf") pod "28c13f65-78c4-4d4a-8960-7ef17a4c93e8" (UID: "28c13f65-78c4-4d4a-8960-7ef17a4c93e8"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.501114 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-pod-info" (OuterVolumeSpecName: "pod-info") pod "28c13f65-78c4-4d4a-8960-7ef17a4c93e8" (UID: "28c13f65-78c4-4d4a-8960-7ef17a4c93e8"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.502282 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "28c13f65-78c4-4d4a-8960-7ef17a4c93e8" (UID: "28c13f65-78c4-4d4a-8960-7ef17a4c93e8"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.589690 4795 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-server-conf\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.589722 4795 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.589736 4795 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/74169c45-99e0-4179-a18b-07a1c2cade8b-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.589748 4795 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.589821 4795 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.589837 4795 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-pod-info\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.629501 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-config-data" (OuterVolumeSpecName: "config-data") pod "28c13f65-78c4-4d4a-8960-7ef17a4c93e8" (UID: "28c13f65-78c4-4d4a-8960-7ef17a4c93e8"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.630240 4795 scope.go:117] "RemoveContainer" containerID="8538c632fd28235313631975f494bd7dc7544f2654ad4385b476cc40f667de98" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.647235 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74169c45-99e0-4179-a18b-07a1c2cade8b-config-data" (OuterVolumeSpecName: "config-data") pod "74169c45-99e0-4179-a18b-07a1c2cade8b" (UID: "74169c45-99e0-4179-a18b-07a1c2cade8b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.651240 4795 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.651827 4795 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.671449 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74169c45-99e0-4179-a18b-07a1c2cade8b-server-conf" (OuterVolumeSpecName: "server-conf") pod "74169c45-99e0-4179-a18b-07a1c2cade8b" (UID: "74169c45-99e0-4179-a18b-07a1c2cade8b"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.692554 4795 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.692607 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74169c45-99e0-4179-a18b-07a1c2cade8b-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.692620 4795 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/74169c45-99e0-4179-a18b-07a1c2cade8b-server-conf\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.692633 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.692644 4795 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.723605 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"28c13f65-78c4-4d4a-8960-7ef17a4c93e8","Type":"ContainerDied","Data":"207431e043164d72eb566e2f5f1c219efb51c4b30289fa78fd050d06d779fead"} Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.723663 4795 scope.go:117] "RemoveContainer" containerID="5113ad63b22ce09343c250010d6d6471a16602dc3aa39754aa991fb1ca8b6f22" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.723792 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.730153 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"74169c45-99e0-4179-a18b-07a1c2cade8b","Type":"ContainerDied","Data":"7ac1f3c96059def2813bbfd912f88e988c365dff2096208217e4df7af9d4757d"} Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.730266 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 29 08:11:53 crc kubenswrapper[4795]: E1129 08:11:53.742940 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-whxgt" podUID="7981887b-f33c-421f-aac5-520c03b7a48a" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.775251 4795 scope.go:117] "RemoveContainer" containerID="66c2994154c169b7826b57d071eca33a6effd691ef0e54ad3484e4595c873484" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.787388 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74169c45-99e0-4179-a18b-07a1c2cade8b-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "74169c45-99e0-4179-a18b-07a1c2cade8b" (UID: "74169c45-99e0-4179-a18b-07a1c2cade8b"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.795306 4795 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/74169c45-99e0-4179-a18b-07a1c2cade8b-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.804491 4795 scope.go:117] "RemoveContainer" containerID="223146acec54e1f27f623b4d80745a10b0a3e4ff9128f8cc93d3e916a4e18752" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.830116 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "28c13f65-78c4-4d4a-8960-7ef17a4c93e8" (UID: "28c13f65-78c4-4d4a-8960-7ef17a4c93e8"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.834846 4795 scope.go:117] "RemoveContainer" containerID="bf11edd6d9ce5ac04733b56bab1255c59a0c0bb035e4e46f3a8d314f0c4f8633" Nov 29 08:11:53 crc kubenswrapper[4795]: I1129 08:11:53.897635 4795 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/28c13f65-78c4-4d4a-8960-7ef17a4c93e8-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.038945 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-x8bmt"] Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.099865 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.119196 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.152905 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/rabbitmq-server-0"] Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.168732 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.185055 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 08:11:54 crc kubenswrapper[4795]: E1129 08:11:54.185638 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28c13f65-78c4-4d4a-8960-7ef17a4c93e8" containerName="setup-container" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.185651 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="28c13f65-78c4-4d4a-8960-7ef17a4c93e8" containerName="setup-container" Nov 29 08:11:54 crc kubenswrapper[4795]: E1129 08:11:54.185669 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28c13f65-78c4-4d4a-8960-7ef17a4c93e8" containerName="rabbitmq" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.185692 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="28c13f65-78c4-4d4a-8960-7ef17a4c93e8" containerName="rabbitmq" Nov 29 08:11:54 crc kubenswrapper[4795]: E1129 08:11:54.185718 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74169c45-99e0-4179-a18b-07a1c2cade8b" containerName="setup-container" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.185725 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="74169c45-99e0-4179-a18b-07a1c2cade8b" containerName="setup-container" Nov 29 08:11:54 crc kubenswrapper[4795]: E1129 08:11:54.185735 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74169c45-99e0-4179-a18b-07a1c2cade8b" containerName="rabbitmq" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.185742 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="74169c45-99e0-4179-a18b-07a1c2cade8b" containerName="rabbitmq" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.186252 4795 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="28c13f65-78c4-4d4a-8960-7ef17a4c93e8" containerName="rabbitmq" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.186284 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="74169c45-99e0-4179-a18b-07a1c2cade8b" containerName="rabbitmq" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.187636 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.192276 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.192567 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.192646 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.192732 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.192826 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.192931 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.195899 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-82kxm" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.227834 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.265732 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.283818 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.286192 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.286525 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.286793 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.286917 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.287204 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-qxp9m" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.287707 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.320691 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/85a82139-8137-40d2-a6e9-b384592f9919-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"85a82139-8137-40d2-a6e9-b384592f9919\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.320757 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/85a82139-8137-40d2-a6e9-b384592f9919-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"85a82139-8137-40d2-a6e9-b384592f9919\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.321028 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7lh8\" (UniqueName: \"kubernetes.io/projected/85a82139-8137-40d2-a6e9-b384592f9919-kube-api-access-x7lh8\") pod \"rabbitmq-cell1-server-0\" (UID: \"85a82139-8137-40d2-a6e9-b384592f9919\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.321087 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"85a82139-8137-40d2-a6e9-b384592f9919\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.321207 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/85a82139-8137-40d2-a6e9-b384592f9919-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"85a82139-8137-40d2-a6e9-b384592f9919\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.321251 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/85a82139-8137-40d2-a6e9-b384592f9919-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"85a82139-8137-40d2-a6e9-b384592f9919\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.321287 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/85a82139-8137-40d2-a6e9-b384592f9919-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"85a82139-8137-40d2-a6e9-b384592f9919\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.321427 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/85a82139-8137-40d2-a6e9-b384592f9919-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"85a82139-8137-40d2-a6e9-b384592f9919\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.321483 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/85a82139-8137-40d2-a6e9-b384592f9919-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"85a82139-8137-40d2-a6e9-b384592f9919\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.321647 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/85a82139-8137-40d2-a6e9-b384592f9919-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"85a82139-8137-40d2-a6e9-b384592f9919\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.321677 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/85a82139-8137-40d2-a6e9-b384592f9919-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"85a82139-8137-40d2-a6e9-b384592f9919\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.333417 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28c13f65-78c4-4d4a-8960-7ef17a4c93e8" path="/var/lib/kubelet/pods/28c13f65-78c4-4d4a-8960-7ef17a4c93e8/volumes" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.339902 4795 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="74169c45-99e0-4179-a18b-07a1c2cade8b" path="/var/lib/kubelet/pods/74169c45-99e0-4179-a18b-07a1c2cade8b/volumes" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.341189 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efbd2908-7755-45b1-8624-63c8b7a26704" path="/var/lib/kubelet/pods/efbd2908-7755-45b1-8624-63c8b7a26704/volumes" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.344201 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.353105 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.382953 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.425375 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/85a82139-8137-40d2-a6e9-b384592f9919-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"85a82139-8137-40d2-a6e9-b384592f9919\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.426353 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/85a82139-8137-40d2-a6e9-b384592f9919-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"85a82139-8137-40d2-a6e9-b384592f9919\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.426569 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d1c7dfa2-1b2a-438d-9378-fd998f873999-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d1c7dfa2-1b2a-438d-9378-fd998f873999\") " 
pod="openstack/rabbitmq-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.426787 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d1c7dfa2-1b2a-438d-9378-fd998f873999-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d1c7dfa2-1b2a-438d-9378-fd998f873999\") " pod="openstack/rabbitmq-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.426930 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7lh8\" (UniqueName: \"kubernetes.io/projected/85a82139-8137-40d2-a6e9-b384592f9919-kube-api-access-x7lh8\") pod \"rabbitmq-cell1-server-0\" (UID: \"85a82139-8137-40d2-a6e9-b384592f9919\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.427083 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"85a82139-8137-40d2-a6e9-b384592f9919\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.427170 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d1c7dfa2-1b2a-438d-9378-fd998f873999-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d1c7dfa2-1b2a-438d-9378-fd998f873999\") " pod="openstack/rabbitmq-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.427253 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d1c7dfa2-1b2a-438d-9378-fd998f873999-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d1c7dfa2-1b2a-438d-9378-fd998f873999\") " pod="openstack/rabbitmq-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 
08:11:54.427355 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d1c7dfa2-1b2a-438d-9378-fd998f873999-config-data\") pod \"rabbitmq-server-0\" (UID: \"d1c7dfa2-1b2a-438d-9378-fd998f873999\") " pod="openstack/rabbitmq-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.427535 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsxf8\" (UniqueName: \"kubernetes.io/projected/d1c7dfa2-1b2a-438d-9378-fd998f873999-kube-api-access-vsxf8\") pod \"rabbitmq-server-0\" (UID: \"d1c7dfa2-1b2a-438d-9378-fd998f873999\") " pod="openstack/rabbitmq-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.427650 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"d1c7dfa2-1b2a-438d-9378-fd998f873999\") " pod="openstack/rabbitmq-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.428077 4795 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"85a82139-8137-40d2-a6e9-b384592f9919\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.428456 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/85a82139-8137-40d2-a6e9-b384592f9919-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"85a82139-8137-40d2-a6e9-b384592f9919\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.428624 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/85a82139-8137-40d2-a6e9-b384592f9919-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"85a82139-8137-40d2-a6e9-b384592f9919\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.428743 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d1c7dfa2-1b2a-438d-9378-fd998f873999-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d1c7dfa2-1b2a-438d-9378-fd998f873999\") " pod="openstack/rabbitmq-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.428859 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/85a82139-8137-40d2-a6e9-b384592f9919-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"85a82139-8137-40d2-a6e9-b384592f9919\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.428977 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d1c7dfa2-1b2a-438d-9378-fd998f873999-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d1c7dfa2-1b2a-438d-9378-fd998f873999\") " pod="openstack/rabbitmq-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.429176 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d1c7dfa2-1b2a-438d-9378-fd998f873999-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d1c7dfa2-1b2a-438d-9378-fd998f873999\") " pod="openstack/rabbitmq-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.429357 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/85a82139-8137-40d2-a6e9-b384592f9919-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"85a82139-8137-40d2-a6e9-b384592f9919\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.429525 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/85a82139-8137-40d2-a6e9-b384592f9919-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"85a82139-8137-40d2-a6e9-b384592f9919\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.429802 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d1c7dfa2-1b2a-438d-9378-fd998f873999-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d1c7dfa2-1b2a-438d-9378-fd998f873999\") " pod="openstack/rabbitmq-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.430064 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/85a82139-8137-40d2-a6e9-b384592f9919-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"85a82139-8137-40d2-a6e9-b384592f9919\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.430206 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/85a82139-8137-40d2-a6e9-b384592f9919-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"85a82139-8137-40d2-a6e9-b384592f9919\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.431400 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/85a82139-8137-40d2-a6e9-b384592f9919-rabbitmq-plugins\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"85a82139-8137-40d2-a6e9-b384592f9919\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.431623 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/85a82139-8137-40d2-a6e9-b384592f9919-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"85a82139-8137-40d2-a6e9-b384592f9919\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.432233 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/85a82139-8137-40d2-a6e9-b384592f9919-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"85a82139-8137-40d2-a6e9-b384592f9919\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.432681 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/85a82139-8137-40d2-a6e9-b384592f9919-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"85a82139-8137-40d2-a6e9-b384592f9919\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.433251 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/85a82139-8137-40d2-a6e9-b384592f9919-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"85a82139-8137-40d2-a6e9-b384592f9919\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.433352 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/85a82139-8137-40d2-a6e9-b384592f9919-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"85a82139-8137-40d2-a6e9-b384592f9919\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:11:54 crc 
kubenswrapper[4795]: I1129 08:11:54.435430 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/85a82139-8137-40d2-a6e9-b384592f9919-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"85a82139-8137-40d2-a6e9-b384592f9919\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.435898 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/85a82139-8137-40d2-a6e9-b384592f9919-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"85a82139-8137-40d2-a6e9-b384592f9919\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.439352 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/85a82139-8137-40d2-a6e9-b384592f9919-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"85a82139-8137-40d2-a6e9-b384592f9919\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.446205 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7lh8\" (UniqueName: \"kubernetes.io/projected/85a82139-8137-40d2-a6e9-b384592f9919-kube-api-access-x7lh8\") pod \"rabbitmq-cell1-server-0\" (UID: \"85a82139-8137-40d2-a6e9-b384592f9919\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.473748 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"85a82139-8137-40d2-a6e9-b384592f9919\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.533141 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/d1c7dfa2-1b2a-438d-9378-fd998f873999-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d1c7dfa2-1b2a-438d-9378-fd998f873999\") " pod="openstack/rabbitmq-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.533242 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d1c7dfa2-1b2a-438d-9378-fd998f873999-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d1c7dfa2-1b2a-438d-9378-fd998f873999\") " pod="openstack/rabbitmq-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.533330 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d1c7dfa2-1b2a-438d-9378-fd998f873999-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d1c7dfa2-1b2a-438d-9378-fd998f873999\") " pod="openstack/rabbitmq-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.533471 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d1c7dfa2-1b2a-438d-9378-fd998f873999-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d1c7dfa2-1b2a-438d-9378-fd998f873999\") " pod="openstack/rabbitmq-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.533527 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d1c7dfa2-1b2a-438d-9378-fd998f873999-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d1c7dfa2-1b2a-438d-9378-fd998f873999\") " pod="openstack/rabbitmq-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.533559 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d1c7dfa2-1b2a-438d-9378-fd998f873999-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"d1c7dfa2-1b2a-438d-9378-fd998f873999\") " pod="openstack/rabbitmq-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.533583 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d1c7dfa2-1b2a-438d-9378-fd998f873999-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d1c7dfa2-1b2a-438d-9378-fd998f873999\") " pod="openstack/rabbitmq-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.533638 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d1c7dfa2-1b2a-438d-9378-fd998f873999-config-data\") pod \"rabbitmq-server-0\" (UID: \"d1c7dfa2-1b2a-438d-9378-fd998f873999\") " pod="openstack/rabbitmq-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.535087 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d1c7dfa2-1b2a-438d-9378-fd998f873999-config-data\") pod \"rabbitmq-server-0\" (UID: \"d1c7dfa2-1b2a-438d-9378-fd998f873999\") " pod="openstack/rabbitmq-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.534421 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d1c7dfa2-1b2a-438d-9378-fd998f873999-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d1c7dfa2-1b2a-438d-9378-fd998f873999\") " pod="openstack/rabbitmq-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.534645 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d1c7dfa2-1b2a-438d-9378-fd998f873999-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d1c7dfa2-1b2a-438d-9378-fd998f873999\") " pod="openstack/rabbitmq-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.534667 4795 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d1c7dfa2-1b2a-438d-9378-fd998f873999-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d1c7dfa2-1b2a-438d-9378-fd998f873999\") " pod="openstack/rabbitmq-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.534261 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d1c7dfa2-1b2a-438d-9378-fd998f873999-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d1c7dfa2-1b2a-438d-9378-fd998f873999\") " pod="openstack/rabbitmq-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.535336 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsxf8\" (UniqueName: \"kubernetes.io/projected/d1c7dfa2-1b2a-438d-9378-fd998f873999-kube-api-access-vsxf8\") pod \"rabbitmq-server-0\" (UID: \"d1c7dfa2-1b2a-438d-9378-fd998f873999\") " pod="openstack/rabbitmq-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.535486 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"d1c7dfa2-1b2a-438d-9378-fd998f873999\") " pod="openstack/rabbitmq-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.535750 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d1c7dfa2-1b2a-438d-9378-fd998f873999-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d1c7dfa2-1b2a-438d-9378-fd998f873999\") " pod="openstack/rabbitmq-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.535836 4795 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"d1c7dfa2-1b2a-438d-9378-fd998f873999\") device 
mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.537740 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d1c7dfa2-1b2a-438d-9378-fd998f873999-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d1c7dfa2-1b2a-438d-9378-fd998f873999\") " pod="openstack/rabbitmq-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.539049 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d1c7dfa2-1b2a-438d-9378-fd998f873999-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d1c7dfa2-1b2a-438d-9378-fd998f873999\") " pod="openstack/rabbitmq-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.542818 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d1c7dfa2-1b2a-438d-9378-fd998f873999-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d1c7dfa2-1b2a-438d-9378-fd998f873999\") " pod="openstack/rabbitmq-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.546541 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d1c7dfa2-1b2a-438d-9378-fd998f873999-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d1c7dfa2-1b2a-438d-9378-fd998f873999\") " pod="openstack/rabbitmq-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.554419 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsxf8\" (UniqueName: \"kubernetes.io/projected/d1c7dfa2-1b2a-438d-9378-fd998f873999-kube-api-access-vsxf8\") pod \"rabbitmq-server-0\" (UID: \"d1c7dfa2-1b2a-438d-9378-fd998f873999\") " pod="openstack/rabbitmq-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.556766 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.605747 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"d1c7dfa2-1b2a-438d-9378-fd998f873999\") " pod="openstack/rabbitmq-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.618756 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.773032 4795 generic.go:334] "Generic (PLEG): container finished" podID="bb9518bd-e4c7-49ee-9f39-f20800f6813c" containerID="8e7a7d219a2a54c308bdedb4710c195c95891776b3eb37adca13df0e7fbbb0b0" exitCode=0 Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.773083 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-x8bmt" event={"ID":"bb9518bd-e4c7-49ee-9f39-f20800f6813c","Type":"ContainerDied","Data":"8e7a7d219a2a54c308bdedb4710c195c95891776b3eb37adca13df0e7fbbb0b0"} Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.773203 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-x8bmt" event={"ID":"bb9518bd-e4c7-49ee-9f39-f20800f6813c","Type":"ContainerStarted","Data":"745a23c9d2b511e07d9589589f91d166e819ee7d56c382f502fb7023482540ce"} Nov 29 08:11:54 crc kubenswrapper[4795]: I1129 08:11:54.786970 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1187372d-c24e-4ca1-b985-64ba7bb4df2b","Type":"ContainerStarted","Data":"8810c506ebdc309c1d14dc843f01ec003124506094708872f6baee0a7a769475"} Nov 29 08:11:55 crc kubenswrapper[4795]: I1129 08:11:55.136779 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 08:11:55 crc kubenswrapper[4795]: W1129 08:11:55.138718 4795 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85a82139_8137_40d2_a6e9_b384592f9919.slice/crio-9120611bfa1fe053ca64846ad0c21fe5478f2f2b29f394b8665607817afff105 WatchSource:0}: Error finding container 9120611bfa1fe053ca64846ad0c21fe5478f2f2b29f394b8665607817afff105: Status 404 returned error can't find the container with id 9120611bfa1fe053ca64846ad0c21fe5478f2f2b29f394b8665607817afff105 Nov 29 08:11:55 crc kubenswrapper[4795]: I1129 08:11:55.251226 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 29 08:11:55 crc kubenswrapper[4795]: W1129 08:11:55.261713 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1c7dfa2_1b2a_438d_9378_fd998f873999.slice/crio-7dd393a66e520152ba157182e7f1ca96629a4ed2a91f6a0a673e27c437dc49a0 WatchSource:0}: Error finding container 7dd393a66e520152ba157182e7f1ca96629a4ed2a91f6a0a673e27c437dc49a0: Status 404 returned error can't find the container with id 7dd393a66e520152ba157182e7f1ca96629a4ed2a91f6a0a673e27c437dc49a0 Nov 29 08:11:55 crc kubenswrapper[4795]: I1129 08:11:55.800997 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"85a82139-8137-40d2-a6e9-b384592f9919","Type":"ContainerStarted","Data":"9120611bfa1fe053ca64846ad0c21fe5478f2f2b29f394b8665607817afff105"} Nov 29 08:11:55 crc kubenswrapper[4795]: I1129 08:11:55.802704 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d1c7dfa2-1b2a-438d-9378-fd998f873999","Type":"ContainerStarted","Data":"7dd393a66e520152ba157182e7f1ca96629a4ed2a91f6a0a673e27c437dc49a0"} Nov 29 08:11:55 crc kubenswrapper[4795]: I1129 08:11:55.804281 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-x8bmt" 
event={"ID":"bb9518bd-e4c7-49ee-9f39-f20800f6813c","Type":"ContainerStarted","Data":"fd83879a9e4057d5f12ab4658cb4cb9e618851185046e66d22a9fe73932c8e84"} Nov 29 08:11:55 crc kubenswrapper[4795]: I1129 08:11:55.805773 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-594cb89c79-x8bmt" Nov 29 08:11:59 crc kubenswrapper[4795]: I1129 08:11:59.854026 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"85a82139-8137-40d2-a6e9-b384592f9919","Type":"ContainerStarted","Data":"4a16d289d8ef75df118f17d942e6a7f8b057ae6c7aad1dc07fe3a3df5cf42be2"} Nov 29 08:11:59 crc kubenswrapper[4795]: I1129 08:11:59.887651 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-594cb89c79-x8bmt" podStartSLOduration=11.887624666 podStartE2EDuration="11.887624666s" podCreationTimestamp="2025-11-29 08:11:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:11:55.834088214 +0000 UTC m=+1961.809664004" watchObservedRunningTime="2025-11-29 08:11:59.887624666 +0000 UTC m=+1965.863200446" Nov 29 08:12:00 crc kubenswrapper[4795]: I1129 08:12:00.867917 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1187372d-c24e-4ca1-b985-64ba7bb4df2b","Type":"ContainerStarted","Data":"1e6d0780836d8a46b47b6dddaa6abe5bfa1f743f1b6d4730d5399e390be1d059"} Nov 29 08:12:00 crc kubenswrapper[4795]: I1129 08:12:00.869499 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d1c7dfa2-1b2a-438d-9378-fd998f873999","Type":"ContainerStarted","Data":"1c0e5b131ed3937f93562a50240cb6b9c2562d445ddbcce4478062b5f190274b"} Nov 29 08:12:01 crc kubenswrapper[4795]: I1129 08:12:01.276432 4795 scope.go:117] "RemoveContainer" containerID="cb788e622da5bdf804dcfa0c959dfb0158a848bc427ed90337dfd530ea78ea5d" Nov 29 
08:12:01 crc kubenswrapper[4795]: E1129 08:12:01.276995 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:12:01 crc kubenswrapper[4795]: I1129 08:12:01.884380 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1187372d-c24e-4ca1-b985-64ba7bb4df2b","Type":"ContainerStarted","Data":"195a0677f80d509fe0148fda86a5b604bfea32e2aa81216f9e2452cf057b1c80"} Nov 29 08:12:02 crc kubenswrapper[4795]: I1129 08:12:02.903544 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1187372d-c24e-4ca1-b985-64ba7bb4df2b","Type":"ContainerStarted","Data":"18dac40ebe409200f1148bb9949b6afe9dd852d6e1df00a04e73a6671883a1bb"} Nov 29 08:12:03 crc kubenswrapper[4795]: I1129 08:12:03.915812 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1187372d-c24e-4ca1-b985-64ba7bb4df2b","Type":"ContainerStarted","Data":"138bb10fd5eae098a3e2113d55fe4f64db12ac79c0db29f27fe1b85e53b2e7da"} Nov 29 08:12:03 crc kubenswrapper[4795]: I1129 08:12:03.915985 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 29 08:12:03 crc kubenswrapper[4795]: I1129 08:12:03.948299 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.839668974 podStartE2EDuration="11.94827798s" podCreationTimestamp="2025-11-29 08:11:52 +0000 UTC" firstStartedPulling="2025-11-29 08:11:54.210920056 +0000 UTC m=+1960.186495846" lastFinishedPulling="2025-11-29 08:12:03.319529062 +0000 UTC m=+1969.295104852" 
observedRunningTime="2025-11-29 08:12:03.93347515 +0000 UTC m=+1969.909050960" watchObservedRunningTime="2025-11-29 08:12:03.94827798 +0000 UTC m=+1969.923853770" Nov 29 08:12:03 crc kubenswrapper[4795]: I1129 08:12:03.973831 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-594cb89c79-x8bmt" Nov 29 08:12:04 crc kubenswrapper[4795]: I1129 08:12:04.047893 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-5xkn9"] Nov 29 08:12:04 crc kubenswrapper[4795]: I1129 08:12:04.048498 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d99f6bc7f-5xkn9" podUID="3fd74474-a041-4d6f-84f9-90d8161e943e" containerName="dnsmasq-dns" containerID="cri-o://8b29db2ef12ff07b0cc2c1a6b7f9ca4ec5319a14eede4bcfcb67ed46af6cd8f1" gracePeriod=10 Nov 29 08:12:04 crc kubenswrapper[4795]: I1129 08:12:04.086210 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d99f6bc7f-5xkn9" podUID="3fd74474-a041-4d6f-84f9-90d8161e943e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.251:5353: connect: connection refused" Nov 29 08:12:04 crc kubenswrapper[4795]: I1129 08:12:04.260101 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5596c69fcc-wq44q"] Nov 29 08:12:04 crc kubenswrapper[4795]: I1129 08:12:04.262857 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5596c69fcc-wq44q" Nov 29 08:12:04 crc kubenswrapper[4795]: I1129 08:12:04.323539 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5596c69fcc-wq44q"] Nov 29 08:12:04 crc kubenswrapper[4795]: I1129 08:12:04.418369 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1-dns-svc\") pod \"dnsmasq-dns-5596c69fcc-wq44q\" (UID: \"2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1\") " pod="openstack/dnsmasq-dns-5596c69fcc-wq44q" Nov 29 08:12:04 crc kubenswrapper[4795]: I1129 08:12:04.418425 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjcgk\" (UniqueName: \"kubernetes.io/projected/2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1-kube-api-access-hjcgk\") pod \"dnsmasq-dns-5596c69fcc-wq44q\" (UID: \"2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1\") " pod="openstack/dnsmasq-dns-5596c69fcc-wq44q" Nov 29 08:12:04 crc kubenswrapper[4795]: I1129 08:12:04.418445 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1-ovsdbserver-nb\") pod \"dnsmasq-dns-5596c69fcc-wq44q\" (UID: \"2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1\") " pod="openstack/dnsmasq-dns-5596c69fcc-wq44q" Nov 29 08:12:04 crc kubenswrapper[4795]: I1129 08:12:04.418472 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1-ovsdbserver-sb\") pod \"dnsmasq-dns-5596c69fcc-wq44q\" (UID: \"2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1\") " pod="openstack/dnsmasq-dns-5596c69fcc-wq44q" Nov 29 08:12:04 crc kubenswrapper[4795]: I1129 08:12:04.418637 4795 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1-openstack-edpm-ipam\") pod \"dnsmasq-dns-5596c69fcc-wq44q\" (UID: \"2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1\") " pod="openstack/dnsmasq-dns-5596c69fcc-wq44q" Nov 29 08:12:04 crc kubenswrapper[4795]: I1129 08:12:04.418662 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1-dns-swift-storage-0\") pod \"dnsmasq-dns-5596c69fcc-wq44q\" (UID: \"2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1\") " pod="openstack/dnsmasq-dns-5596c69fcc-wq44q" Nov 29 08:12:04 crc kubenswrapper[4795]: I1129 08:12:04.418708 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1-config\") pod \"dnsmasq-dns-5596c69fcc-wq44q\" (UID: \"2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1\") " pod="openstack/dnsmasq-dns-5596c69fcc-wq44q" Nov 29 08:12:04 crc kubenswrapper[4795]: I1129 08:12:04.521491 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1-dns-svc\") pod \"dnsmasq-dns-5596c69fcc-wq44q\" (UID: \"2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1\") " pod="openstack/dnsmasq-dns-5596c69fcc-wq44q" Nov 29 08:12:04 crc kubenswrapper[4795]: I1129 08:12:04.521561 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1-ovsdbserver-nb\") pod \"dnsmasq-dns-5596c69fcc-wq44q\" (UID: \"2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1\") " pod="openstack/dnsmasq-dns-5596c69fcc-wq44q" Nov 29 08:12:04 crc kubenswrapper[4795]: I1129 08:12:04.521583 4795 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hjcgk\" (UniqueName: \"kubernetes.io/projected/2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1-kube-api-access-hjcgk\") pod \"dnsmasq-dns-5596c69fcc-wq44q\" (UID: \"2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1\") " pod="openstack/dnsmasq-dns-5596c69fcc-wq44q" Nov 29 08:12:04 crc kubenswrapper[4795]: I1129 08:12:04.521625 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1-ovsdbserver-sb\") pod \"dnsmasq-dns-5596c69fcc-wq44q\" (UID: \"2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1\") " pod="openstack/dnsmasq-dns-5596c69fcc-wq44q" Nov 29 08:12:04 crc kubenswrapper[4795]: I1129 08:12:04.521736 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1-openstack-edpm-ipam\") pod \"dnsmasq-dns-5596c69fcc-wq44q\" (UID: \"2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1\") " pod="openstack/dnsmasq-dns-5596c69fcc-wq44q" Nov 29 08:12:04 crc kubenswrapper[4795]: I1129 08:12:04.521766 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1-dns-swift-storage-0\") pod \"dnsmasq-dns-5596c69fcc-wq44q\" (UID: \"2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1\") " pod="openstack/dnsmasq-dns-5596c69fcc-wq44q" Nov 29 08:12:04 crc kubenswrapper[4795]: I1129 08:12:04.521803 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1-config\") pod \"dnsmasq-dns-5596c69fcc-wq44q\" (UID: \"2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1\") " pod="openstack/dnsmasq-dns-5596c69fcc-wq44q" Nov 29 08:12:04 crc kubenswrapper[4795]: I1129 08:12:04.523357 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1-dns-svc\") pod \"dnsmasq-dns-5596c69fcc-wq44q\" (UID: \"2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1\") " pod="openstack/dnsmasq-dns-5596c69fcc-wq44q" Nov 29 08:12:04 crc kubenswrapper[4795]: I1129 08:12:04.523503 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1-ovsdbserver-sb\") pod \"dnsmasq-dns-5596c69fcc-wq44q\" (UID: \"2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1\") " pod="openstack/dnsmasq-dns-5596c69fcc-wq44q" Nov 29 08:12:04 crc kubenswrapper[4795]: I1129 08:12:04.523625 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1-config\") pod \"dnsmasq-dns-5596c69fcc-wq44q\" (UID: \"2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1\") " pod="openstack/dnsmasq-dns-5596c69fcc-wq44q" Nov 29 08:12:04 crc kubenswrapper[4795]: I1129 08:12:04.523807 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1-ovsdbserver-nb\") pod \"dnsmasq-dns-5596c69fcc-wq44q\" (UID: \"2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1\") " pod="openstack/dnsmasq-dns-5596c69fcc-wq44q" Nov 29 08:12:04 crc kubenswrapper[4795]: I1129 08:12:04.527192 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1-dns-swift-storage-0\") pod \"dnsmasq-dns-5596c69fcc-wq44q\" (UID: \"2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1\") " pod="openstack/dnsmasq-dns-5596c69fcc-wq44q" Nov 29 08:12:04 crc kubenswrapper[4795]: I1129 08:12:04.527963 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1-openstack-edpm-ipam\") pod \"dnsmasq-dns-5596c69fcc-wq44q\" (UID: \"2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1\") " pod="openstack/dnsmasq-dns-5596c69fcc-wq44q" Nov 29 08:12:04 crc kubenswrapper[4795]: I1129 08:12:04.623499 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjcgk\" (UniqueName: \"kubernetes.io/projected/2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1-kube-api-access-hjcgk\") pod \"dnsmasq-dns-5596c69fcc-wq44q\" (UID: \"2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1\") " pod="openstack/dnsmasq-dns-5596c69fcc-wq44q" Nov 29 08:12:04 crc kubenswrapper[4795]: I1129 08:12:04.921548 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5596c69fcc-wq44q" Nov 29 08:12:04 crc kubenswrapper[4795]: I1129 08:12:04.928705 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-5xkn9" Nov 29 08:12:04 crc kubenswrapper[4795]: I1129 08:12:04.929234 4795 generic.go:334] "Generic (PLEG): container finished" podID="3fd74474-a041-4d6f-84f9-90d8161e943e" containerID="8b29db2ef12ff07b0cc2c1a6b7f9ca4ec5319a14eede4bcfcb67ed46af6cd8f1" exitCode=0 Nov 29 08:12:04 crc kubenswrapper[4795]: I1129 08:12:04.930524 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-5xkn9" event={"ID":"3fd74474-a041-4d6f-84f9-90d8161e943e","Type":"ContainerDied","Data":"8b29db2ef12ff07b0cc2c1a6b7f9ca4ec5319a14eede4bcfcb67ed46af6cd8f1"} Nov 29 08:12:04 crc kubenswrapper[4795]: I1129 08:12:04.930559 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-5xkn9" event={"ID":"3fd74474-a041-4d6f-84f9-90d8161e943e","Type":"ContainerDied","Data":"cf74017088d42cf3a2e23610a538b6fc218a2eb8eba9ed42c8e25ec1e35b93e8"} Nov 29 08:12:04 crc kubenswrapper[4795]: I1129 08:12:04.930576 4795 scope.go:117] "RemoveContainer" 
containerID="8b29db2ef12ff07b0cc2c1a6b7f9ca4ec5319a14eede4bcfcb67ed46af6cd8f1" Nov 29 08:12:05 crc kubenswrapper[4795]: I1129 08:12:05.037210 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lpzx\" (UniqueName: \"kubernetes.io/projected/3fd74474-a041-4d6f-84f9-90d8161e943e-kube-api-access-2lpzx\") pod \"3fd74474-a041-4d6f-84f9-90d8161e943e\" (UID: \"3fd74474-a041-4d6f-84f9-90d8161e943e\") " Nov 29 08:12:05 crc kubenswrapper[4795]: I1129 08:12:05.037429 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fd74474-a041-4d6f-84f9-90d8161e943e-ovsdbserver-sb\") pod \"3fd74474-a041-4d6f-84f9-90d8161e943e\" (UID: \"3fd74474-a041-4d6f-84f9-90d8161e943e\") " Nov 29 08:12:05 crc kubenswrapper[4795]: I1129 08:12:05.037483 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fd74474-a041-4d6f-84f9-90d8161e943e-dns-svc\") pod \"3fd74474-a041-4d6f-84f9-90d8161e943e\" (UID: \"3fd74474-a041-4d6f-84f9-90d8161e943e\") " Nov 29 08:12:05 crc kubenswrapper[4795]: I1129 08:12:05.037543 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fd74474-a041-4d6f-84f9-90d8161e943e-config\") pod \"3fd74474-a041-4d6f-84f9-90d8161e943e\" (UID: \"3fd74474-a041-4d6f-84f9-90d8161e943e\") " Nov 29 08:12:05 crc kubenswrapper[4795]: I1129 08:12:05.037574 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fd74474-a041-4d6f-84f9-90d8161e943e-ovsdbserver-nb\") pod \"3fd74474-a041-4d6f-84f9-90d8161e943e\" (UID: \"3fd74474-a041-4d6f-84f9-90d8161e943e\") " Nov 29 08:12:05 crc kubenswrapper[4795]: I1129 08:12:05.037797 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fd74474-a041-4d6f-84f9-90d8161e943e-dns-swift-storage-0\") pod \"3fd74474-a041-4d6f-84f9-90d8161e943e\" (UID: \"3fd74474-a041-4d6f-84f9-90d8161e943e\") " Nov 29 08:12:05 crc kubenswrapper[4795]: I1129 08:12:05.048005 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fd74474-a041-4d6f-84f9-90d8161e943e-kube-api-access-2lpzx" (OuterVolumeSpecName: "kube-api-access-2lpzx") pod "3fd74474-a041-4d6f-84f9-90d8161e943e" (UID: "3fd74474-a041-4d6f-84f9-90d8161e943e"). InnerVolumeSpecName "kube-api-access-2lpzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:12:05 crc kubenswrapper[4795]: I1129 08:12:05.130011 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fd74474-a041-4d6f-84f9-90d8161e943e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3fd74474-a041-4d6f-84f9-90d8161e943e" (UID: "3fd74474-a041-4d6f-84f9-90d8161e943e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:12:05 crc kubenswrapper[4795]: I1129 08:12:05.133989 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fd74474-a041-4d6f-84f9-90d8161e943e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3fd74474-a041-4d6f-84f9-90d8161e943e" (UID: "3fd74474-a041-4d6f-84f9-90d8161e943e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:12:05 crc kubenswrapper[4795]: I1129 08:12:05.138559 4795 scope.go:117] "RemoveContainer" containerID="8c97c16d2137ecd9b8901d0d08a51bab89eccaf6fa16c62308b0908b67fbd1e7" Nov 29 08:12:05 crc kubenswrapper[4795]: I1129 08:12:05.147101 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lpzx\" (UniqueName: \"kubernetes.io/projected/3fd74474-a041-4d6f-84f9-90d8161e943e-kube-api-access-2lpzx\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:05 crc kubenswrapper[4795]: I1129 08:12:05.147147 4795 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fd74474-a041-4d6f-84f9-90d8161e943e-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:05 crc kubenswrapper[4795]: I1129 08:12:05.147158 4795 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fd74474-a041-4d6f-84f9-90d8161e943e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:05 crc kubenswrapper[4795]: I1129 08:12:05.148137 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fd74474-a041-4d6f-84f9-90d8161e943e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3fd74474-a041-4d6f-84f9-90d8161e943e" (UID: "3fd74474-a041-4d6f-84f9-90d8161e943e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:12:05 crc kubenswrapper[4795]: I1129 08:12:05.158806 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fd74474-a041-4d6f-84f9-90d8161e943e-config" (OuterVolumeSpecName: "config") pod "3fd74474-a041-4d6f-84f9-90d8161e943e" (UID: "3fd74474-a041-4d6f-84f9-90d8161e943e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:12:05 crc kubenswrapper[4795]: I1129 08:12:05.210666 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fd74474-a041-4d6f-84f9-90d8161e943e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3fd74474-a041-4d6f-84f9-90d8161e943e" (UID: "3fd74474-a041-4d6f-84f9-90d8161e943e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:12:05 crc kubenswrapper[4795]: I1129 08:12:05.211214 4795 scope.go:117] "RemoveContainer" containerID="8b29db2ef12ff07b0cc2c1a6b7f9ca4ec5319a14eede4bcfcb67ed46af6cd8f1" Nov 29 08:12:05 crc kubenswrapper[4795]: E1129 08:12:05.214052 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b29db2ef12ff07b0cc2c1a6b7f9ca4ec5319a14eede4bcfcb67ed46af6cd8f1\": container with ID starting with 8b29db2ef12ff07b0cc2c1a6b7f9ca4ec5319a14eede4bcfcb67ed46af6cd8f1 not found: ID does not exist" containerID="8b29db2ef12ff07b0cc2c1a6b7f9ca4ec5319a14eede4bcfcb67ed46af6cd8f1" Nov 29 08:12:05 crc kubenswrapper[4795]: I1129 08:12:05.214107 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b29db2ef12ff07b0cc2c1a6b7f9ca4ec5319a14eede4bcfcb67ed46af6cd8f1"} err="failed to get container status \"8b29db2ef12ff07b0cc2c1a6b7f9ca4ec5319a14eede4bcfcb67ed46af6cd8f1\": rpc error: code = NotFound desc = could not find container \"8b29db2ef12ff07b0cc2c1a6b7f9ca4ec5319a14eede4bcfcb67ed46af6cd8f1\": container with ID starting with 8b29db2ef12ff07b0cc2c1a6b7f9ca4ec5319a14eede4bcfcb67ed46af6cd8f1 not found: ID does not exist" Nov 29 08:12:05 crc kubenswrapper[4795]: I1129 08:12:05.214152 4795 scope.go:117] "RemoveContainer" containerID="8c97c16d2137ecd9b8901d0d08a51bab89eccaf6fa16c62308b0908b67fbd1e7" Nov 29 08:12:05 crc kubenswrapper[4795]: E1129 08:12:05.214707 4795 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c97c16d2137ecd9b8901d0d08a51bab89eccaf6fa16c62308b0908b67fbd1e7\": container with ID starting with 8c97c16d2137ecd9b8901d0d08a51bab89eccaf6fa16c62308b0908b67fbd1e7 not found: ID does not exist" containerID="8c97c16d2137ecd9b8901d0d08a51bab89eccaf6fa16c62308b0908b67fbd1e7" Nov 29 08:12:05 crc kubenswrapper[4795]: I1129 08:12:05.214754 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c97c16d2137ecd9b8901d0d08a51bab89eccaf6fa16c62308b0908b67fbd1e7"} err="failed to get container status \"8c97c16d2137ecd9b8901d0d08a51bab89eccaf6fa16c62308b0908b67fbd1e7\": rpc error: code = NotFound desc = could not find container \"8c97c16d2137ecd9b8901d0d08a51bab89eccaf6fa16c62308b0908b67fbd1e7\": container with ID starting with 8c97c16d2137ecd9b8901d0d08a51bab89eccaf6fa16c62308b0908b67fbd1e7 not found: ID does not exist" Nov 29 08:12:05 crc kubenswrapper[4795]: I1129 08:12:05.254469 4795 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fd74474-a041-4d6f-84f9-90d8161e943e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:05 crc kubenswrapper[4795]: I1129 08:12:05.254500 4795 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fd74474-a041-4d6f-84f9-90d8161e943e-config\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:05 crc kubenswrapper[4795]: I1129 08:12:05.254512 4795 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fd74474-a041-4d6f-84f9-90d8161e943e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:05 crc kubenswrapper[4795]: W1129 08:12:05.650956 4795 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ad1010b_e9f3_4cbf_aefe_4ad2bed529e1.slice/crio-3fb958af13f90a42355291ed354c84b8ade4356aadf56552b1cea8763db07a4c WatchSource:0}: Error finding container 3fb958af13f90a42355291ed354c84b8ade4356aadf56552b1cea8763db07a4c: Status 404 returned error can't find the container with id 3fb958af13f90a42355291ed354c84b8ade4356aadf56552b1cea8763db07a4c Nov 29 08:12:05 crc kubenswrapper[4795]: I1129 08:12:05.651634 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5596c69fcc-wq44q"] Nov 29 08:12:05 crc kubenswrapper[4795]: I1129 08:12:05.942065 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-5xkn9" Nov 29 08:12:05 crc kubenswrapper[4795]: I1129 08:12:05.949764 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5596c69fcc-wq44q" event={"ID":"2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1","Type":"ContainerStarted","Data":"3fb958af13f90a42355291ed354c84b8ade4356aadf56552b1cea8763db07a4c"} Nov 29 08:12:05 crc kubenswrapper[4795]: I1129 08:12:05.993251 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-5xkn9"] Nov 29 08:12:06 crc kubenswrapper[4795]: I1129 08:12:06.007201 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-5xkn9"] Nov 29 08:12:06 crc kubenswrapper[4795]: I1129 08:12:06.293388 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fd74474-a041-4d6f-84f9-90d8161e943e" path="/var/lib/kubelet/pods/3fd74474-a041-4d6f-84f9-90d8161e943e/volumes" Nov 29 08:12:06 crc kubenswrapper[4795]: I1129 08:12:06.961230 4795 generic.go:334] "Generic (PLEG): container finished" podID="2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1" containerID="1bf9b5a6c6080cb05f328d004b5710cd59621ab8ecc9fa27974732972f4f4eef" exitCode=0 Nov 29 08:12:06 crc kubenswrapper[4795]: I1129 08:12:06.961519 4795 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5596c69fcc-wq44q" event={"ID":"2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1","Type":"ContainerDied","Data":"1bf9b5a6c6080cb05f328d004b5710cd59621ab8ecc9fa27974732972f4f4eef"} Nov 29 08:12:07 crc kubenswrapper[4795]: I1129 08:12:07.975873 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-whxgt" event={"ID":"7981887b-f33c-421f-aac5-520c03b7a48a","Type":"ContainerStarted","Data":"08ee91d888f547b8d9b4ffe4dea9f5f3051cb8b8385a0596f100ada7252b6e67"} Nov 29 08:12:07 crc kubenswrapper[4795]: I1129 08:12:07.981069 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5596c69fcc-wq44q" event={"ID":"2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1","Type":"ContainerStarted","Data":"c518f9bcb1c01504daf28cf9ba9c220e2b3281b2351f0f970af3f9982278640d"} Nov 29 08:12:07 crc kubenswrapper[4795]: I1129 08:12:07.981938 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5596c69fcc-wq44q" Nov 29 08:12:08 crc kubenswrapper[4795]: I1129 08:12:08.003902 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-whxgt" podStartSLOduration=2.083404798 podStartE2EDuration="36.0038784s" podCreationTimestamp="2025-11-29 08:11:32 +0000 UTC" firstStartedPulling="2025-11-29 08:11:33.528237555 +0000 UTC m=+1939.503813345" lastFinishedPulling="2025-11-29 08:12:07.448711157 +0000 UTC m=+1973.424286947" observedRunningTime="2025-11-29 08:12:07.999346711 +0000 UTC m=+1973.974922511" watchObservedRunningTime="2025-11-29 08:12:08.0038784 +0000 UTC m=+1973.979454190" Nov 29 08:12:08 crc kubenswrapper[4795]: I1129 08:12:08.025087 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5596c69fcc-wq44q" podStartSLOduration=4.02506331 podStartE2EDuration="4.02506331s" podCreationTimestamp="2025-11-29 08:12:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:12:08.017313861 +0000 UTC m=+1973.992889651" watchObservedRunningTime="2025-11-29 08:12:08.02506331 +0000 UTC m=+1974.000639100" Nov 29 08:12:11 crc kubenswrapper[4795]: I1129 08:12:11.014127 4795 generic.go:334] "Generic (PLEG): container finished" podID="7981887b-f33c-421f-aac5-520c03b7a48a" containerID="08ee91d888f547b8d9b4ffe4dea9f5f3051cb8b8385a0596f100ada7252b6e67" exitCode=0 Nov 29 08:12:11 crc kubenswrapper[4795]: I1129 08:12:11.014229 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-whxgt" event={"ID":"7981887b-f33c-421f-aac5-520c03b7a48a","Type":"ContainerDied","Data":"08ee91d888f547b8d9b4ffe4dea9f5f3051cb8b8385a0596f100ada7252b6e67"} Nov 29 08:12:12 crc kubenswrapper[4795]: I1129 08:12:12.429192 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-whxgt" Nov 29 08:12:12 crc kubenswrapper[4795]: I1129 08:12:12.579888 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7981887b-f33c-421f-aac5-520c03b7a48a-config-data\") pod \"7981887b-f33c-421f-aac5-520c03b7a48a\" (UID: \"7981887b-f33c-421f-aac5-520c03b7a48a\") " Nov 29 08:12:12 crc kubenswrapper[4795]: I1129 08:12:12.579979 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd7nl\" (UniqueName: \"kubernetes.io/projected/7981887b-f33c-421f-aac5-520c03b7a48a-kube-api-access-zd7nl\") pod \"7981887b-f33c-421f-aac5-520c03b7a48a\" (UID: \"7981887b-f33c-421f-aac5-520c03b7a48a\") " Nov 29 08:12:12 crc kubenswrapper[4795]: I1129 08:12:12.580038 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7981887b-f33c-421f-aac5-520c03b7a48a-combined-ca-bundle\") pod \"7981887b-f33c-421f-aac5-520c03b7a48a\" (UID: 
\"7981887b-f33c-421f-aac5-520c03b7a48a\") " Nov 29 08:12:12 crc kubenswrapper[4795]: I1129 08:12:12.592238 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7981887b-f33c-421f-aac5-520c03b7a48a-kube-api-access-zd7nl" (OuterVolumeSpecName: "kube-api-access-zd7nl") pod "7981887b-f33c-421f-aac5-520c03b7a48a" (UID: "7981887b-f33c-421f-aac5-520c03b7a48a"). InnerVolumeSpecName "kube-api-access-zd7nl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:12:12 crc kubenswrapper[4795]: I1129 08:12:12.621030 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7981887b-f33c-421f-aac5-520c03b7a48a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7981887b-f33c-421f-aac5-520c03b7a48a" (UID: "7981887b-f33c-421f-aac5-520c03b7a48a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:12:12 crc kubenswrapper[4795]: I1129 08:12:12.683526 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd7nl\" (UniqueName: \"kubernetes.io/projected/7981887b-f33c-421f-aac5-520c03b7a48a-kube-api-access-zd7nl\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:12 crc kubenswrapper[4795]: I1129 08:12:12.683563 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7981887b-f33c-421f-aac5-520c03b7a48a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:12 crc kubenswrapper[4795]: I1129 08:12:12.684192 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7981887b-f33c-421f-aac5-520c03b7a48a-config-data" (OuterVolumeSpecName: "config-data") pod "7981887b-f33c-421f-aac5-520c03b7a48a" (UID: "7981887b-f33c-421f-aac5-520c03b7a48a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:12:12 crc kubenswrapper[4795]: I1129 08:12:12.785962 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7981887b-f33c-421f-aac5-520c03b7a48a-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:13 crc kubenswrapper[4795]: I1129 08:12:13.036223 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-whxgt" event={"ID":"7981887b-f33c-421f-aac5-520c03b7a48a","Type":"ContainerDied","Data":"7f7545ec8106390706d66eed52e97a7228d067dfcaf015ee9e891a447dd6c13b"} Nov 29 08:12:13 crc kubenswrapper[4795]: I1129 08:12:13.036285 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f7545ec8106390706d66eed52e97a7228d067dfcaf015ee9e891a447dd6c13b" Nov 29 08:12:13 crc kubenswrapper[4795]: I1129 08:12:13.036362 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-whxgt" Nov 29 08:12:13 crc kubenswrapper[4795]: I1129 08:12:13.973452 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-9d7d54f9b-6mtps"] Nov 29 08:12:13 crc kubenswrapper[4795]: E1129 08:12:13.974331 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fd74474-a041-4d6f-84f9-90d8161e943e" containerName="dnsmasq-dns" Nov 29 08:12:13 crc kubenswrapper[4795]: I1129 08:12:13.974347 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fd74474-a041-4d6f-84f9-90d8161e943e" containerName="dnsmasq-dns" Nov 29 08:12:13 crc kubenswrapper[4795]: E1129 08:12:13.974364 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7981887b-f33c-421f-aac5-520c03b7a48a" containerName="heat-db-sync" Nov 29 08:12:13 crc kubenswrapper[4795]: I1129 08:12:13.974370 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="7981887b-f33c-421f-aac5-520c03b7a48a" containerName="heat-db-sync" Nov 29 08:12:13 crc kubenswrapper[4795]: E1129 
08:12:13.974410 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fd74474-a041-4d6f-84f9-90d8161e943e" containerName="init" Nov 29 08:12:13 crc kubenswrapper[4795]: I1129 08:12:13.974416 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fd74474-a041-4d6f-84f9-90d8161e943e" containerName="init" Nov 29 08:12:13 crc kubenswrapper[4795]: I1129 08:12:13.974684 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fd74474-a041-4d6f-84f9-90d8161e943e" containerName="dnsmasq-dns" Nov 29 08:12:13 crc kubenswrapper[4795]: I1129 08:12:13.974717 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="7981887b-f33c-421f-aac5-520c03b7a48a" containerName="heat-db-sync" Nov 29 08:12:13 crc kubenswrapper[4795]: I1129 08:12:13.975577 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-9d7d54f9b-6mtps" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.007294 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-9d7d54f9b-6mtps"] Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.027488 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d909210-4168-4e0a-967e-dfde70b1762b-config-data-custom\") pod \"heat-engine-9d7d54f9b-6mtps\" (UID: \"2d909210-4168-4e0a-967e-dfde70b1762b\") " pod="openstack/heat-engine-9d7d54f9b-6mtps" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.028391 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xjvz\" (UniqueName: \"kubernetes.io/projected/2d909210-4168-4e0a-967e-dfde70b1762b-kube-api-access-9xjvz\") pod \"heat-engine-9d7d54f9b-6mtps\" (UID: \"2d909210-4168-4e0a-967e-dfde70b1762b\") " pod="openstack/heat-engine-9d7d54f9b-6mtps" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.028558 4795 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d909210-4168-4e0a-967e-dfde70b1762b-combined-ca-bundle\") pod \"heat-engine-9d7d54f9b-6mtps\" (UID: \"2d909210-4168-4e0a-967e-dfde70b1762b\") " pod="openstack/heat-engine-9d7d54f9b-6mtps" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.028673 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d909210-4168-4e0a-967e-dfde70b1762b-config-data\") pod \"heat-engine-9d7d54f9b-6mtps\" (UID: \"2d909210-4168-4e0a-967e-dfde70b1762b\") " pod="openstack/heat-engine-9d7d54f9b-6mtps" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.042959 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6f66c97b48-7q9bs"] Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.044976 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6f66c97b48-7q9bs" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.131801 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xjvz\" (UniqueName: \"kubernetes.io/projected/2d909210-4168-4e0a-967e-dfde70b1762b-kube-api-access-9xjvz\") pod \"heat-engine-9d7d54f9b-6mtps\" (UID: \"2d909210-4168-4e0a-967e-dfde70b1762b\") " pod="openstack/heat-engine-9d7d54f9b-6mtps" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.131874 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d909210-4168-4e0a-967e-dfde70b1762b-combined-ca-bundle\") pod \"heat-engine-9d7d54f9b-6mtps\" (UID: \"2d909210-4168-4e0a-967e-dfde70b1762b\") " pod="openstack/heat-engine-9d7d54f9b-6mtps" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.131916 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/2d909210-4168-4e0a-967e-dfde70b1762b-config-data\") pod \"heat-engine-9d7d54f9b-6mtps\" (UID: \"2d909210-4168-4e0a-967e-dfde70b1762b\") " pod="openstack/heat-engine-9d7d54f9b-6mtps" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.134900 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d909210-4168-4e0a-967e-dfde70b1762b-config-data-custom\") pod \"heat-engine-9d7d54f9b-6mtps\" (UID: \"2d909210-4168-4e0a-967e-dfde70b1762b\") " pod="openstack/heat-engine-9d7d54f9b-6mtps" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.138509 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d909210-4168-4e0a-967e-dfde70b1762b-combined-ca-bundle\") pod \"heat-engine-9d7d54f9b-6mtps\" (UID: \"2d909210-4168-4e0a-967e-dfde70b1762b\") " pod="openstack/heat-engine-9d7d54f9b-6mtps" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.140416 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d909210-4168-4e0a-967e-dfde70b1762b-config-data\") pod \"heat-engine-9d7d54f9b-6mtps\" (UID: \"2d909210-4168-4e0a-967e-dfde70b1762b\") " pod="openstack/heat-engine-9d7d54f9b-6mtps" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.140568 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d909210-4168-4e0a-967e-dfde70b1762b-config-data-custom\") pod \"heat-engine-9d7d54f9b-6mtps\" (UID: \"2d909210-4168-4e0a-967e-dfde70b1762b\") " pod="openstack/heat-engine-9d7d54f9b-6mtps" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.163560 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xjvz\" (UniqueName: 
\"kubernetes.io/projected/2d909210-4168-4e0a-967e-dfde70b1762b-kube-api-access-9xjvz\") pod \"heat-engine-9d7d54f9b-6mtps\" (UID: \"2d909210-4168-4e0a-967e-dfde70b1762b\") " pod="openstack/heat-engine-9d7d54f9b-6mtps" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.163763 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-94c46bb5b-pj8dm"] Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.166677 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-94c46bb5b-pj8dm" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.179597 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6f66c97b48-7q9bs"] Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.196974 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-94c46bb5b-pj8dm"] Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.241549 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mk75\" (UniqueName: \"kubernetes.io/projected/d6b4f039-61d1-4b2c-b912-69c1bde3e4a6-kube-api-access-2mk75\") pod \"heat-cfnapi-6f66c97b48-7q9bs\" (UID: \"d6b4f039-61d1-4b2c-b912-69c1bde3e4a6\") " pod="openstack/heat-cfnapi-6f66c97b48-7q9bs" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.241897 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/59489eb7-639e-4155-b88d-45aee638fbaa-internal-tls-certs\") pod \"heat-api-94c46bb5b-pj8dm\" (UID: \"59489eb7-639e-4155-b88d-45aee638fbaa\") " pod="openstack/heat-api-94c46bb5b-pj8dm" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.242053 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59489eb7-639e-4155-b88d-45aee638fbaa-public-tls-certs\") pod 
\"heat-api-94c46bb5b-pj8dm\" (UID: \"59489eb7-639e-4155-b88d-45aee638fbaa\") " pod="openstack/heat-api-94c46bb5b-pj8dm" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.242107 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6b4f039-61d1-4b2c-b912-69c1bde3e4a6-config-data\") pod \"heat-cfnapi-6f66c97b48-7q9bs\" (UID: \"d6b4f039-61d1-4b2c-b912-69c1bde3e4a6\") " pod="openstack/heat-cfnapi-6f66c97b48-7q9bs" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.242146 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59489eb7-639e-4155-b88d-45aee638fbaa-combined-ca-bundle\") pod \"heat-api-94c46bb5b-pj8dm\" (UID: \"59489eb7-639e-4155-b88d-45aee638fbaa\") " pod="openstack/heat-api-94c46bb5b-pj8dm" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.242240 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59489eb7-639e-4155-b88d-45aee638fbaa-config-data-custom\") pod \"heat-api-94c46bb5b-pj8dm\" (UID: \"59489eb7-639e-4155-b88d-45aee638fbaa\") " pod="openstack/heat-api-94c46bb5b-pj8dm" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.242547 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6b4f039-61d1-4b2c-b912-69c1bde3e4a6-combined-ca-bundle\") pod \"heat-cfnapi-6f66c97b48-7q9bs\" (UID: \"d6b4f039-61d1-4b2c-b912-69c1bde3e4a6\") " pod="openstack/heat-cfnapi-6f66c97b48-7q9bs" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.242594 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/59489eb7-639e-4155-b88d-45aee638fbaa-config-data\") pod \"heat-api-94c46bb5b-pj8dm\" (UID: \"59489eb7-639e-4155-b88d-45aee638fbaa\") " pod="openstack/heat-api-94c46bb5b-pj8dm" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.242888 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6b4f039-61d1-4b2c-b912-69c1bde3e4a6-config-data-custom\") pod \"heat-cfnapi-6f66c97b48-7q9bs\" (UID: \"d6b4f039-61d1-4b2c-b912-69c1bde3e4a6\") " pod="openstack/heat-cfnapi-6f66c97b48-7q9bs" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.242941 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wbnv\" (UniqueName: \"kubernetes.io/projected/59489eb7-639e-4155-b88d-45aee638fbaa-kube-api-access-9wbnv\") pod \"heat-api-94c46bb5b-pj8dm\" (UID: \"59489eb7-639e-4155-b88d-45aee638fbaa\") " pod="openstack/heat-api-94c46bb5b-pj8dm" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.242968 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6b4f039-61d1-4b2c-b912-69c1bde3e4a6-public-tls-certs\") pod \"heat-cfnapi-6f66c97b48-7q9bs\" (UID: \"d6b4f039-61d1-4b2c-b912-69c1bde3e4a6\") " pod="openstack/heat-cfnapi-6f66c97b48-7q9bs" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.243228 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6b4f039-61d1-4b2c-b912-69c1bde3e4a6-internal-tls-certs\") pod \"heat-cfnapi-6f66c97b48-7q9bs\" (UID: \"d6b4f039-61d1-4b2c-b912-69c1bde3e4a6\") " pod="openstack/heat-cfnapi-6f66c97b48-7q9bs" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.305668 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-9d7d54f9b-6mtps" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.345944 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6b4f039-61d1-4b2c-b912-69c1bde3e4a6-internal-tls-certs\") pod \"heat-cfnapi-6f66c97b48-7q9bs\" (UID: \"d6b4f039-61d1-4b2c-b912-69c1bde3e4a6\") " pod="openstack/heat-cfnapi-6f66c97b48-7q9bs" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.346009 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mk75\" (UniqueName: \"kubernetes.io/projected/d6b4f039-61d1-4b2c-b912-69c1bde3e4a6-kube-api-access-2mk75\") pod \"heat-cfnapi-6f66c97b48-7q9bs\" (UID: \"d6b4f039-61d1-4b2c-b912-69c1bde3e4a6\") " pod="openstack/heat-cfnapi-6f66c97b48-7q9bs" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.346039 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/59489eb7-639e-4155-b88d-45aee638fbaa-internal-tls-certs\") pod \"heat-api-94c46bb5b-pj8dm\" (UID: \"59489eb7-639e-4155-b88d-45aee638fbaa\") " pod="openstack/heat-api-94c46bb5b-pj8dm" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.346103 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59489eb7-639e-4155-b88d-45aee638fbaa-public-tls-certs\") pod \"heat-api-94c46bb5b-pj8dm\" (UID: \"59489eb7-639e-4155-b88d-45aee638fbaa\") " pod="openstack/heat-api-94c46bb5b-pj8dm" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.346126 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6b4f039-61d1-4b2c-b912-69c1bde3e4a6-config-data\") pod \"heat-cfnapi-6f66c97b48-7q9bs\" (UID: \"d6b4f039-61d1-4b2c-b912-69c1bde3e4a6\") " 
pod="openstack/heat-cfnapi-6f66c97b48-7q9bs" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.346145 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59489eb7-639e-4155-b88d-45aee638fbaa-combined-ca-bundle\") pod \"heat-api-94c46bb5b-pj8dm\" (UID: \"59489eb7-639e-4155-b88d-45aee638fbaa\") " pod="openstack/heat-api-94c46bb5b-pj8dm" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.346187 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59489eb7-639e-4155-b88d-45aee638fbaa-config-data-custom\") pod \"heat-api-94c46bb5b-pj8dm\" (UID: \"59489eb7-639e-4155-b88d-45aee638fbaa\") " pod="openstack/heat-api-94c46bb5b-pj8dm" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.346282 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6b4f039-61d1-4b2c-b912-69c1bde3e4a6-combined-ca-bundle\") pod \"heat-cfnapi-6f66c97b48-7q9bs\" (UID: \"d6b4f039-61d1-4b2c-b912-69c1bde3e4a6\") " pod="openstack/heat-cfnapi-6f66c97b48-7q9bs" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.346300 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59489eb7-639e-4155-b88d-45aee638fbaa-config-data\") pod \"heat-api-94c46bb5b-pj8dm\" (UID: \"59489eb7-639e-4155-b88d-45aee638fbaa\") " pod="openstack/heat-api-94c46bb5b-pj8dm" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.346354 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6b4f039-61d1-4b2c-b912-69c1bde3e4a6-config-data-custom\") pod \"heat-cfnapi-6f66c97b48-7q9bs\" (UID: \"d6b4f039-61d1-4b2c-b912-69c1bde3e4a6\") " pod="openstack/heat-cfnapi-6f66c97b48-7q9bs" Nov 29 08:12:14 crc 
kubenswrapper[4795]: I1129 08:12:14.346377 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wbnv\" (UniqueName: \"kubernetes.io/projected/59489eb7-639e-4155-b88d-45aee638fbaa-kube-api-access-9wbnv\") pod \"heat-api-94c46bb5b-pj8dm\" (UID: \"59489eb7-639e-4155-b88d-45aee638fbaa\") " pod="openstack/heat-api-94c46bb5b-pj8dm" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.346395 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6b4f039-61d1-4b2c-b912-69c1bde3e4a6-public-tls-certs\") pod \"heat-cfnapi-6f66c97b48-7q9bs\" (UID: \"d6b4f039-61d1-4b2c-b912-69c1bde3e4a6\") " pod="openstack/heat-cfnapi-6f66c97b48-7q9bs" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.352057 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59489eb7-639e-4155-b88d-45aee638fbaa-combined-ca-bundle\") pod \"heat-api-94c46bb5b-pj8dm\" (UID: \"59489eb7-639e-4155-b88d-45aee638fbaa\") " pod="openstack/heat-api-94c46bb5b-pj8dm" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.352665 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6b4f039-61d1-4b2c-b912-69c1bde3e4a6-internal-tls-certs\") pod \"heat-cfnapi-6f66c97b48-7q9bs\" (UID: \"d6b4f039-61d1-4b2c-b912-69c1bde3e4a6\") " pod="openstack/heat-cfnapi-6f66c97b48-7q9bs" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.352812 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6b4f039-61d1-4b2c-b912-69c1bde3e4a6-config-data-custom\") pod \"heat-cfnapi-6f66c97b48-7q9bs\" (UID: \"d6b4f039-61d1-4b2c-b912-69c1bde3e4a6\") " pod="openstack/heat-cfnapi-6f66c97b48-7q9bs" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.353113 4795 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6b4f039-61d1-4b2c-b912-69c1bde3e4a6-combined-ca-bundle\") pod \"heat-cfnapi-6f66c97b48-7q9bs\" (UID: \"d6b4f039-61d1-4b2c-b912-69c1bde3e4a6\") " pod="openstack/heat-cfnapi-6f66c97b48-7q9bs" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.353227 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6b4f039-61d1-4b2c-b912-69c1bde3e4a6-public-tls-certs\") pod \"heat-cfnapi-6f66c97b48-7q9bs\" (UID: \"d6b4f039-61d1-4b2c-b912-69c1bde3e4a6\") " pod="openstack/heat-cfnapi-6f66c97b48-7q9bs" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.354036 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/59489eb7-639e-4155-b88d-45aee638fbaa-internal-tls-certs\") pod \"heat-api-94c46bb5b-pj8dm\" (UID: \"59489eb7-639e-4155-b88d-45aee638fbaa\") " pod="openstack/heat-api-94c46bb5b-pj8dm" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.354196 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59489eb7-639e-4155-b88d-45aee638fbaa-public-tls-certs\") pod \"heat-api-94c46bb5b-pj8dm\" (UID: \"59489eb7-639e-4155-b88d-45aee638fbaa\") " pod="openstack/heat-api-94c46bb5b-pj8dm" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.355874 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59489eb7-639e-4155-b88d-45aee638fbaa-config-data-custom\") pod \"heat-api-94c46bb5b-pj8dm\" (UID: \"59489eb7-639e-4155-b88d-45aee638fbaa\") " pod="openstack/heat-api-94c46bb5b-pj8dm" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.357893 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d6b4f039-61d1-4b2c-b912-69c1bde3e4a6-config-data\") pod \"heat-cfnapi-6f66c97b48-7q9bs\" (UID: \"d6b4f039-61d1-4b2c-b912-69c1bde3e4a6\") " pod="openstack/heat-cfnapi-6f66c97b48-7q9bs" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.364834 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59489eb7-639e-4155-b88d-45aee638fbaa-config-data\") pod \"heat-api-94c46bb5b-pj8dm\" (UID: \"59489eb7-639e-4155-b88d-45aee638fbaa\") " pod="openstack/heat-api-94c46bb5b-pj8dm" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.365212 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wbnv\" (UniqueName: \"kubernetes.io/projected/59489eb7-639e-4155-b88d-45aee638fbaa-kube-api-access-9wbnv\") pod \"heat-api-94c46bb5b-pj8dm\" (UID: \"59489eb7-639e-4155-b88d-45aee638fbaa\") " pod="openstack/heat-api-94c46bb5b-pj8dm" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.366032 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mk75\" (UniqueName: \"kubernetes.io/projected/d6b4f039-61d1-4b2c-b912-69c1bde3e4a6-kube-api-access-2mk75\") pod \"heat-cfnapi-6f66c97b48-7q9bs\" (UID: \"d6b4f039-61d1-4b2c-b912-69c1bde3e4a6\") " pod="openstack/heat-cfnapi-6f66c97b48-7q9bs" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.380183 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6f66c97b48-7q9bs" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.570569 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-94c46bb5b-pj8dm" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.796997 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-9d7d54f9b-6mtps"] Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.924789 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5596c69fcc-wq44q" Nov 29 08:12:14 crc kubenswrapper[4795]: I1129 08:12:14.959113 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6f66c97b48-7q9bs"] Nov 29 08:12:15 crc kubenswrapper[4795]: I1129 08:12:15.011477 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-x8bmt"] Nov 29 08:12:15 crc kubenswrapper[4795]: I1129 08:12:15.011816 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-594cb89c79-x8bmt" podUID="bb9518bd-e4c7-49ee-9f39-f20800f6813c" containerName="dnsmasq-dns" containerID="cri-o://fd83879a9e4057d5f12ab4658cb4cb9e618851185046e66d22a9fe73932c8e84" gracePeriod=10 Nov 29 08:12:15 crc kubenswrapper[4795]: I1129 08:12:15.068916 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-9d7d54f9b-6mtps" event={"ID":"2d909210-4168-4e0a-967e-dfde70b1762b","Type":"ContainerStarted","Data":"b3c40123bea89cfea56673eed9160b3d0231e41e6c7807f543f94d4c36cb4db2"} Nov 29 08:12:15 crc kubenswrapper[4795]: I1129 08:12:15.068976 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-9d7d54f9b-6mtps" event={"ID":"2d909210-4168-4e0a-967e-dfde70b1762b","Type":"ContainerStarted","Data":"4d15a26400c558ef37abac88fcd8249c1a06ff632657b622f00b18a4d8a5ea94"} Nov 29 08:12:15 crc kubenswrapper[4795]: I1129 08:12:15.069145 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-9d7d54f9b-6mtps" Nov 29 08:12:15 crc kubenswrapper[4795]: I1129 08:12:15.071728 4795 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/heat-cfnapi-6f66c97b48-7q9bs" event={"ID":"d6b4f039-61d1-4b2c-b912-69c1bde3e4a6","Type":"ContainerStarted","Data":"f835d0d673afdf4a6bea44b8b230efa1e272f3c1ddc329306f43d8cccb582f2e"} Nov 29 08:12:15 crc kubenswrapper[4795]: I1129 08:12:15.106448 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-9d7d54f9b-6mtps" podStartSLOduration=2.106408816 podStartE2EDuration="2.106408816s" podCreationTimestamp="2025-11-29 08:12:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:12:15.08644002 +0000 UTC m=+1981.062015810" watchObservedRunningTime="2025-11-29 08:12:15.106408816 +0000 UTC m=+1981.081984606" Nov 29 08:12:15 crc kubenswrapper[4795]: I1129 08:12:15.153237 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-94c46bb5b-pj8dm"] Nov 29 08:12:15 crc kubenswrapper[4795]: I1129 08:12:15.276380 4795 scope.go:117] "RemoveContainer" containerID="cb788e622da5bdf804dcfa0c959dfb0158a848bc427ed90337dfd530ea78ea5d" Nov 29 08:12:15 crc kubenswrapper[4795]: E1129 08:12:15.277453 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:12:15 crc kubenswrapper[4795]: I1129 08:12:15.638736 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-x8bmt" Nov 29 08:12:15 crc kubenswrapper[4795]: I1129 08:12:15.796592 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb9518bd-e4c7-49ee-9f39-f20800f6813c-dns-svc\") pod \"bb9518bd-e4c7-49ee-9f39-f20800f6813c\" (UID: \"bb9518bd-e4c7-49ee-9f39-f20800f6813c\") " Nov 29 08:12:15 crc kubenswrapper[4795]: I1129 08:12:15.796981 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb9518bd-e4c7-49ee-9f39-f20800f6813c-config\") pod \"bb9518bd-e4c7-49ee-9f39-f20800f6813c\" (UID: \"bb9518bd-e4c7-49ee-9f39-f20800f6813c\") " Nov 29 08:12:15 crc kubenswrapper[4795]: I1129 08:12:15.797073 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wtsl\" (UniqueName: \"kubernetes.io/projected/bb9518bd-e4c7-49ee-9f39-f20800f6813c-kube-api-access-5wtsl\") pod \"bb9518bd-e4c7-49ee-9f39-f20800f6813c\" (UID: \"bb9518bd-e4c7-49ee-9f39-f20800f6813c\") " Nov 29 08:12:15 crc kubenswrapper[4795]: I1129 08:12:15.797182 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bb9518bd-e4c7-49ee-9f39-f20800f6813c-dns-swift-storage-0\") pod \"bb9518bd-e4c7-49ee-9f39-f20800f6813c\" (UID: \"bb9518bd-e4c7-49ee-9f39-f20800f6813c\") " Nov 29 08:12:15 crc kubenswrapper[4795]: I1129 08:12:15.797404 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/bb9518bd-e4c7-49ee-9f39-f20800f6813c-openstack-edpm-ipam\") pod \"bb9518bd-e4c7-49ee-9f39-f20800f6813c\" (UID: \"bb9518bd-e4c7-49ee-9f39-f20800f6813c\") " Nov 29 08:12:15 crc kubenswrapper[4795]: I1129 08:12:15.797482 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bb9518bd-e4c7-49ee-9f39-f20800f6813c-ovsdbserver-nb\") pod \"bb9518bd-e4c7-49ee-9f39-f20800f6813c\" (UID: \"bb9518bd-e4c7-49ee-9f39-f20800f6813c\") " Nov 29 08:12:15 crc kubenswrapper[4795]: I1129 08:12:15.797720 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb9518bd-e4c7-49ee-9f39-f20800f6813c-ovsdbserver-sb\") pod \"bb9518bd-e4c7-49ee-9f39-f20800f6813c\" (UID: \"bb9518bd-e4c7-49ee-9f39-f20800f6813c\") " Nov 29 08:12:15 crc kubenswrapper[4795]: I1129 08:12:15.802805 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb9518bd-e4c7-49ee-9f39-f20800f6813c-kube-api-access-5wtsl" (OuterVolumeSpecName: "kube-api-access-5wtsl") pod "bb9518bd-e4c7-49ee-9f39-f20800f6813c" (UID: "bb9518bd-e4c7-49ee-9f39-f20800f6813c"). InnerVolumeSpecName "kube-api-access-5wtsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:12:15 crc kubenswrapper[4795]: I1129 08:12:15.903407 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wtsl\" (UniqueName: \"kubernetes.io/projected/bb9518bd-e4c7-49ee-9f39-f20800f6813c-kube-api-access-5wtsl\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:15 crc kubenswrapper[4795]: I1129 08:12:15.919256 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb9518bd-e4c7-49ee-9f39-f20800f6813c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bb9518bd-e4c7-49ee-9f39-f20800f6813c" (UID: "bb9518bd-e4c7-49ee-9f39-f20800f6813c"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:12:15 crc kubenswrapper[4795]: I1129 08:12:15.940216 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb9518bd-e4c7-49ee-9f39-f20800f6813c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bb9518bd-e4c7-49ee-9f39-f20800f6813c" (UID: "bb9518bd-e4c7-49ee-9f39-f20800f6813c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:12:15 crc kubenswrapper[4795]: I1129 08:12:15.947217 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb9518bd-e4c7-49ee-9f39-f20800f6813c-config" (OuterVolumeSpecName: "config") pod "bb9518bd-e4c7-49ee-9f39-f20800f6813c" (UID: "bb9518bd-e4c7-49ee-9f39-f20800f6813c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:12:15 crc kubenswrapper[4795]: I1129 08:12:15.947966 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb9518bd-e4c7-49ee-9f39-f20800f6813c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bb9518bd-e4c7-49ee-9f39-f20800f6813c" (UID: "bb9518bd-e4c7-49ee-9f39-f20800f6813c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:12:15 crc kubenswrapper[4795]: I1129 08:12:15.949563 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb9518bd-e4c7-49ee-9f39-f20800f6813c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bb9518bd-e4c7-49ee-9f39-f20800f6813c" (UID: "bb9518bd-e4c7-49ee-9f39-f20800f6813c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:12:15 crc kubenswrapper[4795]: I1129 08:12:15.976001 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb9518bd-e4c7-49ee-9f39-f20800f6813c-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "bb9518bd-e4c7-49ee-9f39-f20800f6813c" (UID: "bb9518bd-e4c7-49ee-9f39-f20800f6813c"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:12:16 crc kubenswrapper[4795]: I1129 08:12:16.006546 4795 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/bb9518bd-e4c7-49ee-9f39-f20800f6813c-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:16 crc kubenswrapper[4795]: I1129 08:12:16.006580 4795 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bb9518bd-e4c7-49ee-9f39-f20800f6813c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:16 crc kubenswrapper[4795]: I1129 08:12:16.006592 4795 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb9518bd-e4c7-49ee-9f39-f20800f6813c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:16 crc kubenswrapper[4795]: I1129 08:12:16.006634 4795 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb9518bd-e4c7-49ee-9f39-f20800f6813c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:16 crc kubenswrapper[4795]: I1129 08:12:16.006645 4795 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb9518bd-e4c7-49ee-9f39-f20800f6813c-config\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:16 crc kubenswrapper[4795]: I1129 08:12:16.006653 4795 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/bb9518bd-e4c7-49ee-9f39-f20800f6813c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:16 crc kubenswrapper[4795]: I1129 08:12:16.086207 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-94c46bb5b-pj8dm" event={"ID":"59489eb7-639e-4155-b88d-45aee638fbaa","Type":"ContainerStarted","Data":"dab0c2aeed54b26e5a7155729bc9354e78f55237676f054d91de038ed82fdc4e"} Nov 29 08:12:16 crc kubenswrapper[4795]: I1129 08:12:16.090412 4795 generic.go:334] "Generic (PLEG): container finished" podID="bb9518bd-e4c7-49ee-9f39-f20800f6813c" containerID="fd83879a9e4057d5f12ab4658cb4cb9e618851185046e66d22a9fe73932c8e84" exitCode=0 Nov 29 08:12:16 crc kubenswrapper[4795]: I1129 08:12:16.091555 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-x8bmt" Nov 29 08:12:16 crc kubenswrapper[4795]: I1129 08:12:16.091726 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-x8bmt" event={"ID":"bb9518bd-e4c7-49ee-9f39-f20800f6813c","Type":"ContainerDied","Data":"fd83879a9e4057d5f12ab4658cb4cb9e618851185046e66d22a9fe73932c8e84"} Nov 29 08:12:16 crc kubenswrapper[4795]: I1129 08:12:16.091781 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-x8bmt" event={"ID":"bb9518bd-e4c7-49ee-9f39-f20800f6813c","Type":"ContainerDied","Data":"745a23c9d2b511e07d9589589f91d166e819ee7d56c382f502fb7023482540ce"} Nov 29 08:12:16 crc kubenswrapper[4795]: I1129 08:12:16.091801 4795 scope.go:117] "RemoveContainer" containerID="fd83879a9e4057d5f12ab4658cb4cb9e618851185046e66d22a9fe73932c8e84" Nov 29 08:12:16 crc kubenswrapper[4795]: I1129 08:12:16.172504 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-x8bmt"] Nov 29 08:12:16 crc kubenswrapper[4795]: I1129 08:12:16.183156 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-594cb89c79-x8bmt"] Nov 29 08:12:16 crc kubenswrapper[4795]: I1129 08:12:16.289239 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb9518bd-e4c7-49ee-9f39-f20800f6813c" path="/var/lib/kubelet/pods/bb9518bd-e4c7-49ee-9f39-f20800f6813c/volumes" Nov 29 08:12:16 crc kubenswrapper[4795]: I1129 08:12:16.305761 4795 scope.go:117] "RemoveContainer" containerID="8e7a7d219a2a54c308bdedb4710c195c95891776b3eb37adca13df0e7fbbb0b0" Nov 29 08:12:16 crc kubenswrapper[4795]: I1129 08:12:16.749702 4795 scope.go:117] "RemoveContainer" containerID="fd83879a9e4057d5f12ab4658cb4cb9e618851185046e66d22a9fe73932c8e84" Nov 29 08:12:16 crc kubenswrapper[4795]: E1129 08:12:16.750185 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd83879a9e4057d5f12ab4658cb4cb9e618851185046e66d22a9fe73932c8e84\": container with ID starting with fd83879a9e4057d5f12ab4658cb4cb9e618851185046e66d22a9fe73932c8e84 not found: ID does not exist" containerID="fd83879a9e4057d5f12ab4658cb4cb9e618851185046e66d22a9fe73932c8e84" Nov 29 08:12:16 crc kubenswrapper[4795]: I1129 08:12:16.750248 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd83879a9e4057d5f12ab4658cb4cb9e618851185046e66d22a9fe73932c8e84"} err="failed to get container status \"fd83879a9e4057d5f12ab4658cb4cb9e618851185046e66d22a9fe73932c8e84\": rpc error: code = NotFound desc = could not find container \"fd83879a9e4057d5f12ab4658cb4cb9e618851185046e66d22a9fe73932c8e84\": container with ID starting with fd83879a9e4057d5f12ab4658cb4cb9e618851185046e66d22a9fe73932c8e84 not found: ID does not exist" Nov 29 08:12:16 crc kubenswrapper[4795]: I1129 08:12:16.750277 4795 scope.go:117] "RemoveContainer" containerID="8e7a7d219a2a54c308bdedb4710c195c95891776b3eb37adca13df0e7fbbb0b0" Nov 29 08:12:16 crc kubenswrapper[4795]: E1129 08:12:16.750685 4795 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e7a7d219a2a54c308bdedb4710c195c95891776b3eb37adca13df0e7fbbb0b0\": container with ID starting with 8e7a7d219a2a54c308bdedb4710c195c95891776b3eb37adca13df0e7fbbb0b0 not found: ID does not exist" containerID="8e7a7d219a2a54c308bdedb4710c195c95891776b3eb37adca13df0e7fbbb0b0" Nov 29 08:12:16 crc kubenswrapper[4795]: I1129 08:12:16.750769 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e7a7d219a2a54c308bdedb4710c195c95891776b3eb37adca13df0e7fbbb0b0"} err="failed to get container status \"8e7a7d219a2a54c308bdedb4710c195c95891776b3eb37adca13df0e7fbbb0b0\": rpc error: code = NotFound desc = could not find container \"8e7a7d219a2a54c308bdedb4710c195c95891776b3eb37adca13df0e7fbbb0b0\": container with ID starting with 8e7a7d219a2a54c308bdedb4710c195c95891776b3eb37adca13df0e7fbbb0b0 not found: ID does not exist" Nov 29 08:12:18 crc kubenswrapper[4795]: I1129 08:12:18.125867 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-94c46bb5b-pj8dm" event={"ID":"59489eb7-639e-4155-b88d-45aee638fbaa","Type":"ContainerStarted","Data":"6f1861f38ed55e5f6ddff34478a0a26efde324713b06ea65498cf115debd6026"} Nov 29 08:12:18 crc kubenswrapper[4795]: I1129 08:12:18.126181 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-94c46bb5b-pj8dm" Nov 29 08:12:18 crc kubenswrapper[4795]: I1129 08:12:18.127380 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6f66c97b48-7q9bs" event={"ID":"d6b4f039-61d1-4b2c-b912-69c1bde3e4a6","Type":"ContainerStarted","Data":"a63a2da254d1d018f061effa7fa8084414b1e203acfe1368d3d01891cb889385"} Nov 29 08:12:18 crc kubenswrapper[4795]: I1129 08:12:18.127648 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6f66c97b48-7q9bs" Nov 29 08:12:18 crc kubenswrapper[4795]: I1129 08:12:18.145288 
4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-94c46bb5b-pj8dm" podStartSLOduration=3.500548075 podStartE2EDuration="5.145264113s" podCreationTimestamp="2025-11-29 08:12:13 +0000 UTC" firstStartedPulling="2025-11-29 08:12:15.166074647 +0000 UTC m=+1981.141650437" lastFinishedPulling="2025-11-29 08:12:16.810790685 +0000 UTC m=+1982.786366475" observedRunningTime="2025-11-29 08:12:18.141455845 +0000 UTC m=+1984.117031635" watchObservedRunningTime="2025-11-29 08:12:18.145264113 +0000 UTC m=+1984.120839923" Nov 29 08:12:18 crc kubenswrapper[4795]: I1129 08:12:18.171236 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6f66c97b48-7q9bs" podStartSLOduration=3.375110761 podStartE2EDuration="5.171213889s" podCreationTimestamp="2025-11-29 08:12:13 +0000 UTC" firstStartedPulling="2025-11-29 08:12:14.955579852 +0000 UTC m=+1980.931155642" lastFinishedPulling="2025-11-29 08:12:16.75168298 +0000 UTC m=+1982.727258770" observedRunningTime="2025-11-29 08:12:18.157753007 +0000 UTC m=+1984.133328797" watchObservedRunningTime="2025-11-29 08:12:18.171213889 +0000 UTC m=+1984.146789699" Nov 29 08:12:23 crc kubenswrapper[4795]: I1129 08:12:23.380536 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 29 08:12:25 crc kubenswrapper[4795]: I1129 08:12:25.078095 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r5c5p"] Nov 29 08:12:25 crc kubenswrapper[4795]: E1129 08:12:25.079193 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb9518bd-e4c7-49ee-9f39-f20800f6813c" containerName="dnsmasq-dns" Nov 29 08:12:25 crc kubenswrapper[4795]: I1129 08:12:25.079209 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb9518bd-e4c7-49ee-9f39-f20800f6813c" containerName="dnsmasq-dns" Nov 29 08:12:25 crc kubenswrapper[4795]: E1129 08:12:25.079234 4795 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb9518bd-e4c7-49ee-9f39-f20800f6813c" containerName="init" Nov 29 08:12:25 crc kubenswrapper[4795]: I1129 08:12:25.079242 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb9518bd-e4c7-49ee-9f39-f20800f6813c" containerName="init" Nov 29 08:12:25 crc kubenswrapper[4795]: I1129 08:12:25.079497 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb9518bd-e4c7-49ee-9f39-f20800f6813c" containerName="dnsmasq-dns" Nov 29 08:12:25 crc kubenswrapper[4795]: I1129 08:12:25.080362 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r5c5p" Nov 29 08:12:25 crc kubenswrapper[4795]: I1129 08:12:25.083208 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nfhsg" Nov 29 08:12:25 crc kubenswrapper[4795]: I1129 08:12:25.083476 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 08:12:25 crc kubenswrapper[4795]: I1129 08:12:25.083756 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 08:12:25 crc kubenswrapper[4795]: I1129 08:12:25.083896 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 08:12:25 crc kubenswrapper[4795]: I1129 08:12:25.095566 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r5c5p"] Nov 29 08:12:25 crc kubenswrapper[4795]: I1129 08:12:25.232829 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd6f9117-b1f3-4533-b3f6-3b614a790521-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-r5c5p\" (UID: 
\"fd6f9117-b1f3-4533-b3f6-3b614a790521\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r5c5p" Nov 29 08:12:25 crc kubenswrapper[4795]: I1129 08:12:25.233042 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd6f9117-b1f3-4533-b3f6-3b614a790521-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-r5c5p\" (UID: \"fd6f9117-b1f3-4533-b3f6-3b614a790521\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r5c5p" Nov 29 08:12:25 crc kubenswrapper[4795]: I1129 08:12:25.233146 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mslfb\" (UniqueName: \"kubernetes.io/projected/fd6f9117-b1f3-4533-b3f6-3b614a790521-kube-api-access-mslfb\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-r5c5p\" (UID: \"fd6f9117-b1f3-4533-b3f6-3b614a790521\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r5c5p" Nov 29 08:12:25 crc kubenswrapper[4795]: I1129 08:12:25.233278 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fd6f9117-b1f3-4533-b3f6-3b614a790521-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-r5c5p\" (UID: \"fd6f9117-b1f3-4533-b3f6-3b614a790521\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r5c5p" Nov 29 08:12:25 crc kubenswrapper[4795]: I1129 08:12:25.335886 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd6f9117-b1f3-4533-b3f6-3b614a790521-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-r5c5p\" (UID: \"fd6f9117-b1f3-4533-b3f6-3b614a790521\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r5c5p" Nov 29 08:12:25 crc kubenswrapper[4795]: I1129 08:12:25.336305 4795 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd6f9117-b1f3-4533-b3f6-3b614a790521-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-r5c5p\" (UID: \"fd6f9117-b1f3-4533-b3f6-3b614a790521\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r5c5p" Nov 29 08:12:25 crc kubenswrapper[4795]: I1129 08:12:25.336357 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mslfb\" (UniqueName: \"kubernetes.io/projected/fd6f9117-b1f3-4533-b3f6-3b614a790521-kube-api-access-mslfb\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-r5c5p\" (UID: \"fd6f9117-b1f3-4533-b3f6-3b614a790521\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r5c5p" Nov 29 08:12:25 crc kubenswrapper[4795]: I1129 08:12:25.336414 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fd6f9117-b1f3-4533-b3f6-3b614a790521-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-r5c5p\" (UID: \"fd6f9117-b1f3-4533-b3f6-3b614a790521\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r5c5p" Nov 29 08:12:25 crc kubenswrapper[4795]: I1129 08:12:25.349264 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fd6f9117-b1f3-4533-b3f6-3b614a790521-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-r5c5p\" (UID: \"fd6f9117-b1f3-4533-b3f6-3b614a790521\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r5c5p" Nov 29 08:12:25 crc kubenswrapper[4795]: I1129 08:12:25.349584 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd6f9117-b1f3-4533-b3f6-3b614a790521-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-r5c5p\" (UID: 
\"fd6f9117-b1f3-4533-b3f6-3b614a790521\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r5c5p" Nov 29 08:12:25 crc kubenswrapper[4795]: I1129 08:12:25.350036 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd6f9117-b1f3-4533-b3f6-3b614a790521-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-r5c5p\" (UID: \"fd6f9117-b1f3-4533-b3f6-3b614a790521\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r5c5p" Nov 29 08:12:25 crc kubenswrapper[4795]: I1129 08:12:25.363826 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mslfb\" (UniqueName: \"kubernetes.io/projected/fd6f9117-b1f3-4533-b3f6-3b614a790521-kube-api-access-mslfb\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-r5c5p\" (UID: \"fd6f9117-b1f3-4533-b3f6-3b614a790521\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r5c5p" Nov 29 08:12:25 crc kubenswrapper[4795]: I1129 08:12:25.409355 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r5c5p" Nov 29 08:12:26 crc kubenswrapper[4795]: I1129 08:12:26.228754 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r5c5p"] Nov 29 08:12:26 crc kubenswrapper[4795]: I1129 08:12:26.276476 4795 scope.go:117] "RemoveContainer" containerID="cb788e622da5bdf804dcfa0c959dfb0158a848bc427ed90337dfd530ea78ea5d" Nov 29 08:12:26 crc kubenswrapper[4795]: E1129 08:12:26.277276 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:12:26 crc kubenswrapper[4795]: I1129 08:12:26.471750 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-94c46bb5b-pj8dm" Nov 29 08:12:26 crc kubenswrapper[4795]: I1129 08:12:26.495301 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-6f66c97b48-7q9bs" Nov 29 08:12:26 crc kubenswrapper[4795]: I1129 08:12:26.540720 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-b49cd6b59-h4cg5"] Nov 29 08:12:26 crc kubenswrapper[4795]: I1129 08:12:26.540993 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-b49cd6b59-h4cg5" podUID="3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5" containerName="heat-api" containerID="cri-o://abe61bedf1ec9c606b02562c7d6cc6685d793dc113e02a4333b31f8db4dc6c40" gracePeriod=60 Nov 29 08:12:26 crc kubenswrapper[4795]: I1129 08:12:26.583197 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-67fb9f5ff-86s7p"] Nov 29 
08:12:26 crc kubenswrapper[4795]: I1129 08:12:26.583607 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-67fb9f5ff-86s7p" podUID="8a4c1b6d-0527-4254-819b-ce068ddb20d8" containerName="heat-cfnapi" containerID="cri-o://f33376e1c1e9327f8fd667be8faa6f1666b224e32f6426459c3eacb213772e60" gracePeriod=60 Nov 29 08:12:27 crc kubenswrapper[4795]: I1129 08:12:27.233742 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r5c5p" event={"ID":"fd6f9117-b1f3-4533-b3f6-3b614a790521","Type":"ContainerStarted","Data":"6441ac793a790599f6cc1a2f3e05e0205a79e0f2543d658d840d2f506de1e064"} Nov 29 08:12:28 crc kubenswrapper[4795]: I1129 08:12:28.216740 4795 scope.go:117] "RemoveContainer" containerID="548f34c2652851ca8b5e064e016c8e5abc6aa699b8a7e546ffbe69e24e732616" Nov 29 08:12:28 crc kubenswrapper[4795]: I1129 08:12:28.256834 4795 scope.go:117] "RemoveContainer" containerID="f487adc56134b7e9888c25a49a74e448e87beaf406191e404aa4540ab96cf4d5" Nov 29 08:12:28 crc kubenswrapper[4795]: I1129 08:12:28.293411 4795 scope.go:117] "RemoveContainer" containerID="0256a00828c9b065706a5219cde8a48ea49ec525a93fa015091b651ed2af45f5" Nov 29 08:12:28 crc kubenswrapper[4795]: I1129 08:12:28.321567 4795 scope.go:117] "RemoveContainer" containerID="6626ac55052d6cec6be4cab26b79b381bbc946d286f0273e78ae2de2ebb94a04" Nov 29 08:12:30 crc kubenswrapper[4795]: I1129 08:12:30.161338 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-b49cd6b59-h4cg5" podUID="3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.218:8004/healthcheck\": dial tcp 10.217.0.218:8004: connect: connection refused" Nov 29 08:12:30 crc kubenswrapper[4795]: I1129 08:12:30.170689 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-67fb9f5ff-86s7p" podUID="8a4c1b6d-0527-4254-819b-ce068ddb20d8" 
containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.219:8000/healthcheck\": dial tcp 10.217.0.219:8000: connect: connection refused" Nov 29 08:12:30 crc kubenswrapper[4795]: I1129 08:12:30.268151 4795 generic.go:334] "Generic (PLEG): container finished" podID="8a4c1b6d-0527-4254-819b-ce068ddb20d8" containerID="f33376e1c1e9327f8fd667be8faa6f1666b224e32f6426459c3eacb213772e60" exitCode=0 Nov 29 08:12:30 crc kubenswrapper[4795]: I1129 08:12:30.268227 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-67fb9f5ff-86s7p" event={"ID":"8a4c1b6d-0527-4254-819b-ce068ddb20d8","Type":"ContainerDied","Data":"f33376e1c1e9327f8fd667be8faa6f1666b224e32f6426459c3eacb213772e60"} Nov 29 08:12:30 crc kubenswrapper[4795]: I1129 08:12:30.271011 4795 generic.go:334] "Generic (PLEG): container finished" podID="3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5" containerID="abe61bedf1ec9c606b02562c7d6cc6685d793dc113e02a4333b31f8db4dc6c40" exitCode=0 Nov 29 08:12:30 crc kubenswrapper[4795]: I1129 08:12:30.271043 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-b49cd6b59-h4cg5" event={"ID":"3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5","Type":"ContainerDied","Data":"abe61bedf1ec9c606b02562c7d6cc6685d793dc113e02a4333b31f8db4dc6c40"} Nov 29 08:12:32 crc kubenswrapper[4795]: I1129 08:12:32.296181 4795 generic.go:334] "Generic (PLEG): container finished" podID="d1c7dfa2-1b2a-438d-9378-fd998f873999" containerID="1c0e5b131ed3937f93562a50240cb6b9c2562d445ddbcce4478062b5f190274b" exitCode=0 Nov 29 08:12:32 crc kubenswrapper[4795]: I1129 08:12:32.296396 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d1c7dfa2-1b2a-438d-9378-fd998f873999","Type":"ContainerDied","Data":"1c0e5b131ed3937f93562a50240cb6b9c2562d445ddbcce4478062b5f190274b"} Nov 29 08:12:32 crc kubenswrapper[4795]: I1129 08:12:32.303440 4795 generic.go:334] "Generic (PLEG): container finished" 
podID="85a82139-8137-40d2-a6e9-b384592f9919" containerID="4a16d289d8ef75df118f17d942e6a7f8b057ae6c7aad1dc07fe3a3df5cf42be2" exitCode=0 Nov 29 08:12:32 crc kubenswrapper[4795]: I1129 08:12:32.303501 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"85a82139-8137-40d2-a6e9-b384592f9919","Type":"ContainerDied","Data":"4a16d289d8ef75df118f17d942e6a7f8b057ae6c7aad1dc07fe3a3df5cf42be2"} Nov 29 08:12:34 crc kubenswrapper[4795]: I1129 08:12:34.343046 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-9d7d54f9b-6mtps" Nov 29 08:12:34 crc kubenswrapper[4795]: I1129 08:12:34.403145 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-746fb69fd5-n8596"] Nov 29 08:12:34 crc kubenswrapper[4795]: I1129 08:12:34.403648 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-746fb69fd5-n8596" podUID="c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a" containerName="heat-engine" containerID="cri-o://35f22de4a7a9f0afaacfcd28c644998d37499261b4a684b69d858a9a5449bbe9" gracePeriod=60 Nov 29 08:12:35 crc kubenswrapper[4795]: I1129 08:12:35.161093 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-b49cd6b59-h4cg5" podUID="3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.218:8004/healthcheck\": dial tcp 10.217.0.218:8004: connect: connection refused" Nov 29 08:12:35 crc kubenswrapper[4795]: I1129 08:12:35.172842 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-67fb9f5ff-86s7p" podUID="8a4c1b6d-0527-4254-819b-ce068ddb20d8" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.219:8000/healthcheck\": dial tcp 10.217.0.219:8000: connect: connection refused" Nov 29 08:12:35 crc kubenswrapper[4795]: I1129 08:12:35.679931 4795 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/heat-api-b49cd6b59-h4cg5" Nov 29 08:12:35 crc kubenswrapper[4795]: I1129 08:12:35.729887 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-67fb9f5ff-86s7p" Nov 29 08:12:35 crc kubenswrapper[4795]: I1129 08:12:35.867705 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a4c1b6d-0527-4254-819b-ce068ddb20d8-internal-tls-certs\") pod \"8a4c1b6d-0527-4254-819b-ce068ddb20d8\" (UID: \"8a4c1b6d-0527-4254-819b-ce068ddb20d8\") " Nov 29 08:12:35 crc kubenswrapper[4795]: I1129 08:12:35.867842 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5-public-tls-certs\") pod \"3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5\" (UID: \"3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5\") " Nov 29 08:12:35 crc kubenswrapper[4795]: I1129 08:12:35.867898 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5-config-data-custom\") pod \"3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5\" (UID: \"3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5\") " Nov 29 08:12:35 crc kubenswrapper[4795]: I1129 08:12:35.867932 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a4c1b6d-0527-4254-819b-ce068ddb20d8-config-data-custom\") pod \"8a4c1b6d-0527-4254-819b-ce068ddb20d8\" (UID: \"8a4c1b6d-0527-4254-819b-ce068ddb20d8\") " Nov 29 08:12:35 crc kubenswrapper[4795]: I1129 08:12:35.867981 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a4c1b6d-0527-4254-819b-ce068ddb20d8-combined-ca-bundle\") pod 
\"8a4c1b6d-0527-4254-819b-ce068ddb20d8\" (UID: \"8a4c1b6d-0527-4254-819b-ce068ddb20d8\") " Nov 29 08:12:35 crc kubenswrapper[4795]: I1129 08:12:35.868021 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5-internal-tls-certs\") pod \"3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5\" (UID: \"3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5\") " Nov 29 08:12:35 crc kubenswrapper[4795]: I1129 08:12:35.868058 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f48nm\" (UniqueName: \"kubernetes.io/projected/3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5-kube-api-access-f48nm\") pod \"3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5\" (UID: \"3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5\") " Nov 29 08:12:35 crc kubenswrapper[4795]: I1129 08:12:35.868082 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a4c1b6d-0527-4254-819b-ce068ddb20d8-config-data\") pod \"8a4c1b6d-0527-4254-819b-ce068ddb20d8\" (UID: \"8a4c1b6d-0527-4254-819b-ce068ddb20d8\") " Nov 29 08:12:35 crc kubenswrapper[4795]: I1129 08:12:35.868108 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2924q\" (UniqueName: \"kubernetes.io/projected/8a4c1b6d-0527-4254-819b-ce068ddb20d8-kube-api-access-2924q\") pod \"8a4c1b6d-0527-4254-819b-ce068ddb20d8\" (UID: \"8a4c1b6d-0527-4254-819b-ce068ddb20d8\") " Nov 29 08:12:35 crc kubenswrapper[4795]: I1129 08:12:35.868153 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a4c1b6d-0527-4254-819b-ce068ddb20d8-public-tls-certs\") pod \"8a4c1b6d-0527-4254-819b-ce068ddb20d8\" (UID: \"8a4c1b6d-0527-4254-819b-ce068ddb20d8\") " Nov 29 08:12:35 crc kubenswrapper[4795]: I1129 08:12:35.868208 4795 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5-combined-ca-bundle\") pod \"3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5\" (UID: \"3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5\") " Nov 29 08:12:35 crc kubenswrapper[4795]: I1129 08:12:35.868232 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5-config-data\") pod \"3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5\" (UID: \"3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5\") " Nov 29 08:12:35 crc kubenswrapper[4795]: I1129 08:12:35.876141 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5-kube-api-access-f48nm" (OuterVolumeSpecName: "kube-api-access-f48nm") pod "3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5" (UID: "3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5"). InnerVolumeSpecName "kube-api-access-f48nm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:12:35 crc kubenswrapper[4795]: I1129 08:12:35.877837 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5" (UID: "3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:12:35 crc kubenswrapper[4795]: I1129 08:12:35.882478 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a4c1b6d-0527-4254-819b-ce068ddb20d8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8a4c1b6d-0527-4254-819b-ce068ddb20d8" (UID: "8a4c1b6d-0527-4254-819b-ce068ddb20d8"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:12:35 crc kubenswrapper[4795]: I1129 08:12:35.907652 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a4c1b6d-0527-4254-819b-ce068ddb20d8-kube-api-access-2924q" (OuterVolumeSpecName: "kube-api-access-2924q") pod "8a4c1b6d-0527-4254-819b-ce068ddb20d8" (UID: "8a4c1b6d-0527-4254-819b-ce068ddb20d8"). InnerVolumeSpecName "kube-api-access-2924q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:12:35 crc kubenswrapper[4795]: I1129 08:12:35.925872 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5" (UID: "3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:12:35 crc kubenswrapper[4795]: I1129 08:12:35.931669 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a4c1b6d-0527-4254-819b-ce068ddb20d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a4c1b6d-0527-4254-819b-ce068ddb20d8" (UID: "8a4c1b6d-0527-4254-819b-ce068ddb20d8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:12:35 crc kubenswrapper[4795]: I1129 08:12:35.957227 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5" (UID: "3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:12:35 crc kubenswrapper[4795]: I1129 08:12:35.961380 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5-config-data" (OuterVolumeSpecName: "config-data") pod "3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5" (UID: "3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:12:35 crc kubenswrapper[4795]: I1129 08:12:35.971068 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a4c1b6d-0527-4254-819b-ce068ddb20d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:35 crc kubenswrapper[4795]: I1129 08:12:35.971102 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f48nm\" (UniqueName: \"kubernetes.io/projected/3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5-kube-api-access-f48nm\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:35 crc kubenswrapper[4795]: I1129 08:12:35.971114 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2924q\" (UniqueName: \"kubernetes.io/projected/8a4c1b6d-0527-4254-819b-ce068ddb20d8-kube-api-access-2924q\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:35 crc kubenswrapper[4795]: I1129 08:12:35.971122 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:35 crc kubenswrapper[4795]: I1129 08:12:35.971130 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:35 crc kubenswrapper[4795]: I1129 08:12:35.971139 4795 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:35 crc kubenswrapper[4795]: I1129 08:12:35.971150 4795 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:35 crc kubenswrapper[4795]: I1129 08:12:35.971158 4795 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a4c1b6d-0527-4254-819b-ce068ddb20d8-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:35 crc kubenswrapper[4795]: I1129 08:12:35.971185 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a4c1b6d-0527-4254-819b-ce068ddb20d8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8a4c1b6d-0527-4254-819b-ce068ddb20d8" (UID: "8a4c1b6d-0527-4254-819b-ce068ddb20d8"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:12:35 crc kubenswrapper[4795]: I1129 08:12:35.973811 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a4c1b6d-0527-4254-819b-ce068ddb20d8-config-data" (OuterVolumeSpecName: "config-data") pod "8a4c1b6d-0527-4254-819b-ce068ddb20d8" (UID: "8a4c1b6d-0527-4254-819b-ce068ddb20d8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:12:35 crc kubenswrapper[4795]: I1129 08:12:35.973885 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a4c1b6d-0527-4254-819b-ce068ddb20d8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8a4c1b6d-0527-4254-819b-ce068ddb20d8" (UID: "8a4c1b6d-0527-4254-819b-ce068ddb20d8"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:12:35 crc kubenswrapper[4795]: I1129 08:12:35.976628 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5" (UID: "3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:12:36 crc kubenswrapper[4795]: I1129 08:12:36.074835 4795 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:36 crc kubenswrapper[4795]: I1129 08:12:36.075546 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a4c1b6d-0527-4254-819b-ce068ddb20d8-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:36 crc kubenswrapper[4795]: I1129 08:12:36.075560 4795 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a4c1b6d-0527-4254-819b-ce068ddb20d8-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:36 crc kubenswrapper[4795]: I1129 08:12:36.075576 4795 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a4c1b6d-0527-4254-819b-ce068ddb20d8-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:36 crc kubenswrapper[4795]: I1129 08:12:36.398772 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-67fb9f5ff-86s7p" Nov 29 08:12:36 crc kubenswrapper[4795]: I1129 08:12:36.398772 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-67fb9f5ff-86s7p" event={"ID":"8a4c1b6d-0527-4254-819b-ce068ddb20d8","Type":"ContainerDied","Data":"7839e15622d0d53d71a37f793b83e011854b1e7b488021199c579e1126edd358"} Nov 29 08:12:36 crc kubenswrapper[4795]: I1129 08:12:36.398895 4795 scope.go:117] "RemoveContainer" containerID="f33376e1c1e9327f8fd667be8faa6f1666b224e32f6426459c3eacb213772e60" Nov 29 08:12:36 crc kubenswrapper[4795]: I1129 08:12:36.401428 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d1c7dfa2-1b2a-438d-9378-fd998f873999","Type":"ContainerStarted","Data":"c11614100e3cd0f4a3e8b42a3ec41a149aae6143f2d6b59843f221a2de973998"} Nov 29 08:12:36 crc kubenswrapper[4795]: I1129 08:12:36.402287 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 29 08:12:36 crc kubenswrapper[4795]: I1129 08:12:36.404523 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-b49cd6b59-h4cg5" Nov 29 08:12:36 crc kubenswrapper[4795]: I1129 08:12:36.405299 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-b49cd6b59-h4cg5" event={"ID":"3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5","Type":"ContainerDied","Data":"8a83e0c389b24600720243a375bce209c86afe70a471b447c354e7aafa7b83b5"} Nov 29 08:12:36 crc kubenswrapper[4795]: I1129 08:12:36.406757 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r5c5p" event={"ID":"fd6f9117-b1f3-4533-b3f6-3b614a790521","Type":"ContainerStarted","Data":"280220e7b1dd59722480e8464d3da59c55ea832fcc0e9155cde4ffa0d89b5224"} Nov 29 08:12:36 crc kubenswrapper[4795]: I1129 08:12:36.408750 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"85a82139-8137-40d2-a6e9-b384592f9919","Type":"ContainerStarted","Data":"c7831a299c39a32dcadda7e222b95da9176595bee3fbcc6fa71211779205e25f"} Nov 29 08:12:36 crc kubenswrapper[4795]: I1129 08:12:36.408942 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:12:36 crc kubenswrapper[4795]: I1129 08:12:36.434910 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-67fb9f5ff-86s7p"] Nov 29 08:12:36 crc kubenswrapper[4795]: I1129 08:12:36.439168 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-67fb9f5ff-86s7p"] Nov 29 08:12:36 crc kubenswrapper[4795]: I1129 08:12:36.443075 4795 scope.go:117] "RemoveContainer" containerID="abe61bedf1ec9c606b02562c7d6cc6685d793dc113e02a4333b31f8db4dc6c40" Nov 29 08:12:36 crc kubenswrapper[4795]: I1129 08:12:36.465429 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=42.465408483 podStartE2EDuration="42.465408483s" podCreationTimestamp="2025-11-29 08:11:54 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:12:36.462365637 +0000 UTC m=+2002.437941437" watchObservedRunningTime="2025-11-29 08:12:36.465408483 +0000 UTC m=+2002.440984273" Nov 29 08:12:36 crc kubenswrapper[4795]: I1129 08:12:36.488572 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-b49cd6b59-h4cg5"] Nov 29 08:12:36 crc kubenswrapper[4795]: I1129 08:12:36.500993 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-b49cd6b59-h4cg5"] Nov 29 08:12:36 crc kubenswrapper[4795]: I1129 08:12:36.520564 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r5c5p" podStartSLOduration=2.490503026 podStartE2EDuration="11.520537545s" podCreationTimestamp="2025-11-29 08:12:25 +0000 UTC" firstStartedPulling="2025-11-29 08:12:26.244780564 +0000 UTC m=+1992.220356344" lastFinishedPulling="2025-11-29 08:12:35.274815073 +0000 UTC m=+2001.250390863" observedRunningTime="2025-11-29 08:12:36.495785194 +0000 UTC m=+2002.471360984" watchObservedRunningTime="2025-11-29 08:12:36.520537545 +0000 UTC m=+2002.496113335" Nov 29 08:12:36 crc kubenswrapper[4795]: I1129 08:12:36.533569 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=42.533551264 podStartE2EDuration="42.533551264s" podCreationTimestamp="2025-11-29 08:11:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:12:36.525123075 +0000 UTC m=+2002.500698865" watchObservedRunningTime="2025-11-29 08:12:36.533551264 +0000 UTC m=+2002.509127054" Nov 29 08:12:38 crc kubenswrapper[4795]: I1129 08:12:38.275722 4795 scope.go:117] "RemoveContainer" containerID="cb788e622da5bdf804dcfa0c959dfb0158a848bc427ed90337dfd530ea78ea5d" Nov 29 
08:12:38 crc kubenswrapper[4795]: E1129 08:12:38.276309 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:12:38 crc kubenswrapper[4795]: I1129 08:12:38.287645 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5" path="/var/lib/kubelet/pods/3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5/volumes" Nov 29 08:12:38 crc kubenswrapper[4795]: I1129 08:12:38.288320 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a4c1b6d-0527-4254-819b-ce068ddb20d8" path="/var/lib/kubelet/pods/8a4c1b6d-0527-4254-819b-ce068ddb20d8/volumes" Nov 29 08:12:38 crc kubenswrapper[4795]: E1129 08:12:38.359865 4795 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="35f22de4a7a9f0afaacfcd28c644998d37499261b4a684b69d858a9a5449bbe9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 29 08:12:38 crc kubenswrapper[4795]: E1129 08:12:38.361579 4795 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="35f22de4a7a9f0afaacfcd28c644998d37499261b4a684b69d858a9a5449bbe9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 29 08:12:38 crc kubenswrapper[4795]: E1129 08:12:38.362858 4795 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: 
, stderr: , exit code -1" containerID="35f22de4a7a9f0afaacfcd28c644998d37499261b4a684b69d858a9a5449bbe9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 29 08:12:38 crc kubenswrapper[4795]: E1129 08:12:38.362888 4795 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-746fb69fd5-n8596" podUID="c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a" containerName="heat-engine" Nov 29 08:12:42 crc kubenswrapper[4795]: I1129 08:12:42.659541 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-2r5cb"] Nov 29 08:12:42 crc kubenswrapper[4795]: I1129 08:12:42.670215 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-2r5cb"] Nov 29 08:12:42 crc kubenswrapper[4795]: I1129 08:12:42.763648 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-6wfqc"] Nov 29 08:12:42 crc kubenswrapper[4795]: E1129 08:12:42.764133 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a4c1b6d-0527-4254-819b-ce068ddb20d8" containerName="heat-cfnapi" Nov 29 08:12:42 crc kubenswrapper[4795]: I1129 08:12:42.764151 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a4c1b6d-0527-4254-819b-ce068ddb20d8" containerName="heat-cfnapi" Nov 29 08:12:42 crc kubenswrapper[4795]: E1129 08:12:42.764162 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5" containerName="heat-api" Nov 29 08:12:42 crc kubenswrapper[4795]: I1129 08:12:42.764169 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5" containerName="heat-api" Nov 29 08:12:42 crc kubenswrapper[4795]: I1129 08:12:42.764440 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b7ecd96-9c05-4dc4-81ed-b7cf256cc0f5" containerName="heat-api" Nov 29 08:12:42 crc 
kubenswrapper[4795]: I1129 08:12:42.764464 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a4c1b6d-0527-4254-819b-ce068ddb20d8" containerName="heat-cfnapi" Nov 29 08:12:42 crc kubenswrapper[4795]: I1129 08:12:42.765226 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-6wfqc" Nov 29 08:12:42 crc kubenswrapper[4795]: I1129 08:12:42.769815 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 29 08:12:42 crc kubenswrapper[4795]: I1129 08:12:42.796637 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-6wfqc"] Nov 29 08:12:42 crc kubenswrapper[4795]: I1129 08:12:42.851144 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be948c99-208e-4f5c-8ab7-5971d1efb06e-config-data\") pod \"aodh-db-sync-6wfqc\" (UID: \"be948c99-208e-4f5c-8ab7-5971d1efb06e\") " pod="openstack/aodh-db-sync-6wfqc" Nov 29 08:12:42 crc kubenswrapper[4795]: I1129 08:12:42.851301 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be948c99-208e-4f5c-8ab7-5971d1efb06e-scripts\") pod \"aodh-db-sync-6wfqc\" (UID: \"be948c99-208e-4f5c-8ab7-5971d1efb06e\") " pod="openstack/aodh-db-sync-6wfqc" Nov 29 08:12:42 crc kubenswrapper[4795]: I1129 08:12:42.851957 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be948c99-208e-4f5c-8ab7-5971d1efb06e-combined-ca-bundle\") pod \"aodh-db-sync-6wfqc\" (UID: \"be948c99-208e-4f5c-8ab7-5971d1efb06e\") " pod="openstack/aodh-db-sync-6wfqc" Nov 29 08:12:42 crc kubenswrapper[4795]: I1129 08:12:42.852087 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qn6h\" 
(UniqueName: \"kubernetes.io/projected/be948c99-208e-4f5c-8ab7-5971d1efb06e-kube-api-access-9qn6h\") pod \"aodh-db-sync-6wfqc\" (UID: \"be948c99-208e-4f5c-8ab7-5971d1efb06e\") " pod="openstack/aodh-db-sync-6wfqc" Nov 29 08:12:42 crc kubenswrapper[4795]: I1129 08:12:42.953565 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qn6h\" (UniqueName: \"kubernetes.io/projected/be948c99-208e-4f5c-8ab7-5971d1efb06e-kube-api-access-9qn6h\") pod \"aodh-db-sync-6wfqc\" (UID: \"be948c99-208e-4f5c-8ab7-5971d1efb06e\") " pod="openstack/aodh-db-sync-6wfqc" Nov 29 08:12:42 crc kubenswrapper[4795]: I1129 08:12:42.953623 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be948c99-208e-4f5c-8ab7-5971d1efb06e-config-data\") pod \"aodh-db-sync-6wfqc\" (UID: \"be948c99-208e-4f5c-8ab7-5971d1efb06e\") " pod="openstack/aodh-db-sync-6wfqc" Nov 29 08:12:42 crc kubenswrapper[4795]: I1129 08:12:42.953694 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be948c99-208e-4f5c-8ab7-5971d1efb06e-scripts\") pod \"aodh-db-sync-6wfqc\" (UID: \"be948c99-208e-4f5c-8ab7-5971d1efb06e\") " pod="openstack/aodh-db-sync-6wfqc" Nov 29 08:12:42 crc kubenswrapper[4795]: I1129 08:12:42.953842 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be948c99-208e-4f5c-8ab7-5971d1efb06e-combined-ca-bundle\") pod \"aodh-db-sync-6wfqc\" (UID: \"be948c99-208e-4f5c-8ab7-5971d1efb06e\") " pod="openstack/aodh-db-sync-6wfqc" Nov 29 08:12:42 crc kubenswrapper[4795]: I1129 08:12:42.960112 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be948c99-208e-4f5c-8ab7-5971d1efb06e-combined-ca-bundle\") pod \"aodh-db-sync-6wfqc\" (UID: 
\"be948c99-208e-4f5c-8ab7-5971d1efb06e\") " pod="openstack/aodh-db-sync-6wfqc" Nov 29 08:12:42 crc kubenswrapper[4795]: I1129 08:12:42.965049 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be948c99-208e-4f5c-8ab7-5971d1efb06e-scripts\") pod \"aodh-db-sync-6wfqc\" (UID: \"be948c99-208e-4f5c-8ab7-5971d1efb06e\") " pod="openstack/aodh-db-sync-6wfqc" Nov 29 08:12:42 crc kubenswrapper[4795]: I1129 08:12:42.966943 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be948c99-208e-4f5c-8ab7-5971d1efb06e-config-data\") pod \"aodh-db-sync-6wfqc\" (UID: \"be948c99-208e-4f5c-8ab7-5971d1efb06e\") " pod="openstack/aodh-db-sync-6wfqc" Nov 29 08:12:42 crc kubenswrapper[4795]: I1129 08:12:42.981333 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qn6h\" (UniqueName: \"kubernetes.io/projected/be948c99-208e-4f5c-8ab7-5971d1efb06e-kube-api-access-9qn6h\") pod \"aodh-db-sync-6wfqc\" (UID: \"be948c99-208e-4f5c-8ab7-5971d1efb06e\") " pod="openstack/aodh-db-sync-6wfqc" Nov 29 08:12:43 crc kubenswrapper[4795]: I1129 08:12:43.102503 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-6wfqc" Nov 29 08:12:43 crc kubenswrapper[4795]: W1129 08:12:43.642970 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe948c99_208e_4f5c_8ab7_5971d1efb06e.slice/crio-239be3fc1f8f8a8ffdf14609ade64aacda62a32059e6be9d1328dc93ca2eecb9 WatchSource:0}: Error finding container 239be3fc1f8f8a8ffdf14609ade64aacda62a32059e6be9d1328dc93ca2eecb9: Status 404 returned error can't find the container with id 239be3fc1f8f8a8ffdf14609ade64aacda62a32059e6be9d1328dc93ca2eecb9 Nov 29 08:12:43 crc kubenswrapper[4795]: I1129 08:12:43.643991 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-6wfqc"] Nov 29 08:12:43 crc kubenswrapper[4795]: I1129 08:12:43.645513 4795 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 08:12:44 crc kubenswrapper[4795]: I1129 08:12:44.289354 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee6363b5-2fea-42a6-94fd-748c7c4c3e66" path="/var/lib/kubelet/pods/ee6363b5-2fea-42a6-94fd-748c7c4c3e66/volumes" Nov 29 08:12:44 crc kubenswrapper[4795]: I1129 08:12:44.497614 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-6wfqc" event={"ID":"be948c99-208e-4f5c-8ab7-5971d1efb06e","Type":"ContainerStarted","Data":"239be3fc1f8f8a8ffdf14609ade64aacda62a32059e6be9d1328dc93ca2eecb9"} Nov 29 08:12:47 crc kubenswrapper[4795]: I1129 08:12:47.529496 4795 generic.go:334] "Generic (PLEG): container finished" podID="fd6f9117-b1f3-4533-b3f6-3b614a790521" containerID="280220e7b1dd59722480e8464d3da59c55ea832fcc0e9155cde4ffa0d89b5224" exitCode=0 Nov 29 08:12:47 crc kubenswrapper[4795]: I1129 08:12:47.529568 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r5c5p" 
event={"ID":"fd6f9117-b1f3-4533-b3f6-3b614a790521","Type":"ContainerDied","Data":"280220e7b1dd59722480e8464d3da59c55ea832fcc0e9155cde4ffa0d89b5224"} Nov 29 08:12:47 crc kubenswrapper[4795]: I1129 08:12:47.920377 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z2fm2"] Nov 29 08:12:47 crc kubenswrapper[4795]: I1129 08:12:47.923756 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z2fm2" Nov 29 08:12:47 crc kubenswrapper[4795]: I1129 08:12:47.939401 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z2fm2"] Nov 29 08:12:47 crc kubenswrapper[4795]: I1129 08:12:47.992031 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08402fb0-7ff1-492e-8d2b-423e248fb787-utilities\") pod \"redhat-operators-z2fm2\" (UID: \"08402fb0-7ff1-492e-8d2b-423e248fb787\") " pod="openshift-marketplace/redhat-operators-z2fm2" Nov 29 08:12:47 crc kubenswrapper[4795]: I1129 08:12:47.992134 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kqhq\" (UniqueName: \"kubernetes.io/projected/08402fb0-7ff1-492e-8d2b-423e248fb787-kube-api-access-2kqhq\") pod \"redhat-operators-z2fm2\" (UID: \"08402fb0-7ff1-492e-8d2b-423e248fb787\") " pod="openshift-marketplace/redhat-operators-z2fm2" Nov 29 08:12:47 crc kubenswrapper[4795]: I1129 08:12:47.992174 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08402fb0-7ff1-492e-8d2b-423e248fb787-catalog-content\") pod \"redhat-operators-z2fm2\" (UID: \"08402fb0-7ff1-492e-8d2b-423e248fb787\") " pod="openshift-marketplace/redhat-operators-z2fm2" Nov 29 08:12:48 crc kubenswrapper[4795]: I1129 08:12:48.095341 4795 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08402fb0-7ff1-492e-8d2b-423e248fb787-utilities\") pod \"redhat-operators-z2fm2\" (UID: \"08402fb0-7ff1-492e-8d2b-423e248fb787\") " pod="openshift-marketplace/redhat-operators-z2fm2" Nov 29 08:12:48 crc kubenswrapper[4795]: I1129 08:12:48.095482 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kqhq\" (UniqueName: \"kubernetes.io/projected/08402fb0-7ff1-492e-8d2b-423e248fb787-kube-api-access-2kqhq\") pod \"redhat-operators-z2fm2\" (UID: \"08402fb0-7ff1-492e-8d2b-423e248fb787\") " pod="openshift-marketplace/redhat-operators-z2fm2" Nov 29 08:12:48 crc kubenswrapper[4795]: I1129 08:12:48.095520 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08402fb0-7ff1-492e-8d2b-423e248fb787-catalog-content\") pod \"redhat-operators-z2fm2\" (UID: \"08402fb0-7ff1-492e-8d2b-423e248fb787\") " pod="openshift-marketplace/redhat-operators-z2fm2" Nov 29 08:12:48 crc kubenswrapper[4795]: I1129 08:12:48.095914 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08402fb0-7ff1-492e-8d2b-423e248fb787-utilities\") pod \"redhat-operators-z2fm2\" (UID: \"08402fb0-7ff1-492e-8d2b-423e248fb787\") " pod="openshift-marketplace/redhat-operators-z2fm2" Nov 29 08:12:48 crc kubenswrapper[4795]: I1129 08:12:48.095986 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08402fb0-7ff1-492e-8d2b-423e248fb787-catalog-content\") pod \"redhat-operators-z2fm2\" (UID: \"08402fb0-7ff1-492e-8d2b-423e248fb787\") " pod="openshift-marketplace/redhat-operators-z2fm2" Nov 29 08:12:48 crc kubenswrapper[4795]: I1129 08:12:48.123252 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kqhq\" 
(UniqueName: \"kubernetes.io/projected/08402fb0-7ff1-492e-8d2b-423e248fb787-kube-api-access-2kqhq\") pod \"redhat-operators-z2fm2\" (UID: \"08402fb0-7ff1-492e-8d2b-423e248fb787\") " pod="openshift-marketplace/redhat-operators-z2fm2" Nov 29 08:12:48 crc kubenswrapper[4795]: I1129 08:12:48.249360 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z2fm2" Nov 29 08:12:48 crc kubenswrapper[4795]: E1129 08:12:48.359376 4795 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="35f22de4a7a9f0afaacfcd28c644998d37499261b4a684b69d858a9a5449bbe9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 29 08:12:48 crc kubenswrapper[4795]: E1129 08:12:48.360812 4795 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="35f22de4a7a9f0afaacfcd28c644998d37499261b4a684b69d858a9a5449bbe9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 29 08:12:48 crc kubenswrapper[4795]: E1129 08:12:48.362155 4795 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="35f22de4a7a9f0afaacfcd28c644998d37499261b4a684b69d858a9a5449bbe9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 29 08:12:48 crc kubenswrapper[4795]: E1129 08:12:48.362209 4795 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-746fb69fd5-n8596" podUID="c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a" containerName="heat-engine" Nov 29 08:12:49 crc 
kubenswrapper[4795]: I1129 08:12:49.595690 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r5c5p" event={"ID":"fd6f9117-b1f3-4533-b3f6-3b614a790521","Type":"ContainerDied","Data":"6441ac793a790599f6cc1a2f3e05e0205a79e0f2543d658d840d2f506de1e064"} Nov 29 08:12:49 crc kubenswrapper[4795]: I1129 08:12:49.596546 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6441ac793a790599f6cc1a2f3e05e0205a79e0f2543d658d840d2f506de1e064" Nov 29 08:12:49 crc kubenswrapper[4795]: I1129 08:12:49.600814 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r5c5p" Nov 29 08:12:49 crc kubenswrapper[4795]: I1129 08:12:49.740734 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd6f9117-b1f3-4533-b3f6-3b614a790521-repo-setup-combined-ca-bundle\") pod \"fd6f9117-b1f3-4533-b3f6-3b614a790521\" (UID: \"fd6f9117-b1f3-4533-b3f6-3b614a790521\") " Nov 29 08:12:49 crc kubenswrapper[4795]: I1129 08:12:49.741357 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fd6f9117-b1f3-4533-b3f6-3b614a790521-ssh-key\") pod \"fd6f9117-b1f3-4533-b3f6-3b614a790521\" (UID: \"fd6f9117-b1f3-4533-b3f6-3b614a790521\") " Nov 29 08:12:49 crc kubenswrapper[4795]: I1129 08:12:49.741620 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mslfb\" (UniqueName: \"kubernetes.io/projected/fd6f9117-b1f3-4533-b3f6-3b614a790521-kube-api-access-mslfb\") pod \"fd6f9117-b1f3-4533-b3f6-3b614a790521\" (UID: \"fd6f9117-b1f3-4533-b3f6-3b614a790521\") " Nov 29 08:12:49 crc kubenswrapper[4795]: I1129 08:12:49.741677 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/fd6f9117-b1f3-4533-b3f6-3b614a790521-inventory\") pod \"fd6f9117-b1f3-4533-b3f6-3b614a790521\" (UID: \"fd6f9117-b1f3-4533-b3f6-3b614a790521\") " Nov 29 08:12:49 crc kubenswrapper[4795]: I1129 08:12:49.751005 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd6f9117-b1f3-4533-b3f6-3b614a790521-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "fd6f9117-b1f3-4533-b3f6-3b614a790521" (UID: "fd6f9117-b1f3-4533-b3f6-3b614a790521"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:12:49 crc kubenswrapper[4795]: I1129 08:12:49.755998 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd6f9117-b1f3-4533-b3f6-3b614a790521-kube-api-access-mslfb" (OuterVolumeSpecName: "kube-api-access-mslfb") pod "fd6f9117-b1f3-4533-b3f6-3b614a790521" (UID: "fd6f9117-b1f3-4533-b3f6-3b614a790521"). InnerVolumeSpecName "kube-api-access-mslfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:12:49 crc kubenswrapper[4795]: I1129 08:12:49.801028 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd6f9117-b1f3-4533-b3f6-3b614a790521-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fd6f9117-b1f3-4533-b3f6-3b614a790521" (UID: "fd6f9117-b1f3-4533-b3f6-3b614a790521"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:12:49 crc kubenswrapper[4795]: I1129 08:12:49.812652 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd6f9117-b1f3-4533-b3f6-3b614a790521-inventory" (OuterVolumeSpecName: "inventory") pod "fd6f9117-b1f3-4533-b3f6-3b614a790521" (UID: "fd6f9117-b1f3-4533-b3f6-3b614a790521"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:12:49 crc kubenswrapper[4795]: I1129 08:12:49.845411 4795 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fd6f9117-b1f3-4533-b3f6-3b614a790521-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:49 crc kubenswrapper[4795]: I1129 08:12:49.845446 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mslfb\" (UniqueName: \"kubernetes.io/projected/fd6f9117-b1f3-4533-b3f6-3b614a790521-kube-api-access-mslfb\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:49 crc kubenswrapper[4795]: I1129 08:12:49.845458 4795 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd6f9117-b1f3-4533-b3f6-3b614a790521-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:49 crc kubenswrapper[4795]: I1129 08:12:49.845468 4795 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd6f9117-b1f3-4533-b3f6-3b614a790521-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:50 crc kubenswrapper[4795]: I1129 08:12:50.044762 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z2fm2"] Nov 29 08:12:50 crc kubenswrapper[4795]: W1129 08:12:50.046256 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08402fb0_7ff1_492e_8d2b_423e248fb787.slice/crio-fe28fd2e567b2b93b287e746777ca101774c87b144687b283908d78194265706 WatchSource:0}: Error finding container fe28fd2e567b2b93b287e746777ca101774c87b144687b283908d78194265706: Status 404 returned error can't find the container with id fe28fd2e567b2b93b287e746777ca101774c87b144687b283908d78194265706 Nov 29 08:12:50 crc kubenswrapper[4795]: I1129 08:12:50.118174 4795 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-txz6d"] Nov 29 08:12:50 crc kubenswrapper[4795]: E1129 08:12:50.119007 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd6f9117-b1f3-4533-b3f6-3b614a790521" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 29 08:12:50 crc kubenswrapper[4795]: I1129 08:12:50.119021 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd6f9117-b1f3-4533-b3f6-3b614a790521" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 29 08:12:50 crc kubenswrapper[4795]: I1129 08:12:50.119325 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd6f9117-b1f3-4533-b3f6-3b614a790521" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 29 08:12:50 crc kubenswrapper[4795]: I1129 08:12:50.121260 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-txz6d" Nov 29 08:12:50 crc kubenswrapper[4795]: I1129 08:12:50.154582 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-txz6d"] Nov 29 08:12:50 crc kubenswrapper[4795]: I1129 08:12:50.256137 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e73c53a-c858-4ba0-8a00-6322487811be-utilities\") pod \"community-operators-txz6d\" (UID: \"9e73c53a-c858-4ba0-8a00-6322487811be\") " pod="openshift-marketplace/community-operators-txz6d" Nov 29 08:12:50 crc kubenswrapper[4795]: I1129 08:12:50.256484 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e73c53a-c858-4ba0-8a00-6322487811be-catalog-content\") pod \"community-operators-txz6d\" (UID: \"9e73c53a-c858-4ba0-8a00-6322487811be\") " pod="openshift-marketplace/community-operators-txz6d" Nov 29 08:12:50 crc kubenswrapper[4795]: I1129 08:12:50.256794 
4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhxf7\" (UniqueName: \"kubernetes.io/projected/9e73c53a-c858-4ba0-8a00-6322487811be-kube-api-access-zhxf7\") pod \"community-operators-txz6d\" (UID: \"9e73c53a-c858-4ba0-8a00-6322487811be\") " pod="openshift-marketplace/community-operators-txz6d" Nov 29 08:12:50 crc kubenswrapper[4795]: I1129 08:12:50.361714 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e73c53a-c858-4ba0-8a00-6322487811be-utilities\") pod \"community-operators-txz6d\" (UID: \"9e73c53a-c858-4ba0-8a00-6322487811be\") " pod="openshift-marketplace/community-operators-txz6d" Nov 29 08:12:50 crc kubenswrapper[4795]: I1129 08:12:50.361911 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e73c53a-c858-4ba0-8a00-6322487811be-catalog-content\") pod \"community-operators-txz6d\" (UID: \"9e73c53a-c858-4ba0-8a00-6322487811be\") " pod="openshift-marketplace/community-operators-txz6d" Nov 29 08:12:50 crc kubenswrapper[4795]: I1129 08:12:50.362071 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhxf7\" (UniqueName: \"kubernetes.io/projected/9e73c53a-c858-4ba0-8a00-6322487811be-kube-api-access-zhxf7\") pod \"community-operators-txz6d\" (UID: \"9e73c53a-c858-4ba0-8a00-6322487811be\") " pod="openshift-marketplace/community-operators-txz6d" Nov 29 08:12:50 crc kubenswrapper[4795]: I1129 08:12:50.362244 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e73c53a-c858-4ba0-8a00-6322487811be-utilities\") pod \"community-operators-txz6d\" (UID: \"9e73c53a-c858-4ba0-8a00-6322487811be\") " pod="openshift-marketplace/community-operators-txz6d" Nov 29 08:12:50 crc kubenswrapper[4795]: I1129 08:12:50.362679 
4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e73c53a-c858-4ba0-8a00-6322487811be-catalog-content\") pod \"community-operators-txz6d\" (UID: \"9e73c53a-c858-4ba0-8a00-6322487811be\") " pod="openshift-marketplace/community-operators-txz6d" Nov 29 08:12:50 crc kubenswrapper[4795]: I1129 08:12:50.389999 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhxf7\" (UniqueName: \"kubernetes.io/projected/9e73c53a-c858-4ba0-8a00-6322487811be-kube-api-access-zhxf7\") pod \"community-operators-txz6d\" (UID: \"9e73c53a-c858-4ba0-8a00-6322487811be\") " pod="openshift-marketplace/community-operators-txz6d" Nov 29 08:12:50 crc kubenswrapper[4795]: I1129 08:12:50.456342 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-txz6d" Nov 29 08:12:50 crc kubenswrapper[4795]: E1129 08:12:50.621430 4795 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08402fb0_7ff1_492e_8d2b_423e248fb787.slice/crio-conmon-5ad331779aeeb15b768cddfa0d09786aad1066327b956aaf08355788d7e5f2e7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd6f9117_b1f3_4533_b3f6_3b614a790521.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd6f9117_b1f3_4533_b3f6_3b614a790521.slice/crio-6441ac793a790599f6cc1a2f3e05e0205a79e0f2543d658d840d2f506de1e064\": RecentStats: unable to find data in memory cache]" Nov 29 08:12:50 crc kubenswrapper[4795]: I1129 08:12:50.651962 4795 generic.go:334] "Generic (PLEG): container finished" podID="08402fb0-7ff1-492e-8d2b-423e248fb787" containerID="5ad331779aeeb15b768cddfa0d09786aad1066327b956aaf08355788d7e5f2e7" exitCode=0 Nov 
29 08:12:50 crc kubenswrapper[4795]: I1129 08:12:50.652222 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z2fm2" event={"ID":"08402fb0-7ff1-492e-8d2b-423e248fb787","Type":"ContainerDied","Data":"5ad331779aeeb15b768cddfa0d09786aad1066327b956aaf08355788d7e5f2e7"} Nov 29 08:12:50 crc kubenswrapper[4795]: I1129 08:12:50.652292 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z2fm2" event={"ID":"08402fb0-7ff1-492e-8d2b-423e248fb787","Type":"ContainerStarted","Data":"fe28fd2e567b2b93b287e746777ca101774c87b144687b283908d78194265706"} Nov 29 08:12:50 crc kubenswrapper[4795]: I1129 08:12:50.676544 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r5c5p" Nov 29 08:12:50 crc kubenswrapper[4795]: I1129 08:12:50.678245 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-6wfqc" event={"ID":"be948c99-208e-4f5c-8ab7-5971d1efb06e","Type":"ContainerStarted","Data":"d71c348517e9f3873f2cca587e20412d2726d5ffb0522eae784a5d24050f3577"} Nov 29 08:12:50 crc kubenswrapper[4795]: I1129 08:12:50.771021 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-vsfsq"] Nov 29 08:12:50 crc kubenswrapper[4795]: I1129 08:12:50.772922 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vsfsq" Nov 29 08:12:50 crc kubenswrapper[4795]: I1129 08:12:50.778050 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 08:12:50 crc kubenswrapper[4795]: I1129 08:12:50.778485 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 08:12:50 crc kubenswrapper[4795]: I1129 08:12:50.778741 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 08:12:50 crc kubenswrapper[4795]: I1129 08:12:50.778969 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nfhsg" Nov 29 08:12:50 crc kubenswrapper[4795]: I1129 08:12:50.783662 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-6wfqc" podStartSLOduration=2.982025572 podStartE2EDuration="8.783638282s" podCreationTimestamp="2025-11-29 08:12:42 +0000 UTC" firstStartedPulling="2025-11-29 08:12:43.645297521 +0000 UTC m=+2009.620873311" lastFinishedPulling="2025-11-29 08:12:49.446910231 +0000 UTC m=+2015.422486021" observedRunningTime="2025-11-29 08:12:50.730926218 +0000 UTC m=+2016.706502008" watchObservedRunningTime="2025-11-29 08:12:50.783638282 +0000 UTC m=+2016.759214082" Nov 29 08:12:50 crc kubenswrapper[4795]: I1129 08:12:50.818473 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-vsfsq"] Nov 29 08:12:50 crc kubenswrapper[4795]: I1129 08:12:50.883071 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sbdt\" (UniqueName: \"kubernetes.io/projected/52ed8cc8-9050-49af-ad5b-b48bc27eeb12-kube-api-access-9sbdt\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vsfsq\" (UID: 
\"52ed8cc8-9050-49af-ad5b-b48bc27eeb12\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vsfsq" Nov 29 08:12:50 crc kubenswrapper[4795]: I1129 08:12:50.883134 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/52ed8cc8-9050-49af-ad5b-b48bc27eeb12-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vsfsq\" (UID: \"52ed8cc8-9050-49af-ad5b-b48bc27eeb12\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vsfsq" Nov 29 08:12:50 crc kubenswrapper[4795]: I1129 08:12:50.883361 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/52ed8cc8-9050-49af-ad5b-b48bc27eeb12-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vsfsq\" (UID: \"52ed8cc8-9050-49af-ad5b-b48bc27eeb12\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vsfsq" Nov 29 08:12:50 crc kubenswrapper[4795]: I1129 08:12:50.985645 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/52ed8cc8-9050-49af-ad5b-b48bc27eeb12-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vsfsq\" (UID: \"52ed8cc8-9050-49af-ad5b-b48bc27eeb12\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vsfsq" Nov 29 08:12:50 crc kubenswrapper[4795]: I1129 08:12:50.986522 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sbdt\" (UniqueName: \"kubernetes.io/projected/52ed8cc8-9050-49af-ad5b-b48bc27eeb12-kube-api-access-9sbdt\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vsfsq\" (UID: \"52ed8cc8-9050-49af-ad5b-b48bc27eeb12\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vsfsq" Nov 29 08:12:50 crc kubenswrapper[4795]: I1129 08:12:50.987018 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/52ed8cc8-9050-49af-ad5b-b48bc27eeb12-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vsfsq\" (UID: \"52ed8cc8-9050-49af-ad5b-b48bc27eeb12\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vsfsq" Nov 29 08:12:51 crc kubenswrapper[4795]: I1129 08:12:50.996253 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/52ed8cc8-9050-49af-ad5b-b48bc27eeb12-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vsfsq\" (UID: \"52ed8cc8-9050-49af-ad5b-b48bc27eeb12\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vsfsq" Nov 29 08:12:51 crc kubenswrapper[4795]: I1129 08:12:51.005348 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/52ed8cc8-9050-49af-ad5b-b48bc27eeb12-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vsfsq\" (UID: \"52ed8cc8-9050-49af-ad5b-b48bc27eeb12\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vsfsq" Nov 29 08:12:51 crc kubenswrapper[4795]: I1129 08:12:51.005728 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sbdt\" (UniqueName: \"kubernetes.io/projected/52ed8cc8-9050-49af-ad5b-b48bc27eeb12-kube-api-access-9sbdt\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vsfsq\" (UID: \"52ed8cc8-9050-49af-ad5b-b48bc27eeb12\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vsfsq" Nov 29 08:12:51 crc kubenswrapper[4795]: I1129 08:12:51.066351 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-txz6d"] Nov 29 08:12:51 crc kubenswrapper[4795]: I1129 08:12:51.110240 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vsfsq" Nov 29 08:12:51 crc kubenswrapper[4795]: I1129 08:12:51.279547 4795 scope.go:117] "RemoveContainer" containerID="cb788e622da5bdf804dcfa0c959dfb0158a848bc427ed90337dfd530ea78ea5d" Nov 29 08:12:51 crc kubenswrapper[4795]: E1129 08:12:51.281041 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:12:51 crc kubenswrapper[4795]: I1129 08:12:51.699514 4795 generic.go:334] "Generic (PLEG): container finished" podID="9e73c53a-c858-4ba0-8a00-6322487811be" containerID="d183823c144c774d04d6d8840a404c4367b2c9e893c171f9089c1cf2be5423ee" exitCode=0 Nov 29 08:12:51 crc kubenswrapper[4795]: I1129 08:12:51.701395 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-txz6d" event={"ID":"9e73c53a-c858-4ba0-8a00-6322487811be","Type":"ContainerDied","Data":"d183823c144c774d04d6d8840a404c4367b2c9e893c171f9089c1cf2be5423ee"} Nov 29 08:12:51 crc kubenswrapper[4795]: I1129 08:12:51.701458 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-txz6d" event={"ID":"9e73c53a-c858-4ba0-8a00-6322487811be","Type":"ContainerStarted","Data":"a3563331bcceebe537fbf46862c356c3dae431bf1640de225a1c1f0c20b2bc7b"} Nov 29 08:12:51 crc kubenswrapper[4795]: I1129 08:12:51.815087 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-vsfsq"] Nov 29 08:12:52 crc kubenswrapper[4795]: I1129 08:12:52.719537 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-746fb69fd5-n8596" Nov 29 08:12:52 crc kubenswrapper[4795]: I1129 08:12:52.721149 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z2fm2" event={"ID":"08402fb0-7ff1-492e-8d2b-423e248fb787","Type":"ContainerStarted","Data":"c9d7519b038636f4bec26463cb6672dd649a61335eb303ca3d6fb18470bef1d7"} Nov 29 08:12:52 crc kubenswrapper[4795]: I1129 08:12:52.724000 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vsfsq" event={"ID":"52ed8cc8-9050-49af-ad5b-b48bc27eeb12","Type":"ContainerStarted","Data":"28e5ab72a7447eff98efe3db99cfbb64a88d2c6d6077f13e520a5ff27a5632d7"} Nov 29 08:12:52 crc kubenswrapper[4795]: I1129 08:12:52.741859 4795 generic.go:334] "Generic (PLEG): container finished" podID="c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a" containerID="35f22de4a7a9f0afaacfcd28c644998d37499261b4a684b69d858a9a5449bbe9" exitCode=0 Nov 29 08:12:52 crc kubenswrapper[4795]: I1129 08:12:52.741911 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-746fb69fd5-n8596" event={"ID":"c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a","Type":"ContainerDied","Data":"35f22de4a7a9f0afaacfcd28c644998d37499261b4a684b69d858a9a5449bbe9"} Nov 29 08:12:52 crc kubenswrapper[4795]: I1129 08:12:52.741945 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-746fb69fd5-n8596" event={"ID":"c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a","Type":"ContainerDied","Data":"40496373a728dce126728d64b96e580ef3c93fe0f365db9dfc7720949b17b47c"} Nov 29 08:12:52 crc kubenswrapper[4795]: I1129 08:12:52.741969 4795 scope.go:117] "RemoveContainer" containerID="35f22de4a7a9f0afaacfcd28c644998d37499261b4a684b69d858a9a5449bbe9" Nov 29 08:12:52 crc kubenswrapper[4795]: I1129 08:12:52.742165 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-746fb69fd5-n8596" Nov 29 08:12:52 crc kubenswrapper[4795]: I1129 08:12:52.750378 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a-combined-ca-bundle\") pod \"c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a\" (UID: \"c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a\") " Nov 29 08:12:52 crc kubenswrapper[4795]: I1129 08:12:52.750631 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dn5f\" (UniqueName: \"kubernetes.io/projected/c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a-kube-api-access-4dn5f\") pod \"c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a\" (UID: \"c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a\") " Nov 29 08:12:52 crc kubenswrapper[4795]: I1129 08:12:52.750675 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a-config-data\") pod \"c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a\" (UID: \"c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a\") " Nov 29 08:12:52 crc kubenswrapper[4795]: I1129 08:12:52.750783 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a-config-data-custom\") pod \"c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a\" (UID: \"c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a\") " Nov 29 08:12:52 crc kubenswrapper[4795]: I1129 08:12:52.757860 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a" (UID: "c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:12:52 crc kubenswrapper[4795]: I1129 08:12:52.774806 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a-kube-api-access-4dn5f" (OuterVolumeSpecName: "kube-api-access-4dn5f") pod "c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a" (UID: "c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a"). InnerVolumeSpecName "kube-api-access-4dn5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:12:52 crc kubenswrapper[4795]: I1129 08:12:52.805694 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a" (UID: "c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:12:52 crc kubenswrapper[4795]: I1129 08:12:52.860284 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dn5f\" (UniqueName: \"kubernetes.io/projected/c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a-kube-api-access-4dn5f\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:52 crc kubenswrapper[4795]: I1129 08:12:52.860328 4795 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:52 crc kubenswrapper[4795]: I1129 08:12:52.860337 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:52 crc kubenswrapper[4795]: I1129 08:12:52.872163 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a-config-data" (OuterVolumeSpecName: "config-data") pod "c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a" (UID: "c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:12:52 crc kubenswrapper[4795]: I1129 08:12:52.948052 4795 scope.go:117] "RemoveContainer" containerID="35f22de4a7a9f0afaacfcd28c644998d37499261b4a684b69d858a9a5449bbe9" Nov 29 08:12:52 crc kubenswrapper[4795]: E1129 08:12:52.948958 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35f22de4a7a9f0afaacfcd28c644998d37499261b4a684b69d858a9a5449bbe9\": container with ID starting with 35f22de4a7a9f0afaacfcd28c644998d37499261b4a684b69d858a9a5449bbe9 not found: ID does not exist" containerID="35f22de4a7a9f0afaacfcd28c644998d37499261b4a684b69d858a9a5449bbe9" Nov 29 08:12:52 crc kubenswrapper[4795]: I1129 08:12:52.949001 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35f22de4a7a9f0afaacfcd28c644998d37499261b4a684b69d858a9a5449bbe9"} err="failed to get container status \"35f22de4a7a9f0afaacfcd28c644998d37499261b4a684b69d858a9a5449bbe9\": rpc error: code = NotFound desc = could not find container \"35f22de4a7a9f0afaacfcd28c644998d37499261b4a684b69d858a9a5449bbe9\": container with ID starting with 35f22de4a7a9f0afaacfcd28c644998d37499261b4a684b69d858a9a5449bbe9 not found: ID does not exist" Nov 29 08:12:52 crc kubenswrapper[4795]: I1129 08:12:52.970210 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:53 crc kubenswrapper[4795]: I1129 08:12:53.114554 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-746fb69fd5-n8596"] Nov 29 08:12:53 crc kubenswrapper[4795]: I1129 
08:12:53.128502 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-746fb69fd5-n8596"] Nov 29 08:12:53 crc kubenswrapper[4795]: I1129 08:12:53.758971 4795 generic.go:334] "Generic (PLEG): container finished" podID="be948c99-208e-4f5c-8ab7-5971d1efb06e" containerID="d71c348517e9f3873f2cca587e20412d2726d5ffb0522eae784a5d24050f3577" exitCode=0 Nov 29 08:12:53 crc kubenswrapper[4795]: I1129 08:12:53.759035 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-6wfqc" event={"ID":"be948c99-208e-4f5c-8ab7-5971d1efb06e","Type":"ContainerDied","Data":"d71c348517e9f3873f2cca587e20412d2726d5ffb0522eae784a5d24050f3577"} Nov 29 08:12:53 crc kubenswrapper[4795]: I1129 08:12:53.762910 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vsfsq" event={"ID":"52ed8cc8-9050-49af-ad5b-b48bc27eeb12","Type":"ContainerStarted","Data":"fcdbf1d92863363fe1e2cb6cbe950b2454ace7aefd75add9fa10200419570cee"} Nov 29 08:12:53 crc kubenswrapper[4795]: I1129 08:12:53.802354 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vsfsq" podStartSLOduration=3.253058023 podStartE2EDuration="3.802332718s" podCreationTimestamp="2025-11-29 08:12:50 +0000 UTC" firstStartedPulling="2025-11-29 08:12:51.833180945 +0000 UTC m=+2017.808756735" lastFinishedPulling="2025-11-29 08:12:52.38245564 +0000 UTC m=+2018.358031430" observedRunningTime="2025-11-29 08:12:53.79783612 +0000 UTC m=+2019.773411930" watchObservedRunningTime="2025-11-29 08:12:53.802332718 +0000 UTC m=+2019.777908508" Nov 29 08:12:54 crc kubenswrapper[4795]: I1129 08:12:54.294953 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a" path="/var/lib/kubelet/pods/c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a/volumes" Nov 29 08:12:54 crc kubenswrapper[4795]: I1129 08:12:54.568984 4795 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 29 08:12:54 crc kubenswrapper[4795]: I1129 08:12:54.626927 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 29 08:12:57 crc kubenswrapper[4795]: I1129 08:12:57.850340 4795 generic.go:334] "Generic (PLEG): container finished" podID="08402fb0-7ff1-492e-8d2b-423e248fb787" containerID="c9d7519b038636f4bec26463cb6672dd649a61335eb303ca3d6fb18470bef1d7" exitCode=0 Nov 29 08:12:57 crc kubenswrapper[4795]: I1129 08:12:57.850411 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z2fm2" event={"ID":"08402fb0-7ff1-492e-8d2b-423e248fb787","Type":"ContainerDied","Data":"c9d7519b038636f4bec26463cb6672dd649a61335eb303ca3d6fb18470bef1d7"} Nov 29 08:12:58 crc kubenswrapper[4795]: I1129 08:12:58.335417 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-6wfqc" Nov 29 08:12:58 crc kubenswrapper[4795]: I1129 08:12:58.462188 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be948c99-208e-4f5c-8ab7-5971d1efb06e-scripts\") pod \"be948c99-208e-4f5c-8ab7-5971d1efb06e\" (UID: \"be948c99-208e-4f5c-8ab7-5971d1efb06e\") " Nov 29 08:12:58 crc kubenswrapper[4795]: I1129 08:12:58.462561 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be948c99-208e-4f5c-8ab7-5971d1efb06e-combined-ca-bundle\") pod \"be948c99-208e-4f5c-8ab7-5971d1efb06e\" (UID: \"be948c99-208e-4f5c-8ab7-5971d1efb06e\") " Nov 29 08:12:58 crc kubenswrapper[4795]: I1129 08:12:58.462668 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be948c99-208e-4f5c-8ab7-5971d1efb06e-config-data\") pod \"be948c99-208e-4f5c-8ab7-5971d1efb06e\" 
(UID: \"be948c99-208e-4f5c-8ab7-5971d1efb06e\") " Nov 29 08:12:58 crc kubenswrapper[4795]: I1129 08:12:58.462715 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qn6h\" (UniqueName: \"kubernetes.io/projected/be948c99-208e-4f5c-8ab7-5971d1efb06e-kube-api-access-9qn6h\") pod \"be948c99-208e-4f5c-8ab7-5971d1efb06e\" (UID: \"be948c99-208e-4f5c-8ab7-5971d1efb06e\") " Nov 29 08:12:58 crc kubenswrapper[4795]: I1129 08:12:58.469413 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be948c99-208e-4f5c-8ab7-5971d1efb06e-kube-api-access-9qn6h" (OuterVolumeSpecName: "kube-api-access-9qn6h") pod "be948c99-208e-4f5c-8ab7-5971d1efb06e" (UID: "be948c99-208e-4f5c-8ab7-5971d1efb06e"). InnerVolumeSpecName "kube-api-access-9qn6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:12:58 crc kubenswrapper[4795]: I1129 08:12:58.469904 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be948c99-208e-4f5c-8ab7-5971d1efb06e-scripts" (OuterVolumeSpecName: "scripts") pod "be948c99-208e-4f5c-8ab7-5971d1efb06e" (UID: "be948c99-208e-4f5c-8ab7-5971d1efb06e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:12:58 crc kubenswrapper[4795]: I1129 08:12:58.516176 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be948c99-208e-4f5c-8ab7-5971d1efb06e-config-data" (OuterVolumeSpecName: "config-data") pod "be948c99-208e-4f5c-8ab7-5971d1efb06e" (UID: "be948c99-208e-4f5c-8ab7-5971d1efb06e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:12:58 crc kubenswrapper[4795]: I1129 08:12:58.539694 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be948c99-208e-4f5c-8ab7-5971d1efb06e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be948c99-208e-4f5c-8ab7-5971d1efb06e" (UID: "be948c99-208e-4f5c-8ab7-5971d1efb06e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:12:58 crc kubenswrapper[4795]: I1129 08:12:58.573626 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be948c99-208e-4f5c-8ab7-5971d1efb06e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:58 crc kubenswrapper[4795]: I1129 08:12:58.573663 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be948c99-208e-4f5c-8ab7-5971d1efb06e-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:58 crc kubenswrapper[4795]: I1129 08:12:58.573672 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qn6h\" (UniqueName: \"kubernetes.io/projected/be948c99-208e-4f5c-8ab7-5971d1efb06e-kube-api-access-9qn6h\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:58 crc kubenswrapper[4795]: I1129 08:12:58.573684 4795 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be948c99-208e-4f5c-8ab7-5971d1efb06e-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:58 crc kubenswrapper[4795]: I1129 08:12:58.876079 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-txz6d" event={"ID":"9e73c53a-c858-4ba0-8a00-6322487811be","Type":"ContainerStarted","Data":"3b0bd8e6dd2cee091969f128e0711223727c183afed985876fd17f32f38a9e33"} Nov 29 08:12:58 crc kubenswrapper[4795]: I1129 08:12:58.888702 4795 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/aodh-db-sync-6wfqc" event={"ID":"be948c99-208e-4f5c-8ab7-5971d1efb06e","Type":"ContainerDied","Data":"239be3fc1f8f8a8ffdf14609ade64aacda62a32059e6be9d1328dc93ca2eecb9"} Nov 29 08:12:58 crc kubenswrapper[4795]: I1129 08:12:58.888761 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="239be3fc1f8f8a8ffdf14609ade64aacda62a32059e6be9d1328dc93ca2eecb9" Nov 29 08:12:58 crc kubenswrapper[4795]: I1129 08:12:58.888840 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-6wfqc" Nov 29 08:12:59 crc kubenswrapper[4795]: I1129 08:12:59.902108 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z2fm2" event={"ID":"08402fb0-7ff1-492e-8d2b-423e248fb787","Type":"ContainerStarted","Data":"dae6986c228a930155d7ee99141a0899f2b309f8c762227141215f897de9d002"} Nov 29 08:12:59 crc kubenswrapper[4795]: I1129 08:12:59.932061 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z2fm2" podStartSLOduration=4.850993489 podStartE2EDuration="12.932039906s" podCreationTimestamp="2025-11-29 08:12:47 +0000 UTC" firstStartedPulling="2025-11-29 08:12:50.659067632 +0000 UTC m=+2016.634643422" lastFinishedPulling="2025-11-29 08:12:58.740114049 +0000 UTC m=+2024.715689839" observedRunningTime="2025-11-29 08:12:59.925691346 +0000 UTC m=+2025.901267136" watchObservedRunningTime="2025-11-29 08:12:59.932039906 +0000 UTC m=+2025.907615696" Nov 29 08:13:00 crc kubenswrapper[4795]: I1129 08:13:00.917269 4795 generic.go:334] "Generic (PLEG): container finished" podID="52ed8cc8-9050-49af-ad5b-b48bc27eeb12" containerID="fcdbf1d92863363fe1e2cb6cbe950b2454ace7aefd75add9fa10200419570cee" exitCode=0 Nov 29 08:13:00 crc kubenswrapper[4795]: I1129 08:13:00.917356 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vsfsq" 
event={"ID":"52ed8cc8-9050-49af-ad5b-b48bc27eeb12","Type":"ContainerDied","Data":"fcdbf1d92863363fe1e2cb6cbe950b2454ace7aefd75add9fa10200419570cee"} Nov 29 08:13:00 crc kubenswrapper[4795]: I1129 08:13:00.921620 4795 generic.go:334] "Generic (PLEG): container finished" podID="9e73c53a-c858-4ba0-8a00-6322487811be" containerID="3b0bd8e6dd2cee091969f128e0711223727c183afed985876fd17f32f38a9e33" exitCode=0 Nov 29 08:13:00 crc kubenswrapper[4795]: I1129 08:13:00.921678 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-txz6d" event={"ID":"9e73c53a-c858-4ba0-8a00-6322487811be","Type":"ContainerDied","Data":"3b0bd8e6dd2cee091969f128e0711223727c183afed985876fd17f32f38a9e33"} Nov 29 08:13:01 crc kubenswrapper[4795]: I1129 08:13:01.934378 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-txz6d" event={"ID":"9e73c53a-c858-4ba0-8a00-6322487811be","Type":"ContainerStarted","Data":"c60305bfc59b7b8ccbfc588fecad81c6c3f219bb43e58c7cf079d086ddb2d400"} Nov 29 08:13:01 crc kubenswrapper[4795]: I1129 08:13:01.962407 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-txz6d" podStartSLOduration=2.3552484 podStartE2EDuration="11.962385544s" podCreationTimestamp="2025-11-29 08:12:50 +0000 UTC" firstStartedPulling="2025-11-29 08:12:51.712070353 +0000 UTC m=+2017.687646143" lastFinishedPulling="2025-11-29 08:13:01.319207497 +0000 UTC m=+2027.294783287" observedRunningTime="2025-11-29 08:13:01.956982431 +0000 UTC m=+2027.932558221" watchObservedRunningTime="2025-11-29 08:13:01.962385544 +0000 UTC m=+2027.937961334" Nov 29 08:13:02 crc kubenswrapper[4795]: I1129 08:13:02.472204 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vsfsq" Nov 29 08:13:02 crc kubenswrapper[4795]: I1129 08:13:02.572205 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/52ed8cc8-9050-49af-ad5b-b48bc27eeb12-ssh-key\") pod \"52ed8cc8-9050-49af-ad5b-b48bc27eeb12\" (UID: \"52ed8cc8-9050-49af-ad5b-b48bc27eeb12\") " Nov 29 08:13:02 crc kubenswrapper[4795]: I1129 08:13:02.572293 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/52ed8cc8-9050-49af-ad5b-b48bc27eeb12-inventory\") pod \"52ed8cc8-9050-49af-ad5b-b48bc27eeb12\" (UID: \"52ed8cc8-9050-49af-ad5b-b48bc27eeb12\") " Nov 29 08:13:02 crc kubenswrapper[4795]: I1129 08:13:02.572384 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sbdt\" (UniqueName: \"kubernetes.io/projected/52ed8cc8-9050-49af-ad5b-b48bc27eeb12-kube-api-access-9sbdt\") pod \"52ed8cc8-9050-49af-ad5b-b48bc27eeb12\" (UID: \"52ed8cc8-9050-49af-ad5b-b48bc27eeb12\") " Nov 29 08:13:02 crc kubenswrapper[4795]: I1129 08:13:02.579652 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52ed8cc8-9050-49af-ad5b-b48bc27eeb12-kube-api-access-9sbdt" (OuterVolumeSpecName: "kube-api-access-9sbdt") pod "52ed8cc8-9050-49af-ad5b-b48bc27eeb12" (UID: "52ed8cc8-9050-49af-ad5b-b48bc27eeb12"). InnerVolumeSpecName "kube-api-access-9sbdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:13:02 crc kubenswrapper[4795]: I1129 08:13:02.618782 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52ed8cc8-9050-49af-ad5b-b48bc27eeb12-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "52ed8cc8-9050-49af-ad5b-b48bc27eeb12" (UID: "52ed8cc8-9050-49af-ad5b-b48bc27eeb12"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:13:02 crc kubenswrapper[4795]: I1129 08:13:02.628152 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52ed8cc8-9050-49af-ad5b-b48bc27eeb12-inventory" (OuterVolumeSpecName: "inventory") pod "52ed8cc8-9050-49af-ad5b-b48bc27eeb12" (UID: "52ed8cc8-9050-49af-ad5b-b48bc27eeb12"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:13:02 crc kubenswrapper[4795]: I1129 08:13:02.674916 4795 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/52ed8cc8-9050-49af-ad5b-b48bc27eeb12-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 08:13:02 crc kubenswrapper[4795]: I1129 08:13:02.675144 4795 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/52ed8cc8-9050-49af-ad5b-b48bc27eeb12-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 08:13:02 crc kubenswrapper[4795]: I1129 08:13:02.675265 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sbdt\" (UniqueName: \"kubernetes.io/projected/52ed8cc8-9050-49af-ad5b-b48bc27eeb12-kube-api-access-9sbdt\") on node \"crc\" DevicePath \"\"" Nov 29 08:13:02 crc kubenswrapper[4795]: I1129 08:13:02.786931 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 29 08:13:02 crc kubenswrapper[4795]: I1129 08:13:02.787403 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="72d0f4d2-d953-4cfa-a24e-221e2cf4e994" containerName="aodh-api" containerID="cri-o://ee97bc8feac99071eb68a042602a82d308a7e8e5198684037ba2618866cbcb5d" gracePeriod=30 Nov 29 08:13:02 crc kubenswrapper[4795]: I1129 08:13:02.787493 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="72d0f4d2-d953-4cfa-a24e-221e2cf4e994" containerName="aodh-listener" 
containerID="cri-o://60eef7bffab2cda952c51080277affae2d2bfddacb15e17b60e8d551203f9d14" gracePeriod=30 Nov 29 08:13:02 crc kubenswrapper[4795]: I1129 08:13:02.787579 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="72d0f4d2-d953-4cfa-a24e-221e2cf4e994" containerName="aodh-notifier" containerID="cri-o://5da0b91ad20b347b423d990141652916306ff6fda9c2ae32b780ef3cff470119" gracePeriod=30 Nov 29 08:13:02 crc kubenswrapper[4795]: I1129 08:13:02.787680 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="72d0f4d2-d953-4cfa-a24e-221e2cf4e994" containerName="aodh-evaluator" containerID="cri-o://cafc02ffd2e71c98c39616ad76cdb228f7f2729d66f07556ce92fc337917f007" gracePeriod=30 Nov 29 08:13:02 crc kubenswrapper[4795]: I1129 08:13:02.947220 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vsfsq" event={"ID":"52ed8cc8-9050-49af-ad5b-b48bc27eeb12","Type":"ContainerDied","Data":"28e5ab72a7447eff98efe3db99cfbb64a88d2c6d6077f13e520a5ff27a5632d7"} Nov 29 08:13:02 crc kubenswrapper[4795]: I1129 08:13:02.947262 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28e5ab72a7447eff98efe3db99cfbb64a88d2c6d6077f13e520a5ff27a5632d7" Nov 29 08:13:02 crc kubenswrapper[4795]: I1129 08:13:02.947469 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vsfsq" Nov 29 08:13:03 crc kubenswrapper[4795]: I1129 08:13:03.037672 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-glkt9"] Nov 29 08:13:03 crc kubenswrapper[4795]: E1129 08:13:03.038223 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be948c99-208e-4f5c-8ab7-5971d1efb06e" containerName="aodh-db-sync" Nov 29 08:13:03 crc kubenswrapper[4795]: I1129 08:13:03.038240 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="be948c99-208e-4f5c-8ab7-5971d1efb06e" containerName="aodh-db-sync" Nov 29 08:13:03 crc kubenswrapper[4795]: E1129 08:13:03.038266 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a" containerName="heat-engine" Nov 29 08:13:03 crc kubenswrapper[4795]: I1129 08:13:03.038274 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a" containerName="heat-engine" Nov 29 08:13:03 crc kubenswrapper[4795]: E1129 08:13:03.038284 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52ed8cc8-9050-49af-ad5b-b48bc27eeb12" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 29 08:13:03 crc kubenswrapper[4795]: I1129 08:13:03.038291 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="52ed8cc8-9050-49af-ad5b-b48bc27eeb12" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 29 08:13:03 crc kubenswrapper[4795]: I1129 08:13:03.038503 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="52ed8cc8-9050-49af-ad5b-b48bc27eeb12" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 29 08:13:03 crc kubenswrapper[4795]: I1129 08:13:03.038538 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="be948c99-208e-4f5c-8ab7-5971d1efb06e" containerName="aodh-db-sync" Nov 29 08:13:03 crc kubenswrapper[4795]: I1129 08:13:03.038550 
4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0aa8b2b-26ef-48f3-9a24-da2a5b2c9d3a" containerName="heat-engine" Nov 29 08:13:03 crc kubenswrapper[4795]: I1129 08:13:03.039374 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-glkt9" Nov 29 08:13:03 crc kubenswrapper[4795]: I1129 08:13:03.043942 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 08:13:03 crc kubenswrapper[4795]: I1129 08:13:03.044131 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 08:13:03 crc kubenswrapper[4795]: I1129 08:13:03.044333 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nfhsg" Nov 29 08:13:03 crc kubenswrapper[4795]: I1129 08:13:03.044471 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 08:13:03 crc kubenswrapper[4795]: I1129 08:13:03.075628 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-glkt9"] Nov 29 08:13:03 crc kubenswrapper[4795]: I1129 08:13:03.196064 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db55adf-c067-44de-ad20-4b8a138e2576-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-glkt9\" (UID: \"5db55adf-c067-44de-ad20-4b8a138e2576\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-glkt9" Nov 29 08:13:03 crc kubenswrapper[4795]: I1129 08:13:03.196297 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5db55adf-c067-44de-ad20-4b8a138e2576-ssh-key\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-glkt9\" (UID: \"5db55adf-c067-44de-ad20-4b8a138e2576\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-glkt9" Nov 29 08:13:03 crc kubenswrapper[4795]: I1129 08:13:03.196336 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56w9f\" (UniqueName: \"kubernetes.io/projected/5db55adf-c067-44de-ad20-4b8a138e2576-kube-api-access-56w9f\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-glkt9\" (UID: \"5db55adf-c067-44de-ad20-4b8a138e2576\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-glkt9" Nov 29 08:13:03 crc kubenswrapper[4795]: I1129 08:13:03.196379 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5db55adf-c067-44de-ad20-4b8a138e2576-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-glkt9\" (UID: \"5db55adf-c067-44de-ad20-4b8a138e2576\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-glkt9" Nov 29 08:13:03 crc kubenswrapper[4795]: I1129 08:13:03.275868 4795 scope.go:117] "RemoveContainer" containerID="cb788e622da5bdf804dcfa0c959dfb0158a848bc427ed90337dfd530ea78ea5d" Nov 29 08:13:03 crc kubenswrapper[4795]: E1129 08:13:03.276178 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:13:03 crc kubenswrapper[4795]: I1129 08:13:03.298811 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5db55adf-c067-44de-ad20-4b8a138e2576-ssh-key\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-glkt9\" (UID: \"5db55adf-c067-44de-ad20-4b8a138e2576\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-glkt9" Nov 29 08:13:03 crc kubenswrapper[4795]: I1129 08:13:03.299110 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56w9f\" (UniqueName: \"kubernetes.io/projected/5db55adf-c067-44de-ad20-4b8a138e2576-kube-api-access-56w9f\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-glkt9\" (UID: \"5db55adf-c067-44de-ad20-4b8a138e2576\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-glkt9" Nov 29 08:13:03 crc kubenswrapper[4795]: I1129 08:13:03.299159 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5db55adf-c067-44de-ad20-4b8a138e2576-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-glkt9\" (UID: \"5db55adf-c067-44de-ad20-4b8a138e2576\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-glkt9" Nov 29 08:13:03 crc kubenswrapper[4795]: I1129 08:13:03.299354 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db55adf-c067-44de-ad20-4b8a138e2576-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-glkt9\" (UID: \"5db55adf-c067-44de-ad20-4b8a138e2576\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-glkt9" Nov 29 08:13:03 crc kubenswrapper[4795]: I1129 08:13:03.305551 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5db55adf-c067-44de-ad20-4b8a138e2576-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-glkt9\" (UID: \"5db55adf-c067-44de-ad20-4b8a138e2576\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-glkt9" Nov 29 08:13:03 crc kubenswrapper[4795]: I1129 08:13:03.305744 4795 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db55adf-c067-44de-ad20-4b8a138e2576-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-glkt9\" (UID: \"5db55adf-c067-44de-ad20-4b8a138e2576\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-glkt9" Nov 29 08:13:03 crc kubenswrapper[4795]: I1129 08:13:03.315379 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5db55adf-c067-44de-ad20-4b8a138e2576-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-glkt9\" (UID: \"5db55adf-c067-44de-ad20-4b8a138e2576\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-glkt9" Nov 29 08:13:03 crc kubenswrapper[4795]: I1129 08:13:03.322510 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56w9f\" (UniqueName: \"kubernetes.io/projected/5db55adf-c067-44de-ad20-4b8a138e2576-kube-api-access-56w9f\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-glkt9\" (UID: \"5db55adf-c067-44de-ad20-4b8a138e2576\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-glkt9" Nov 29 08:13:03 crc kubenswrapper[4795]: I1129 08:13:03.372276 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-glkt9" Nov 29 08:13:03 crc kubenswrapper[4795]: I1129 08:13:03.965011 4795 generic.go:334] "Generic (PLEG): container finished" podID="72d0f4d2-d953-4cfa-a24e-221e2cf4e994" containerID="cafc02ffd2e71c98c39616ad76cdb228f7f2729d66f07556ce92fc337917f007" exitCode=0 Nov 29 08:13:03 crc kubenswrapper[4795]: I1129 08:13:03.965363 4795 generic.go:334] "Generic (PLEG): container finished" podID="72d0f4d2-d953-4cfa-a24e-221e2cf4e994" containerID="ee97bc8feac99071eb68a042602a82d308a7e8e5198684037ba2618866cbcb5d" exitCode=0 Nov 29 08:13:03 crc kubenswrapper[4795]: I1129 08:13:03.965109 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"72d0f4d2-d953-4cfa-a24e-221e2cf4e994","Type":"ContainerDied","Data":"cafc02ffd2e71c98c39616ad76cdb228f7f2729d66f07556ce92fc337917f007"} Nov 29 08:13:03 crc kubenswrapper[4795]: I1129 08:13:03.965409 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"72d0f4d2-d953-4cfa-a24e-221e2cf4e994","Type":"ContainerDied","Data":"ee97bc8feac99071eb68a042602a82d308a7e8e5198684037ba2618866cbcb5d"} Nov 29 08:13:03 crc kubenswrapper[4795]: I1129 08:13:03.997032 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-glkt9"] Nov 29 08:13:04 crc kubenswrapper[4795]: I1129 08:13:04.979564 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-glkt9" event={"ID":"5db55adf-c067-44de-ad20-4b8a138e2576","Type":"ContainerStarted","Data":"4ce631b12fc130cf2b50f8afc58258b7e4c0adcf4e10ba6a199459e7837e0601"} Nov 29 08:13:04 crc kubenswrapper[4795]: I1129 08:13:04.980022 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-glkt9" 
event={"ID":"5db55adf-c067-44de-ad20-4b8a138e2576","Type":"ContainerStarted","Data":"e83e94831fe832852bd8a302ff6414eb1273aa5a327fb801bd440b5466edef86"} Nov 29 08:13:05 crc kubenswrapper[4795]: I1129 08:13:05.008505 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-glkt9" podStartSLOduration=1.324542324 podStartE2EDuration="2.008485186s" podCreationTimestamp="2025-11-29 08:13:03 +0000 UTC" firstStartedPulling="2025-11-29 08:13:04.002466607 +0000 UTC m=+2029.978042397" lastFinishedPulling="2025-11-29 08:13:04.686409469 +0000 UTC m=+2030.661985259" observedRunningTime="2025-11-29 08:13:04.994471029 +0000 UTC m=+2030.970046819" watchObservedRunningTime="2025-11-29 08:13:05.008485186 +0000 UTC m=+2030.984060976" Nov 29 08:13:08 crc kubenswrapper[4795]: I1129 08:13:08.251639 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z2fm2" Nov 29 08:13:08 crc kubenswrapper[4795]: I1129 08:13:08.252379 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z2fm2" Nov 29 08:13:08 crc kubenswrapper[4795]: I1129 08:13:08.801559 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 29 08:13:08 crc kubenswrapper[4795]: I1129 08:13:08.946566 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/72d0f4d2-d953-4cfa-a24e-221e2cf4e994-internal-tls-certs\") pod \"72d0f4d2-d953-4cfa-a24e-221e2cf4e994\" (UID: \"72d0f4d2-d953-4cfa-a24e-221e2cf4e994\") " Nov 29 08:13:08 crc kubenswrapper[4795]: I1129 08:13:08.946805 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72d0f4d2-d953-4cfa-a24e-221e2cf4e994-combined-ca-bundle\") pod \"72d0f4d2-d953-4cfa-a24e-221e2cf4e994\" (UID: \"72d0f4d2-d953-4cfa-a24e-221e2cf4e994\") " Nov 29 08:13:08 crc kubenswrapper[4795]: I1129 08:13:08.946863 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72d0f4d2-d953-4cfa-a24e-221e2cf4e994-scripts\") pod \"72d0f4d2-d953-4cfa-a24e-221e2cf4e994\" (UID: \"72d0f4d2-d953-4cfa-a24e-221e2cf4e994\") " Nov 29 08:13:08 crc kubenswrapper[4795]: I1129 08:13:08.946897 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtv6q\" (UniqueName: \"kubernetes.io/projected/72d0f4d2-d953-4cfa-a24e-221e2cf4e994-kube-api-access-qtv6q\") pod \"72d0f4d2-d953-4cfa-a24e-221e2cf4e994\" (UID: \"72d0f4d2-d953-4cfa-a24e-221e2cf4e994\") " Nov 29 08:13:08 crc kubenswrapper[4795]: I1129 08:13:08.946966 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/72d0f4d2-d953-4cfa-a24e-221e2cf4e994-public-tls-certs\") pod \"72d0f4d2-d953-4cfa-a24e-221e2cf4e994\" (UID: \"72d0f4d2-d953-4cfa-a24e-221e2cf4e994\") " Nov 29 08:13:08 crc kubenswrapper[4795]: I1129 08:13:08.946993 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/72d0f4d2-d953-4cfa-a24e-221e2cf4e994-config-data\") pod \"72d0f4d2-d953-4cfa-a24e-221e2cf4e994\" (UID: \"72d0f4d2-d953-4cfa-a24e-221e2cf4e994\") " Nov 29 08:13:08 crc kubenswrapper[4795]: I1129 08:13:08.956674 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72d0f4d2-d953-4cfa-a24e-221e2cf4e994-scripts" (OuterVolumeSpecName: "scripts") pod "72d0f4d2-d953-4cfa-a24e-221e2cf4e994" (UID: "72d0f4d2-d953-4cfa-a24e-221e2cf4e994"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:13:08 crc kubenswrapper[4795]: I1129 08:13:08.962130 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72d0f4d2-d953-4cfa-a24e-221e2cf4e994-kube-api-access-qtv6q" (OuterVolumeSpecName: "kube-api-access-qtv6q") pod "72d0f4d2-d953-4cfa-a24e-221e2cf4e994" (UID: "72d0f4d2-d953-4cfa-a24e-221e2cf4e994"). InnerVolumeSpecName "kube-api-access-qtv6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.017454 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72d0f4d2-d953-4cfa-a24e-221e2cf4e994-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "72d0f4d2-d953-4cfa-a24e-221e2cf4e994" (UID: "72d0f4d2-d953-4cfa-a24e-221e2cf4e994"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.034709 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72d0f4d2-d953-4cfa-a24e-221e2cf4e994-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "72d0f4d2-d953-4cfa-a24e-221e2cf4e994" (UID: "72d0f4d2-d953-4cfa-a24e-221e2cf4e994"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.036062 4795 generic.go:334] "Generic (PLEG): container finished" podID="72d0f4d2-d953-4cfa-a24e-221e2cf4e994" containerID="60eef7bffab2cda952c51080277affae2d2bfddacb15e17b60e8d551203f9d14" exitCode=0 Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.036137 4795 generic.go:334] "Generic (PLEG): container finished" podID="72d0f4d2-d953-4cfa-a24e-221e2cf4e994" containerID="5da0b91ad20b347b423d990141652916306ff6fda9c2ae32b780ef3cff470119" exitCode=0 Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.036139 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"72d0f4d2-d953-4cfa-a24e-221e2cf4e994","Type":"ContainerDied","Data":"60eef7bffab2cda952c51080277affae2d2bfddacb15e17b60e8d551203f9d14"} Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.036164 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.036222 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"72d0f4d2-d953-4cfa-a24e-221e2cf4e994","Type":"ContainerDied","Data":"5da0b91ad20b347b423d990141652916306ff6fda9c2ae32b780ef3cff470119"} Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.036237 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"72d0f4d2-d953-4cfa-a24e-221e2cf4e994","Type":"ContainerDied","Data":"3223a642dc38eb3ad653079297261bf76061fcc0bbd8360b54f0ebe5658dec00"} Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.036255 4795 scope.go:117] "RemoveContainer" containerID="60eef7bffab2cda952c51080277affae2d2bfddacb15e17b60e8d551203f9d14" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.049984 4795 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72d0f4d2-d953-4cfa-a24e-221e2cf4e994-scripts\") 
on node \"crc\" DevicePath \"\"" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.050053 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtv6q\" (UniqueName: \"kubernetes.io/projected/72d0f4d2-d953-4cfa-a24e-221e2cf4e994-kube-api-access-qtv6q\") on node \"crc\" DevicePath \"\"" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.050066 4795 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/72d0f4d2-d953-4cfa-a24e-221e2cf4e994-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.050075 4795 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/72d0f4d2-d953-4cfa-a24e-221e2cf4e994-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.081043 4795 scope.go:117] "RemoveContainer" containerID="5da0b91ad20b347b423d990141652916306ff6fda9c2ae32b780ef3cff470119" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.097032 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72d0f4d2-d953-4cfa-a24e-221e2cf4e994-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72d0f4d2-d953-4cfa-a24e-221e2cf4e994" (UID: "72d0f4d2-d953-4cfa-a24e-221e2cf4e994"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.102286 4795 scope.go:117] "RemoveContainer" containerID="cafc02ffd2e71c98c39616ad76cdb228f7f2729d66f07556ce92fc337917f007" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.114802 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72d0f4d2-d953-4cfa-a24e-221e2cf4e994-config-data" (OuterVolumeSpecName: "config-data") pod "72d0f4d2-d953-4cfa-a24e-221e2cf4e994" (UID: "72d0f4d2-d953-4cfa-a24e-221e2cf4e994"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.138732 4795 scope.go:117] "RemoveContainer" containerID="ee97bc8feac99071eb68a042602a82d308a7e8e5198684037ba2618866cbcb5d" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.152275 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72d0f4d2-d953-4cfa-a24e-221e2cf4e994-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.152315 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72d0f4d2-d953-4cfa-a24e-221e2cf4e994-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.169011 4795 scope.go:117] "RemoveContainer" containerID="60eef7bffab2cda952c51080277affae2d2bfddacb15e17b60e8d551203f9d14" Nov 29 08:13:09 crc kubenswrapper[4795]: E1129 08:13:09.170107 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60eef7bffab2cda952c51080277affae2d2bfddacb15e17b60e8d551203f9d14\": container with ID starting with 60eef7bffab2cda952c51080277affae2d2bfddacb15e17b60e8d551203f9d14 not found: ID does not exist" 
containerID="60eef7bffab2cda952c51080277affae2d2bfddacb15e17b60e8d551203f9d14" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.170141 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60eef7bffab2cda952c51080277affae2d2bfddacb15e17b60e8d551203f9d14"} err="failed to get container status \"60eef7bffab2cda952c51080277affae2d2bfddacb15e17b60e8d551203f9d14\": rpc error: code = NotFound desc = could not find container \"60eef7bffab2cda952c51080277affae2d2bfddacb15e17b60e8d551203f9d14\": container with ID starting with 60eef7bffab2cda952c51080277affae2d2bfddacb15e17b60e8d551203f9d14 not found: ID does not exist" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.170165 4795 scope.go:117] "RemoveContainer" containerID="5da0b91ad20b347b423d990141652916306ff6fda9c2ae32b780ef3cff470119" Nov 29 08:13:09 crc kubenswrapper[4795]: E1129 08:13:09.171745 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5da0b91ad20b347b423d990141652916306ff6fda9c2ae32b780ef3cff470119\": container with ID starting with 5da0b91ad20b347b423d990141652916306ff6fda9c2ae32b780ef3cff470119 not found: ID does not exist" containerID="5da0b91ad20b347b423d990141652916306ff6fda9c2ae32b780ef3cff470119" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.171790 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5da0b91ad20b347b423d990141652916306ff6fda9c2ae32b780ef3cff470119"} err="failed to get container status \"5da0b91ad20b347b423d990141652916306ff6fda9c2ae32b780ef3cff470119\": rpc error: code = NotFound desc = could not find container \"5da0b91ad20b347b423d990141652916306ff6fda9c2ae32b780ef3cff470119\": container with ID starting with 5da0b91ad20b347b423d990141652916306ff6fda9c2ae32b780ef3cff470119 not found: ID does not exist" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.171818 4795 scope.go:117] 
"RemoveContainer" containerID="cafc02ffd2e71c98c39616ad76cdb228f7f2729d66f07556ce92fc337917f007" Nov 29 08:13:09 crc kubenswrapper[4795]: E1129 08:13:09.172638 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cafc02ffd2e71c98c39616ad76cdb228f7f2729d66f07556ce92fc337917f007\": container with ID starting with cafc02ffd2e71c98c39616ad76cdb228f7f2729d66f07556ce92fc337917f007 not found: ID does not exist" containerID="cafc02ffd2e71c98c39616ad76cdb228f7f2729d66f07556ce92fc337917f007" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.172664 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cafc02ffd2e71c98c39616ad76cdb228f7f2729d66f07556ce92fc337917f007"} err="failed to get container status \"cafc02ffd2e71c98c39616ad76cdb228f7f2729d66f07556ce92fc337917f007\": rpc error: code = NotFound desc = could not find container \"cafc02ffd2e71c98c39616ad76cdb228f7f2729d66f07556ce92fc337917f007\": container with ID starting with cafc02ffd2e71c98c39616ad76cdb228f7f2729d66f07556ce92fc337917f007 not found: ID does not exist" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.172681 4795 scope.go:117] "RemoveContainer" containerID="ee97bc8feac99071eb68a042602a82d308a7e8e5198684037ba2618866cbcb5d" Nov 29 08:13:09 crc kubenswrapper[4795]: E1129 08:13:09.173121 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee97bc8feac99071eb68a042602a82d308a7e8e5198684037ba2618866cbcb5d\": container with ID starting with ee97bc8feac99071eb68a042602a82d308a7e8e5198684037ba2618866cbcb5d not found: ID does not exist" containerID="ee97bc8feac99071eb68a042602a82d308a7e8e5198684037ba2618866cbcb5d" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.173148 4795 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ee97bc8feac99071eb68a042602a82d308a7e8e5198684037ba2618866cbcb5d"} err="failed to get container status \"ee97bc8feac99071eb68a042602a82d308a7e8e5198684037ba2618866cbcb5d\": rpc error: code = NotFound desc = could not find container \"ee97bc8feac99071eb68a042602a82d308a7e8e5198684037ba2618866cbcb5d\": container with ID starting with ee97bc8feac99071eb68a042602a82d308a7e8e5198684037ba2618866cbcb5d not found: ID does not exist" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.173165 4795 scope.go:117] "RemoveContainer" containerID="60eef7bffab2cda952c51080277affae2d2bfddacb15e17b60e8d551203f9d14" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.173723 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60eef7bffab2cda952c51080277affae2d2bfddacb15e17b60e8d551203f9d14"} err="failed to get container status \"60eef7bffab2cda952c51080277affae2d2bfddacb15e17b60e8d551203f9d14\": rpc error: code = NotFound desc = could not find container \"60eef7bffab2cda952c51080277affae2d2bfddacb15e17b60e8d551203f9d14\": container with ID starting with 60eef7bffab2cda952c51080277affae2d2bfddacb15e17b60e8d551203f9d14 not found: ID does not exist" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.173753 4795 scope.go:117] "RemoveContainer" containerID="5da0b91ad20b347b423d990141652916306ff6fda9c2ae32b780ef3cff470119" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.174043 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5da0b91ad20b347b423d990141652916306ff6fda9c2ae32b780ef3cff470119"} err="failed to get container status \"5da0b91ad20b347b423d990141652916306ff6fda9c2ae32b780ef3cff470119\": rpc error: code = NotFound desc = could not find container \"5da0b91ad20b347b423d990141652916306ff6fda9c2ae32b780ef3cff470119\": container with ID starting with 5da0b91ad20b347b423d990141652916306ff6fda9c2ae32b780ef3cff470119 not found: ID does not 
exist" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.174072 4795 scope.go:117] "RemoveContainer" containerID="cafc02ffd2e71c98c39616ad76cdb228f7f2729d66f07556ce92fc337917f007" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.174331 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cafc02ffd2e71c98c39616ad76cdb228f7f2729d66f07556ce92fc337917f007"} err="failed to get container status \"cafc02ffd2e71c98c39616ad76cdb228f7f2729d66f07556ce92fc337917f007\": rpc error: code = NotFound desc = could not find container \"cafc02ffd2e71c98c39616ad76cdb228f7f2729d66f07556ce92fc337917f007\": container with ID starting with cafc02ffd2e71c98c39616ad76cdb228f7f2729d66f07556ce92fc337917f007 not found: ID does not exist" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.174387 4795 scope.go:117] "RemoveContainer" containerID="ee97bc8feac99071eb68a042602a82d308a7e8e5198684037ba2618866cbcb5d" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.174645 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee97bc8feac99071eb68a042602a82d308a7e8e5198684037ba2618866cbcb5d"} err="failed to get container status \"ee97bc8feac99071eb68a042602a82d308a7e8e5198684037ba2618866cbcb5d\": rpc error: code = NotFound desc = could not find container \"ee97bc8feac99071eb68a042602a82d308a7e8e5198684037ba2618866cbcb5d\": container with ID starting with ee97bc8feac99071eb68a042602a82d308a7e8e5198684037ba2618866cbcb5d not found: ID does not exist" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.305215 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z2fm2" podUID="08402fb0-7ff1-492e-8d2b-423e248fb787" containerName="registry-server" probeResult="failure" output=< Nov 29 08:13:09 crc kubenswrapper[4795]: timeout: failed to connect service ":50051" within 1s Nov 29 08:13:09 crc kubenswrapper[4795]: > Nov 29 08:13:09 crc 
kubenswrapper[4795]: I1129 08:13:09.372743 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.389306 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.403726 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 29 08:13:09 crc kubenswrapper[4795]: E1129 08:13:09.404405 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72d0f4d2-d953-4cfa-a24e-221e2cf4e994" containerName="aodh-evaluator" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.404426 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="72d0f4d2-d953-4cfa-a24e-221e2cf4e994" containerName="aodh-evaluator" Nov 29 08:13:09 crc kubenswrapper[4795]: E1129 08:13:09.404454 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72d0f4d2-d953-4cfa-a24e-221e2cf4e994" containerName="aodh-notifier" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.404461 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="72d0f4d2-d953-4cfa-a24e-221e2cf4e994" containerName="aodh-notifier" Nov 29 08:13:09 crc kubenswrapper[4795]: E1129 08:13:09.404494 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72d0f4d2-d953-4cfa-a24e-221e2cf4e994" containerName="aodh-api" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.404501 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="72d0f4d2-d953-4cfa-a24e-221e2cf4e994" containerName="aodh-api" Nov 29 08:13:09 crc kubenswrapper[4795]: E1129 08:13:09.404515 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72d0f4d2-d953-4cfa-a24e-221e2cf4e994" containerName="aodh-listener" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.404524 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="72d0f4d2-d953-4cfa-a24e-221e2cf4e994" containerName="aodh-listener" Nov 29 08:13:09 crc 
kubenswrapper[4795]: I1129 08:13:09.404842 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="72d0f4d2-d953-4cfa-a24e-221e2cf4e994" containerName="aodh-notifier" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.404872 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="72d0f4d2-d953-4cfa-a24e-221e2cf4e994" containerName="aodh-api" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.404894 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="72d0f4d2-d953-4cfa-a24e-221e2cf4e994" containerName="aodh-evaluator" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.404904 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="72d0f4d2-d953-4cfa-a24e-221e2cf4e994" containerName="aodh-listener" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.407542 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.410050 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.410916 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.411212 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-xg4wp" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.411358 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.411525 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.415555 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.564224 4795 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e8728db-5fe3-46f5-a628-2b3a0f708438-combined-ca-bundle\") pod \"aodh-0\" (UID: \"5e8728db-5fe3-46f5-a628-2b3a0f708438\") " pod="openstack/aodh-0" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.564272 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e8728db-5fe3-46f5-a628-2b3a0f708438-config-data\") pod \"aodh-0\" (UID: \"5e8728db-5fe3-46f5-a628-2b3a0f708438\") " pod="openstack/aodh-0" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.564431 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e8728db-5fe3-46f5-a628-2b3a0f708438-scripts\") pod \"aodh-0\" (UID: \"5e8728db-5fe3-46f5-a628-2b3a0f708438\") " pod="openstack/aodh-0" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.564464 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gdmk\" (UniqueName: \"kubernetes.io/projected/5e8728db-5fe3-46f5-a628-2b3a0f708438-kube-api-access-8gdmk\") pod \"aodh-0\" (UID: \"5e8728db-5fe3-46f5-a628-2b3a0f708438\") " pod="openstack/aodh-0" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.564510 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e8728db-5fe3-46f5-a628-2b3a0f708438-internal-tls-certs\") pod \"aodh-0\" (UID: \"5e8728db-5fe3-46f5-a628-2b3a0f708438\") " pod="openstack/aodh-0" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.564565 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5e8728db-5fe3-46f5-a628-2b3a0f708438-public-tls-certs\") pod \"aodh-0\" (UID: \"5e8728db-5fe3-46f5-a628-2b3a0f708438\") " pod="openstack/aodh-0" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.667645 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e8728db-5fe3-46f5-a628-2b3a0f708438-scripts\") pod \"aodh-0\" (UID: \"5e8728db-5fe3-46f5-a628-2b3a0f708438\") " pod="openstack/aodh-0" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.667739 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gdmk\" (UniqueName: \"kubernetes.io/projected/5e8728db-5fe3-46f5-a628-2b3a0f708438-kube-api-access-8gdmk\") pod \"aodh-0\" (UID: \"5e8728db-5fe3-46f5-a628-2b3a0f708438\") " pod="openstack/aodh-0" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.667804 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e8728db-5fe3-46f5-a628-2b3a0f708438-internal-tls-certs\") pod \"aodh-0\" (UID: \"5e8728db-5fe3-46f5-a628-2b3a0f708438\") " pod="openstack/aodh-0" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.667871 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e8728db-5fe3-46f5-a628-2b3a0f708438-public-tls-certs\") pod \"aodh-0\" (UID: \"5e8728db-5fe3-46f5-a628-2b3a0f708438\") " pod="openstack/aodh-0" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.667896 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e8728db-5fe3-46f5-a628-2b3a0f708438-combined-ca-bundle\") pod \"aodh-0\" (UID: \"5e8728db-5fe3-46f5-a628-2b3a0f708438\") " pod="openstack/aodh-0" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.667913 4795 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e8728db-5fe3-46f5-a628-2b3a0f708438-config-data\") pod \"aodh-0\" (UID: \"5e8728db-5fe3-46f5-a628-2b3a0f708438\") " pod="openstack/aodh-0" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.672818 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e8728db-5fe3-46f5-a628-2b3a0f708438-scripts\") pod \"aodh-0\" (UID: \"5e8728db-5fe3-46f5-a628-2b3a0f708438\") " pod="openstack/aodh-0" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.675208 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e8728db-5fe3-46f5-a628-2b3a0f708438-internal-tls-certs\") pod \"aodh-0\" (UID: \"5e8728db-5fe3-46f5-a628-2b3a0f708438\") " pod="openstack/aodh-0" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.686307 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e8728db-5fe3-46f5-a628-2b3a0f708438-combined-ca-bundle\") pod \"aodh-0\" (UID: \"5e8728db-5fe3-46f5-a628-2b3a0f708438\") " pod="openstack/aodh-0" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.686812 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gdmk\" (UniqueName: \"kubernetes.io/projected/5e8728db-5fe3-46f5-a628-2b3a0f708438-kube-api-access-8gdmk\") pod \"aodh-0\" (UID: \"5e8728db-5fe3-46f5-a628-2b3a0f708438\") " pod="openstack/aodh-0" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.686983 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e8728db-5fe3-46f5-a628-2b3a0f708438-public-tls-certs\") pod \"aodh-0\" (UID: \"5e8728db-5fe3-46f5-a628-2b3a0f708438\") " pod="openstack/aodh-0" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.687548 
4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e8728db-5fe3-46f5-a628-2b3a0f708438-config-data\") pod \"aodh-0\" (UID: \"5e8728db-5fe3-46f5-a628-2b3a0f708438\") " pod="openstack/aodh-0" Nov 29 08:13:09 crc kubenswrapper[4795]: I1129 08:13:09.732449 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 29 08:13:10 crc kubenswrapper[4795]: I1129 08:13:10.235957 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 29 08:13:10 crc kubenswrapper[4795]: I1129 08:13:10.290070 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72d0f4d2-d953-4cfa-a24e-221e2cf4e994" path="/var/lib/kubelet/pods/72d0f4d2-d953-4cfa-a24e-221e2cf4e994/volumes" Nov 29 08:13:10 crc kubenswrapper[4795]: I1129 08:13:10.456622 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-txz6d" Nov 29 08:13:10 crc kubenswrapper[4795]: I1129 08:13:10.456679 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-txz6d" Nov 29 08:13:10 crc kubenswrapper[4795]: I1129 08:13:10.522405 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-txz6d" Nov 29 08:13:11 crc kubenswrapper[4795]: I1129 08:13:11.079171 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5e8728db-5fe3-46f5-a628-2b3a0f708438","Type":"ContainerStarted","Data":"003ce0061b6b96cc1f0c4afa34a6f763dd8596ba3512a764e69b5496050895d7"} Nov 29 08:13:11 crc kubenswrapper[4795]: I1129 08:13:11.079506 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5e8728db-5fe3-46f5-a628-2b3a0f708438","Type":"ContainerStarted","Data":"65fb6d829cb4c37b835bc823ab76e173b6f12b8af016915e1aaa556d8a075861"} Nov 29 08:13:11 crc 
kubenswrapper[4795]: I1129 08:13:11.134927 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-txz6d" Nov 29 08:13:11 crc kubenswrapper[4795]: I1129 08:13:11.216891 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-txz6d"] Nov 29 08:13:11 crc kubenswrapper[4795]: I1129 08:13:11.271842 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2l6px"] Nov 29 08:13:11 crc kubenswrapper[4795]: I1129 08:13:11.272076 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2l6px" podUID="5c01d062-6304-480c-a957-63313b52599a" containerName="registry-server" containerID="cri-o://83a9e75280f18ed2e7789110f4d5dd0bb23d67327ee66c3de906ffbc2a9d7492" gracePeriod=2 Nov 29 08:13:12 crc kubenswrapper[4795]: I1129 08:13:12.231833 4795 generic.go:334] "Generic (PLEG): container finished" podID="5c01d062-6304-480c-a957-63313b52599a" containerID="83a9e75280f18ed2e7789110f4d5dd0bb23d67327ee66c3de906ffbc2a9d7492" exitCode=0 Nov 29 08:13:12 crc kubenswrapper[4795]: I1129 08:13:12.231901 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2l6px" event={"ID":"5c01d062-6304-480c-a957-63313b52599a","Type":"ContainerDied","Data":"83a9e75280f18ed2e7789110f4d5dd0bb23d67327ee66c3de906ffbc2a9d7492"} Nov 29 08:13:12 crc kubenswrapper[4795]: I1129 08:13:12.247094 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2l6px" Nov 29 08:13:12 crc kubenswrapper[4795]: I1129 08:13:12.339816 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c01d062-6304-480c-a957-63313b52599a-utilities\") pod \"5c01d062-6304-480c-a957-63313b52599a\" (UID: \"5c01d062-6304-480c-a957-63313b52599a\") " Nov 29 08:13:12 crc kubenswrapper[4795]: I1129 08:13:12.339924 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c01d062-6304-480c-a957-63313b52599a-catalog-content\") pod \"5c01d062-6304-480c-a957-63313b52599a\" (UID: \"5c01d062-6304-480c-a957-63313b52599a\") " Nov 29 08:13:12 crc kubenswrapper[4795]: I1129 08:13:12.340017 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zl5pg\" (UniqueName: \"kubernetes.io/projected/5c01d062-6304-480c-a957-63313b52599a-kube-api-access-zl5pg\") pod \"5c01d062-6304-480c-a957-63313b52599a\" (UID: \"5c01d062-6304-480c-a957-63313b52599a\") " Nov 29 08:13:12 crc kubenswrapper[4795]: I1129 08:13:12.346276 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c01d062-6304-480c-a957-63313b52599a-utilities" (OuterVolumeSpecName: "utilities") pod "5c01d062-6304-480c-a957-63313b52599a" (UID: "5c01d062-6304-480c-a957-63313b52599a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:13:12 crc kubenswrapper[4795]: I1129 08:13:12.353627 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c01d062-6304-480c-a957-63313b52599a-kube-api-access-zl5pg" (OuterVolumeSpecName: "kube-api-access-zl5pg") pod "5c01d062-6304-480c-a957-63313b52599a" (UID: "5c01d062-6304-480c-a957-63313b52599a"). InnerVolumeSpecName "kube-api-access-zl5pg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:13:12 crc kubenswrapper[4795]: I1129 08:13:12.443295 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c01d062-6304-480c-a957-63313b52599a-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:13:12 crc kubenswrapper[4795]: I1129 08:13:12.443692 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zl5pg\" (UniqueName: \"kubernetes.io/projected/5c01d062-6304-480c-a957-63313b52599a-kube-api-access-zl5pg\") on node \"crc\" DevicePath \"\"" Nov 29 08:13:12 crc kubenswrapper[4795]: I1129 08:13:12.455455 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c01d062-6304-480c-a957-63313b52599a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c01d062-6304-480c-a957-63313b52599a" (UID: "5c01d062-6304-480c-a957-63313b52599a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:13:12 crc kubenswrapper[4795]: I1129 08:13:12.546400 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c01d062-6304-480c-a957-63313b52599a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:13:13 crc kubenswrapper[4795]: I1129 08:13:13.248876 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2l6px" Nov 29 08:13:13 crc kubenswrapper[4795]: I1129 08:13:13.248866 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2l6px" event={"ID":"5c01d062-6304-480c-a957-63313b52599a","Type":"ContainerDied","Data":"2ebe426dd08beb190c6789d6df66068261fdf22b5a79c3a849ad12966571064b"} Nov 29 08:13:13 crc kubenswrapper[4795]: I1129 08:13:13.249053 4795 scope.go:117] "RemoveContainer" containerID="83a9e75280f18ed2e7789110f4d5dd0bb23d67327ee66c3de906ffbc2a9d7492" Nov 29 08:13:13 crc kubenswrapper[4795]: I1129 08:13:13.254144 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5e8728db-5fe3-46f5-a628-2b3a0f708438","Type":"ContainerStarted","Data":"48ce42ceec7a491310121ab537a8782c2d4acfe933668922b51a42b5947c2942"} Nov 29 08:13:13 crc kubenswrapper[4795]: I1129 08:13:13.296611 4795 scope.go:117] "RemoveContainer" containerID="bad2cd182310519d230ab4499ac6ba9287b35c07a986b124bc4b235cd69b9cb0" Nov 29 08:13:13 crc kubenswrapper[4795]: I1129 08:13:13.300998 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2l6px"] Nov 29 08:13:13 crc kubenswrapper[4795]: I1129 08:13:13.315110 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2l6px"] Nov 29 08:13:13 crc kubenswrapper[4795]: I1129 08:13:13.335231 4795 scope.go:117] "RemoveContainer" containerID="e05a803c11a9096d91d6e0214e99af4e6957f46baa8d70a1a22b067fc1271411" Nov 29 08:13:14 crc kubenswrapper[4795]: I1129 08:13:14.307051 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c01d062-6304-480c-a957-63313b52599a" path="/var/lib/kubelet/pods/5c01d062-6304-480c-a957-63313b52599a/volumes" Nov 29 08:13:14 crc kubenswrapper[4795]: I1129 08:13:14.311946 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"5e8728db-5fe3-46f5-a628-2b3a0f708438","Type":"ContainerStarted","Data":"386b42b4b2f2177b52f7873b2f95b34551272bc3d48632abe699b4826f9598a5"} Nov 29 08:13:15 crc kubenswrapper[4795]: I1129 08:13:15.321580 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5e8728db-5fe3-46f5-a628-2b3a0f708438","Type":"ContainerStarted","Data":"d0daec0d307cce4d1427ff0045a25585d6c5f388ff25192db328d933b0dc4e8c"} Nov 29 08:13:15 crc kubenswrapper[4795]: I1129 08:13:15.344287 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.7880715120000001 podStartE2EDuration="6.344270259s" podCreationTimestamp="2025-11-29 08:13:09 +0000 UTC" firstStartedPulling="2025-11-29 08:13:10.234400431 +0000 UTC m=+2036.209976221" lastFinishedPulling="2025-11-29 08:13:14.790599178 +0000 UTC m=+2040.766174968" observedRunningTime="2025-11-29 08:13:15.342394536 +0000 UTC m=+2041.317970326" watchObservedRunningTime="2025-11-29 08:13:15.344270259 +0000 UTC m=+2041.319846049" Nov 29 08:13:17 crc kubenswrapper[4795]: I1129 08:13:17.276531 4795 scope.go:117] "RemoveContainer" containerID="cb788e622da5bdf804dcfa0c959dfb0158a848bc427ed90337dfd530ea78ea5d" Nov 29 08:13:17 crc kubenswrapper[4795]: E1129 08:13:17.277488 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:13:18 crc kubenswrapper[4795]: I1129 08:13:18.332992 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z2fm2" Nov 29 08:13:18 crc kubenswrapper[4795]: I1129 08:13:18.391028 4795 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z2fm2" Nov 29 08:13:18 crc kubenswrapper[4795]: I1129 08:13:18.968974 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z2fm2"] Nov 29 08:13:19 crc kubenswrapper[4795]: I1129 08:13:19.377770 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z2fm2" podUID="08402fb0-7ff1-492e-8d2b-423e248fb787" containerName="registry-server" containerID="cri-o://dae6986c228a930155d7ee99141a0899f2b309f8c762227141215f897de9d002" gracePeriod=2 Nov 29 08:13:20 crc kubenswrapper[4795]: I1129 08:13:20.013683 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z2fm2" Nov 29 08:13:20 crc kubenswrapper[4795]: I1129 08:13:20.086645 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08402fb0-7ff1-492e-8d2b-423e248fb787-catalog-content\") pod \"08402fb0-7ff1-492e-8d2b-423e248fb787\" (UID: \"08402fb0-7ff1-492e-8d2b-423e248fb787\") " Nov 29 08:13:20 crc kubenswrapper[4795]: I1129 08:13:20.086923 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kqhq\" (UniqueName: \"kubernetes.io/projected/08402fb0-7ff1-492e-8d2b-423e248fb787-kube-api-access-2kqhq\") pod \"08402fb0-7ff1-492e-8d2b-423e248fb787\" (UID: \"08402fb0-7ff1-492e-8d2b-423e248fb787\") " Nov 29 08:13:20 crc kubenswrapper[4795]: I1129 08:13:20.086975 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08402fb0-7ff1-492e-8d2b-423e248fb787-utilities\") pod \"08402fb0-7ff1-492e-8d2b-423e248fb787\" (UID: \"08402fb0-7ff1-492e-8d2b-423e248fb787\") " Nov 29 08:13:20 crc kubenswrapper[4795]: I1129 08:13:20.087875 4795 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08402fb0-7ff1-492e-8d2b-423e248fb787-utilities" (OuterVolumeSpecName: "utilities") pod "08402fb0-7ff1-492e-8d2b-423e248fb787" (UID: "08402fb0-7ff1-492e-8d2b-423e248fb787"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:13:20 crc kubenswrapper[4795]: I1129 08:13:20.095165 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08402fb0-7ff1-492e-8d2b-423e248fb787-kube-api-access-2kqhq" (OuterVolumeSpecName: "kube-api-access-2kqhq") pod "08402fb0-7ff1-492e-8d2b-423e248fb787" (UID: "08402fb0-7ff1-492e-8d2b-423e248fb787"). InnerVolumeSpecName "kube-api-access-2kqhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:13:20 crc kubenswrapper[4795]: I1129 08:13:20.190628 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kqhq\" (UniqueName: \"kubernetes.io/projected/08402fb0-7ff1-492e-8d2b-423e248fb787-kube-api-access-2kqhq\") on node \"crc\" DevicePath \"\"" Nov 29 08:13:20 crc kubenswrapper[4795]: I1129 08:13:20.190667 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08402fb0-7ff1-492e-8d2b-423e248fb787-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:13:20 crc kubenswrapper[4795]: I1129 08:13:20.226289 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08402fb0-7ff1-492e-8d2b-423e248fb787-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08402fb0-7ff1-492e-8d2b-423e248fb787" (UID: "08402fb0-7ff1-492e-8d2b-423e248fb787"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:13:20 crc kubenswrapper[4795]: I1129 08:13:20.292412 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08402fb0-7ff1-492e-8d2b-423e248fb787-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:13:20 crc kubenswrapper[4795]: I1129 08:13:20.390015 4795 generic.go:334] "Generic (PLEG): container finished" podID="08402fb0-7ff1-492e-8d2b-423e248fb787" containerID="dae6986c228a930155d7ee99141a0899f2b309f8c762227141215f897de9d002" exitCode=0 Nov 29 08:13:20 crc kubenswrapper[4795]: I1129 08:13:20.390055 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z2fm2" event={"ID":"08402fb0-7ff1-492e-8d2b-423e248fb787","Type":"ContainerDied","Data":"dae6986c228a930155d7ee99141a0899f2b309f8c762227141215f897de9d002"} Nov 29 08:13:20 crc kubenswrapper[4795]: I1129 08:13:20.390085 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z2fm2" event={"ID":"08402fb0-7ff1-492e-8d2b-423e248fb787","Type":"ContainerDied","Data":"fe28fd2e567b2b93b287e746777ca101774c87b144687b283908d78194265706"} Nov 29 08:13:20 crc kubenswrapper[4795]: I1129 08:13:20.390101 4795 scope.go:117] "RemoveContainer" containerID="dae6986c228a930155d7ee99141a0899f2b309f8c762227141215f897de9d002" Nov 29 08:13:20 crc kubenswrapper[4795]: I1129 08:13:20.390125 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z2fm2" Nov 29 08:13:20 crc kubenswrapper[4795]: I1129 08:13:20.420980 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z2fm2"] Nov 29 08:13:20 crc kubenswrapper[4795]: I1129 08:13:20.422289 4795 scope.go:117] "RemoveContainer" containerID="c9d7519b038636f4bec26463cb6672dd649a61335eb303ca3d6fb18470bef1d7" Nov 29 08:13:20 crc kubenswrapper[4795]: I1129 08:13:20.434166 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z2fm2"] Nov 29 08:13:20 crc kubenswrapper[4795]: I1129 08:13:20.450092 4795 scope.go:117] "RemoveContainer" containerID="5ad331779aeeb15b768cddfa0d09786aad1066327b956aaf08355788d7e5f2e7" Nov 29 08:13:20 crc kubenswrapper[4795]: I1129 08:13:20.504236 4795 scope.go:117] "RemoveContainer" containerID="dae6986c228a930155d7ee99141a0899f2b309f8c762227141215f897de9d002" Nov 29 08:13:20 crc kubenswrapper[4795]: E1129 08:13:20.505069 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dae6986c228a930155d7ee99141a0899f2b309f8c762227141215f897de9d002\": container with ID starting with dae6986c228a930155d7ee99141a0899f2b309f8c762227141215f897de9d002 not found: ID does not exist" containerID="dae6986c228a930155d7ee99141a0899f2b309f8c762227141215f897de9d002" Nov 29 08:13:20 crc kubenswrapper[4795]: I1129 08:13:20.505109 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dae6986c228a930155d7ee99141a0899f2b309f8c762227141215f897de9d002"} err="failed to get container status \"dae6986c228a930155d7ee99141a0899f2b309f8c762227141215f897de9d002\": rpc error: code = NotFound desc = could not find container \"dae6986c228a930155d7ee99141a0899f2b309f8c762227141215f897de9d002\": container with ID starting with dae6986c228a930155d7ee99141a0899f2b309f8c762227141215f897de9d002 not found: ID does 
not exist" Nov 29 08:13:20 crc kubenswrapper[4795]: I1129 08:13:20.505138 4795 scope.go:117] "RemoveContainer" containerID="c9d7519b038636f4bec26463cb6672dd649a61335eb303ca3d6fb18470bef1d7" Nov 29 08:13:20 crc kubenswrapper[4795]: E1129 08:13:20.505577 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9d7519b038636f4bec26463cb6672dd649a61335eb303ca3d6fb18470bef1d7\": container with ID starting with c9d7519b038636f4bec26463cb6672dd649a61335eb303ca3d6fb18470bef1d7 not found: ID does not exist" containerID="c9d7519b038636f4bec26463cb6672dd649a61335eb303ca3d6fb18470bef1d7" Nov 29 08:13:20 crc kubenswrapper[4795]: I1129 08:13:20.505639 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9d7519b038636f4bec26463cb6672dd649a61335eb303ca3d6fb18470bef1d7"} err="failed to get container status \"c9d7519b038636f4bec26463cb6672dd649a61335eb303ca3d6fb18470bef1d7\": rpc error: code = NotFound desc = could not find container \"c9d7519b038636f4bec26463cb6672dd649a61335eb303ca3d6fb18470bef1d7\": container with ID starting with c9d7519b038636f4bec26463cb6672dd649a61335eb303ca3d6fb18470bef1d7 not found: ID does not exist" Nov 29 08:13:20 crc kubenswrapper[4795]: I1129 08:13:20.505654 4795 scope.go:117] "RemoveContainer" containerID="5ad331779aeeb15b768cddfa0d09786aad1066327b956aaf08355788d7e5f2e7" Nov 29 08:13:20 crc kubenswrapper[4795]: E1129 08:13:20.506129 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ad331779aeeb15b768cddfa0d09786aad1066327b956aaf08355788d7e5f2e7\": container with ID starting with 5ad331779aeeb15b768cddfa0d09786aad1066327b956aaf08355788d7e5f2e7 not found: ID does not exist" containerID="5ad331779aeeb15b768cddfa0d09786aad1066327b956aaf08355788d7e5f2e7" Nov 29 08:13:20 crc kubenswrapper[4795]: I1129 08:13:20.506149 4795 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ad331779aeeb15b768cddfa0d09786aad1066327b956aaf08355788d7e5f2e7"} err="failed to get container status \"5ad331779aeeb15b768cddfa0d09786aad1066327b956aaf08355788d7e5f2e7\": rpc error: code = NotFound desc = could not find container \"5ad331779aeeb15b768cddfa0d09786aad1066327b956aaf08355788d7e5f2e7\": container with ID starting with 5ad331779aeeb15b768cddfa0d09786aad1066327b956aaf08355788d7e5f2e7 not found: ID does not exist" Nov 29 08:13:22 crc kubenswrapper[4795]: I1129 08:13:22.291817 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08402fb0-7ff1-492e-8d2b-423e248fb787" path="/var/lib/kubelet/pods/08402fb0-7ff1-492e-8d2b-423e248fb787/volumes" Nov 29 08:13:28 crc kubenswrapper[4795]: I1129 08:13:28.277192 4795 scope.go:117] "RemoveContainer" containerID="cb788e622da5bdf804dcfa0c959dfb0158a848bc427ed90337dfd530ea78ea5d" Nov 29 08:13:28 crc kubenswrapper[4795]: E1129 08:13:28.278192 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:13:28 crc kubenswrapper[4795]: I1129 08:13:28.616165 4795 scope.go:117] "RemoveContainer" containerID="e157dbb58ff15a1ab1105d74992b5fcb067df0425fca9c798c7f42c6439b79c2" Nov 29 08:13:28 crc kubenswrapper[4795]: I1129 08:13:28.638996 4795 scope.go:117] "RemoveContainer" containerID="93f7ecd76efba30a5b144876a832efb59a9fe30245e7dd0500e1efcce1cf2ce4" Nov 29 08:13:28 crc kubenswrapper[4795]: I1129 08:13:28.664821 4795 scope.go:117] "RemoveContainer" containerID="d391db1ee9e1b18be80d972b0c3b6ac2b4b1f01cef28302248aa1682e5d5d0df" Nov 29 08:13:28 crc kubenswrapper[4795]: 
I1129 08:13:28.687841 4795 scope.go:117] "RemoveContainer" containerID="54d689b5c4a8a4ba1c171dd892162e93f0d06097d24ec5730b0bedca6d37472f" Nov 29 08:13:28 crc kubenswrapper[4795]: I1129 08:13:28.712920 4795 scope.go:117] "RemoveContainer" containerID="52236dc856c9ff58b724e49ea15972d1a4376f5bffb0f668fd6fc0b638a4778d" Nov 29 08:13:41 crc kubenswrapper[4795]: I1129 08:13:41.276161 4795 scope.go:117] "RemoveContainer" containerID="cb788e622da5bdf804dcfa0c959dfb0158a848bc427ed90337dfd530ea78ea5d" Nov 29 08:13:41 crc kubenswrapper[4795]: E1129 08:13:41.277243 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:13:45 crc kubenswrapper[4795]: I1129 08:13:45.053805 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-4710-account-create-update-469z8"] Nov 29 08:13:45 crc kubenswrapper[4795]: I1129 08:13:45.068166 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-b6t5r"] Nov 29 08:13:45 crc kubenswrapper[4795]: I1129 08:13:45.080350 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-dff1-account-create-update-qpdmv"] Nov 29 08:13:45 crc kubenswrapper[4795]: I1129 08:13:45.091768 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6ef0-account-create-update-7swjc"] Nov 29 08:13:45 crc kubenswrapper[4795]: I1129 08:13:45.102695 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-4s8c4"] Nov 29 08:13:45 crc kubenswrapper[4795]: I1129 08:13:45.113741 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/mysqld-exporter-dff1-account-create-update-qpdmv"] Nov 29 08:13:45 crc kubenswrapper[4795]: I1129 08:13:45.124017 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-6ef0-account-create-update-7swjc"] Nov 29 08:13:45 crc kubenswrapper[4795]: I1129 08:13:45.154999 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-b6t5r"] Nov 29 08:13:45 crc kubenswrapper[4795]: I1129 08:13:45.166744 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-4s8c4"] Nov 29 08:13:45 crc kubenswrapper[4795]: I1129 08:13:45.177514 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-4710-account-create-update-469z8"] Nov 29 08:13:46 crc kubenswrapper[4795]: I1129 08:13:46.035735 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-346e-account-create-update-cw8zl"] Nov 29 08:13:46 crc kubenswrapper[4795]: I1129 08:13:46.046831 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-hld85"] Nov 29 08:13:46 crc kubenswrapper[4795]: I1129 08:13:46.056736 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-346e-account-create-update-cw8zl"] Nov 29 08:13:46 crc kubenswrapper[4795]: I1129 08:13:46.066712 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-hld85"] Nov 29 08:13:46 crc kubenswrapper[4795]: I1129 08:13:46.290094 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cf81321-88f9-43ee-8356-e0355bc3da52" path="/var/lib/kubelet/pods/2cf81321-88f9-43ee-8356-e0355bc3da52/volumes" Nov 29 08:13:46 crc kubenswrapper[4795]: I1129 08:13:46.291270 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e6def42-cee6-477c-940a-1bb6e20df694" path="/var/lib/kubelet/pods/5e6def42-cee6-477c-940a-1bb6e20df694/volumes" Nov 29 08:13:46 crc kubenswrapper[4795]: I1129 08:13:46.292123 4795 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="696f8b00-2104-47ea-bc5c-0e317dd00de1" path="/var/lib/kubelet/pods/696f8b00-2104-47ea-bc5c-0e317dd00de1/volumes" Nov 29 08:13:46 crc kubenswrapper[4795]: I1129 08:13:46.293003 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c47add8-93ad-456c-8f90-bb854d981a3e" path="/var/lib/kubelet/pods/8c47add8-93ad-456c-8f90-bb854d981a3e/volumes" Nov 29 08:13:46 crc kubenswrapper[4795]: I1129 08:13:46.294457 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4891454-7144-4416-b2d3-f16e56001077" path="/var/lib/kubelet/pods/e4891454-7144-4416-b2d3-f16e56001077/volumes" Nov 29 08:13:46 crc kubenswrapper[4795]: I1129 08:13:46.295257 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0a8025f-2b8a-4f5d-9462-09e032e5b6c2" path="/var/lib/kubelet/pods/f0a8025f-2b8a-4f5d-9462-09e032e5b6c2/volumes" Nov 29 08:13:46 crc kubenswrapper[4795]: I1129 08:13:46.295949 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff621b92-0616-4d28-abd8-b18c17c6990e" path="/var/lib/kubelet/pods/ff621b92-0616-4d28-abd8-b18c17c6990e/volumes" Nov 29 08:13:47 crc kubenswrapper[4795]: I1129 08:13:47.031699 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-f7j9q"] Nov 29 08:13:47 crc kubenswrapper[4795]: I1129 08:13:47.044031 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-f7j9q"] Nov 29 08:13:48 crc kubenswrapper[4795]: I1129 08:13:48.291877 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02097025-8724-4b6a-b817-d8a5d40f2d24" path="/var/lib/kubelet/pods/02097025-8724-4b6a-b817-d8a5d40f2d24/volumes" Nov 29 08:13:56 crc kubenswrapper[4795]: I1129 08:13:56.276956 4795 scope.go:117] "RemoveContainer" containerID="cb788e622da5bdf804dcfa0c959dfb0158a848bc427ed90337dfd530ea78ea5d" Nov 29 08:13:56 crc 
kubenswrapper[4795]: I1129 08:13:56.822273 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerStarted","Data":"fb7eb9a38d35ff5b0e7cfded64550978b04bdbcec3194cda206b101003d9d236"} Nov 29 08:13:57 crc kubenswrapper[4795]: I1129 08:13:57.049364 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-wjgpz"] Nov 29 08:13:57 crc kubenswrapper[4795]: I1129 08:13:57.064349 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-47c8-account-create-update-jvpms"] Nov 29 08:13:57 crc kubenswrapper[4795]: I1129 08:13:57.079051 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-47c8-account-create-update-jvpms"] Nov 29 08:13:57 crc kubenswrapper[4795]: I1129 08:13:57.091783 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-wjgpz"] Nov 29 08:13:58 crc kubenswrapper[4795]: I1129 08:13:58.313100 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="613a66e8-273f-4355-bea7-08909eb514e8" path="/var/lib/kubelet/pods/613a66e8-273f-4355-bea7-08909eb514e8/volumes" Nov 29 08:13:58 crc kubenswrapper[4795]: I1129 08:13:58.315137 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff80bff6-f385-4876-9967-5622f2b44e9f" path="/var/lib/kubelet/pods/ff80bff6-f385-4876-9967-5622f2b44e9f/volumes" Nov 29 08:14:20 crc kubenswrapper[4795]: I1129 08:14:20.065232 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-fjkhc"] Nov 29 08:14:20 crc kubenswrapper[4795]: I1129 08:14:20.078810 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-ef15-account-create-update-w8qbc"] Nov 29 08:14:20 crc kubenswrapper[4795]: I1129 08:14:20.089775 4795 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/cinder-db-create-mlxhw"] Nov 29 08:14:20 crc kubenswrapper[4795]: I1129 08:14:20.105639 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-703d-account-create-update-868mf"] Nov 29 08:14:20 crc kubenswrapper[4795]: I1129 08:14:20.121824 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-5025-account-create-update-jzzfl"] Nov 29 08:14:20 crc kubenswrapper[4795]: I1129 08:14:20.131901 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-rnjzx"] Nov 29 08:14:20 crc kubenswrapper[4795]: I1129 08:14:20.141626 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-96b1-account-create-update-2xx9v"] Nov 29 08:14:20 crc kubenswrapper[4795]: I1129 08:14:20.151113 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-cf9lf"] Nov 29 08:14:20 crc kubenswrapper[4795]: I1129 08:14:20.160471 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-96b1-account-create-update-2xx9v"] Nov 29 08:14:20 crc kubenswrapper[4795]: I1129 08:14:20.170389 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-703d-account-create-update-868mf"] Nov 29 08:14:20 crc kubenswrapper[4795]: I1129 08:14:20.185809 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-rnjzx"] Nov 29 08:14:20 crc kubenswrapper[4795]: I1129 08:14:20.200507 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-5025-account-create-update-jzzfl"] Nov 29 08:14:20 crc kubenswrapper[4795]: I1129 08:14:20.212579 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-fjkhc"] Nov 29 08:14:20 crc kubenswrapper[4795]: I1129 08:14:20.223934 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-ef15-account-create-update-w8qbc"] Nov 29 08:14:20 crc kubenswrapper[4795]: I1129 08:14:20.233930 4795 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-mlxhw"] Nov 29 08:14:20 crc kubenswrapper[4795]: I1129 08:14:20.246961 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-cf9lf"] Nov 29 08:14:20 crc kubenswrapper[4795]: I1129 08:14:20.291298 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b92fb37-6b8f-45de-92a6-c488e9c3f40b" path="/var/lib/kubelet/pods/2b92fb37-6b8f-45de-92a6-c488e9c3f40b/volumes" Nov 29 08:14:20 crc kubenswrapper[4795]: I1129 08:14:20.292874 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="307b1106-2c28-463f-8843-e2d397bf999a" path="/var/lib/kubelet/pods/307b1106-2c28-463f-8843-e2d397bf999a/volumes" Nov 29 08:14:20 crc kubenswrapper[4795]: I1129 08:14:20.293549 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51454db5-2fe5-46bf-b01c-a302030baa3d" path="/var/lib/kubelet/pods/51454db5-2fe5-46bf-b01c-a302030baa3d/volumes" Nov 29 08:14:20 crc kubenswrapper[4795]: I1129 08:14:20.294184 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e81170c-9273-44af-9017-18b86d36e4c9" path="/var/lib/kubelet/pods/5e81170c-9273-44af-9017-18b86d36e4c9/volumes" Nov 29 08:14:20 crc kubenswrapper[4795]: I1129 08:14:20.294922 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="727203c8-6d15-404c-8744-8308e5c7ced8" path="/var/lib/kubelet/pods/727203c8-6d15-404c-8744-8308e5c7ced8/volumes" Nov 29 08:14:20 crc kubenswrapper[4795]: I1129 08:14:20.296044 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="debbb091-39b4-4d01-a468-7f9c7b65ff7e" path="/var/lib/kubelet/pods/debbb091-39b4-4d01-a468-7f9c7b65ff7e/volumes" Nov 29 08:14:20 crc kubenswrapper[4795]: I1129 08:14:20.296718 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f94ad4e7-724c-43f6-bf7d-a50cb51229d3" path="/var/lib/kubelet/pods/f94ad4e7-724c-43f6-bf7d-a50cb51229d3/volumes" Nov 29 08:14:20 
crc kubenswrapper[4795]: I1129 08:14:20.297618 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb3e5f06-dfee-4af3-bbe8-38b5d3272220" path="/var/lib/kubelet/pods/fb3e5f06-dfee-4af3-bbe8-38b5d3272220/volumes" Nov 29 08:14:26 crc kubenswrapper[4795]: I1129 08:14:26.043099 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-j9j74"] Nov 29 08:14:26 crc kubenswrapper[4795]: I1129 08:14:26.058542 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-j9j74"] Nov 29 08:14:26 crc kubenswrapper[4795]: I1129 08:14:26.288478 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="605a5a60-d8c8-4128-954a-236dd7cbf8c4" path="/var/lib/kubelet/pods/605a5a60-d8c8-4128-954a-236dd7cbf8c4/volumes" Nov 29 08:14:28 crc kubenswrapper[4795]: I1129 08:14:28.897007 4795 scope.go:117] "RemoveContainer" containerID="def7f5910d1730a6e43214202d8cc16682f75f488b67a4b0fe92039d7c6bf132" Nov 29 08:14:28 crc kubenswrapper[4795]: I1129 08:14:28.921719 4795 scope.go:117] "RemoveContainer" containerID="bd23731ee68d0e60c214898a4938a553356052e90c097ba674e7e37fadb1b958" Nov 29 08:14:28 crc kubenswrapper[4795]: I1129 08:14:28.951898 4795 scope.go:117] "RemoveContainer" containerID="1622293259e24b7de444593ff2fcb10780ecf4f31c9605db98ea4019f5844129" Nov 29 08:14:28 crc kubenswrapper[4795]: I1129 08:14:28.978567 4795 scope.go:117] "RemoveContainer" containerID="19d21fbe15d942ec51a6123b7dfc4f063e1d9ab55ac5d146ba1be1639fd2e4e0" Nov 29 08:14:29 crc kubenswrapper[4795]: I1129 08:14:29.031940 4795 scope.go:117] "RemoveContainer" containerID="bed7b166cf6e44a1c1af812c4b22eb1bbedab6085744da5c8c38cd3ec0cd3a07" Nov 29 08:14:29 crc kubenswrapper[4795]: I1129 08:14:29.057329 4795 scope.go:117] "RemoveContainer" containerID="9c07469073a6d62e6f3d1ce4dc2c82fba1c46f96c4c46c031740f2b80b8681b9" Nov 29 08:14:29 crc kubenswrapper[4795]: I1129 08:14:29.340877 4795 scope.go:117] "RemoveContainer" 
containerID="f698ae92f9fee52d2ebd1c980cb6346e95539bc1c5d41c27cd57a201cf1950e5" Nov 29 08:14:29 crc kubenswrapper[4795]: I1129 08:14:29.366390 4795 scope.go:117] "RemoveContainer" containerID="7db55f65bc85a7cb487176c0a91de0a6ae7f421913f6c6e20ab01f4a0051fbcf" Nov 29 08:14:29 crc kubenswrapper[4795]: I1129 08:14:29.421982 4795 scope.go:117] "RemoveContainer" containerID="255f116de412296e32effbc55abbb6efb78da338a670a0f386d8190004713006" Nov 29 08:14:29 crc kubenswrapper[4795]: I1129 08:14:29.482035 4795 scope.go:117] "RemoveContainer" containerID="9fe0bb7cbb4235ad426038d0c24f8f524f0d79c2d4ad74d050f8a2838af82749" Nov 29 08:14:29 crc kubenswrapper[4795]: I1129 08:14:29.530487 4795 scope.go:117] "RemoveContainer" containerID="5aa1acd3e95e74c117d71df94c0ff912c2790e1dfc1afa7994c3c1fe083e2937" Nov 29 08:14:29 crc kubenswrapper[4795]: I1129 08:14:29.566320 4795 scope.go:117] "RemoveContainer" containerID="b9f07bb09ffcd836b0c31bbb418f75982da975a7bd60a41fabb99fd46a767f33" Nov 29 08:14:29 crc kubenswrapper[4795]: I1129 08:14:29.590960 4795 scope.go:117] "RemoveContainer" containerID="e2bfe98b406cf8db2d40fe791d8b9eee06db2c4a7314293a41f4604dee5c24d0" Nov 29 08:14:29 crc kubenswrapper[4795]: I1129 08:14:29.613397 4795 scope.go:117] "RemoveContainer" containerID="e1c4997f5fbdbf8d018b224a5421807f3748de57a871648f2813f4b14a3ce231" Nov 29 08:14:29 crc kubenswrapper[4795]: I1129 08:14:29.634828 4795 scope.go:117] "RemoveContainer" containerID="9940228c7b583513058642c80d5f172a1a560532de3c2d4e1fcdc65230125de5" Nov 29 08:14:29 crc kubenswrapper[4795]: I1129 08:14:29.681123 4795 scope.go:117] "RemoveContainer" containerID="b88fd5677e30ef7e944d5c161813b169ab1cb077b0f034bf277cbbc4f3525b02" Nov 29 08:14:29 crc kubenswrapper[4795]: I1129 08:14:29.713627 4795 scope.go:117] "RemoveContainer" containerID="99528692e0224a85c4c4c7b0fad44e9b31c922a5af2675a2687db64700fbe1be" Nov 29 08:14:29 crc kubenswrapper[4795]: I1129 08:14:29.747076 4795 scope.go:117] "RemoveContainer" 
containerID="9e4c6d901bad313d8ed61f0a899303ff524b3f193979e2ded8a07c8b2a277a31" Nov 29 08:14:29 crc kubenswrapper[4795]: I1129 08:14:29.775117 4795 scope.go:117] "RemoveContainer" containerID="02d3bccc3eb069db5cdb13679eddeeaf3fcc70c21b41a2a903130dac092b2cff" Nov 29 08:14:29 crc kubenswrapper[4795]: I1129 08:14:29.796123 4795 scope.go:117] "RemoveContainer" containerID="4ae6b555a501b4ab2dee9dca62b036c42c21222fa517f7b4c7ddb0016932ff38" Nov 29 08:14:29 crc kubenswrapper[4795]: I1129 08:14:29.823226 4795 scope.go:117] "RemoveContainer" containerID="9e60543aaebc0572ec63143a19c0096bd61764185fad4a545ee6604404fb8a7c" Nov 29 08:14:29 crc kubenswrapper[4795]: I1129 08:14:29.851967 4795 scope.go:117] "RemoveContainer" containerID="44e6bc8df10e28299654b6189eee348f9f4e28760fe42c995ae4595d76551e9e" Nov 29 08:14:29 crc kubenswrapper[4795]: I1129 08:14:29.876330 4795 scope.go:117] "RemoveContainer" containerID="483fdeba8bc0eaec0069cd90617ef89cd7b89f192c0180faab24864d98a4821a" Nov 29 08:14:29 crc kubenswrapper[4795]: I1129 08:14:29.904834 4795 scope.go:117] "RemoveContainer" containerID="d65f024389ef1d5aa608e4a8fb1087518135f5cd8f20e05e297a5bfff0cf1121" Nov 29 08:14:29 crc kubenswrapper[4795]: I1129 08:14:29.931504 4795 scope.go:117] "RemoveContainer" containerID="e57f367be37ab0f422bcbee87b20ed2949e33d3c7f2045b418f6cbc6111149e6" Nov 29 08:14:29 crc kubenswrapper[4795]: I1129 08:14:29.967370 4795 scope.go:117] "RemoveContainer" containerID="89faf16d30ab41549b3c7ea9c3acd604b4ea207322bbdcfaffd169636199cb74" Nov 29 08:14:30 crc kubenswrapper[4795]: I1129 08:14:30.058775 4795 scope.go:117] "RemoveContainer" containerID="4269373f1eb543f8207df3d61d66c9cbd6d73556e63520f1ae2fa6f3bc0a0df6" Nov 29 08:14:30 crc kubenswrapper[4795]: I1129 08:14:30.083237 4795 scope.go:117] "RemoveContainer" containerID="cd27ea5a5bb3ed01e2f88d07f745b42f0ca2cffd325cbf3f25eb938838f94781" Nov 29 08:14:30 crc kubenswrapper[4795]: I1129 08:14:30.135979 4795 scope.go:117] "RemoveContainer" 
containerID="9689f21b7f18fe8e3a851ee3ba063b06971fff9cf95cbeade85d635cd8b0c26b" Nov 29 08:14:30 crc kubenswrapper[4795]: I1129 08:14:30.155728 4795 scope.go:117] "RemoveContainer" containerID="2f63923573ab34048b911384ddf736793b908e5486529890d64dbd2d8ef51bbe" Nov 29 08:15:00 crc kubenswrapper[4795]: I1129 08:15:00.154619 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406735-mf8f9"] Nov 29 08:15:00 crc kubenswrapper[4795]: E1129 08:15:00.155914 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c01d062-6304-480c-a957-63313b52599a" containerName="extract-content" Nov 29 08:15:00 crc kubenswrapper[4795]: I1129 08:15:00.155931 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c01d062-6304-480c-a957-63313b52599a" containerName="extract-content" Nov 29 08:15:00 crc kubenswrapper[4795]: E1129 08:15:00.155952 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c01d062-6304-480c-a957-63313b52599a" containerName="registry-server" Nov 29 08:15:00 crc kubenswrapper[4795]: I1129 08:15:00.155960 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c01d062-6304-480c-a957-63313b52599a" containerName="registry-server" Nov 29 08:15:00 crc kubenswrapper[4795]: E1129 08:15:00.155989 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08402fb0-7ff1-492e-8d2b-423e248fb787" containerName="registry-server" Nov 29 08:15:00 crc kubenswrapper[4795]: I1129 08:15:00.156024 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="08402fb0-7ff1-492e-8d2b-423e248fb787" containerName="registry-server" Nov 29 08:15:00 crc kubenswrapper[4795]: E1129 08:15:00.156055 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c01d062-6304-480c-a957-63313b52599a" containerName="extract-utilities" Nov 29 08:15:00 crc kubenswrapper[4795]: I1129 08:15:00.156064 4795 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5c01d062-6304-480c-a957-63313b52599a" containerName="extract-utilities" Nov 29 08:15:00 crc kubenswrapper[4795]: E1129 08:15:00.156084 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08402fb0-7ff1-492e-8d2b-423e248fb787" containerName="extract-utilities" Nov 29 08:15:00 crc kubenswrapper[4795]: I1129 08:15:00.156092 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="08402fb0-7ff1-492e-8d2b-423e248fb787" containerName="extract-utilities" Nov 29 08:15:00 crc kubenswrapper[4795]: E1129 08:15:00.156107 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08402fb0-7ff1-492e-8d2b-423e248fb787" containerName="extract-content" Nov 29 08:15:00 crc kubenswrapper[4795]: I1129 08:15:00.156114 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="08402fb0-7ff1-492e-8d2b-423e248fb787" containerName="extract-content" Nov 29 08:15:00 crc kubenswrapper[4795]: I1129 08:15:00.156396 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c01d062-6304-480c-a957-63313b52599a" containerName="registry-server" Nov 29 08:15:00 crc kubenswrapper[4795]: I1129 08:15:00.156433 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="08402fb0-7ff1-492e-8d2b-423e248fb787" containerName="registry-server" Nov 29 08:15:00 crc kubenswrapper[4795]: I1129 08:15:00.157569 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-mf8f9" Nov 29 08:15:00 crc kubenswrapper[4795]: I1129 08:15:00.159720 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 29 08:15:00 crc kubenswrapper[4795]: I1129 08:15:00.160124 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 29 08:15:00 crc kubenswrapper[4795]: I1129 08:15:00.166216 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406735-mf8f9"] Nov 29 08:15:00 crc kubenswrapper[4795]: I1129 08:15:00.288903 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/075c29f6-7d79-44c2-9cbd-9ac3e3460f6e-config-volume\") pod \"collect-profiles-29406735-mf8f9\" (UID: \"075c29f6-7d79-44c2-9cbd-9ac3e3460f6e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-mf8f9" Nov 29 08:15:00 crc kubenswrapper[4795]: I1129 08:15:00.289001 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47glw\" (UniqueName: \"kubernetes.io/projected/075c29f6-7d79-44c2-9cbd-9ac3e3460f6e-kube-api-access-47glw\") pod \"collect-profiles-29406735-mf8f9\" (UID: \"075c29f6-7d79-44c2-9cbd-9ac3e3460f6e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-mf8f9" Nov 29 08:15:00 crc kubenswrapper[4795]: I1129 08:15:00.289167 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/075c29f6-7d79-44c2-9cbd-9ac3e3460f6e-secret-volume\") pod \"collect-profiles-29406735-mf8f9\" (UID: \"075c29f6-7d79-44c2-9cbd-9ac3e3460f6e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-mf8f9" Nov 29 08:15:00 crc kubenswrapper[4795]: I1129 08:15:00.391490 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/075c29f6-7d79-44c2-9cbd-9ac3e3460f6e-secret-volume\") pod \"collect-profiles-29406735-mf8f9\" (UID: \"075c29f6-7d79-44c2-9cbd-9ac3e3460f6e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-mf8f9" Nov 29 08:15:00 crc kubenswrapper[4795]: I1129 08:15:00.391927 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/075c29f6-7d79-44c2-9cbd-9ac3e3460f6e-config-volume\") pod \"collect-profiles-29406735-mf8f9\" (UID: \"075c29f6-7d79-44c2-9cbd-9ac3e3460f6e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-mf8f9" Nov 29 08:15:00 crc kubenswrapper[4795]: I1129 08:15:00.392010 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47glw\" (UniqueName: \"kubernetes.io/projected/075c29f6-7d79-44c2-9cbd-9ac3e3460f6e-kube-api-access-47glw\") pod \"collect-profiles-29406735-mf8f9\" (UID: \"075c29f6-7d79-44c2-9cbd-9ac3e3460f6e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-mf8f9" Nov 29 08:15:00 crc kubenswrapper[4795]: I1129 08:15:00.393122 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/075c29f6-7d79-44c2-9cbd-9ac3e3460f6e-config-volume\") pod \"collect-profiles-29406735-mf8f9\" (UID: \"075c29f6-7d79-44c2-9cbd-9ac3e3460f6e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-mf8f9" Nov 29 08:15:00 crc kubenswrapper[4795]: I1129 08:15:00.399555 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/075c29f6-7d79-44c2-9cbd-9ac3e3460f6e-secret-volume\") pod \"collect-profiles-29406735-mf8f9\" (UID: \"075c29f6-7d79-44c2-9cbd-9ac3e3460f6e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-mf8f9" Nov 29 08:15:00 crc kubenswrapper[4795]: I1129 08:15:00.414981 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47glw\" (UniqueName: \"kubernetes.io/projected/075c29f6-7d79-44c2-9cbd-9ac3e3460f6e-kube-api-access-47glw\") pod \"collect-profiles-29406735-mf8f9\" (UID: \"075c29f6-7d79-44c2-9cbd-9ac3e3460f6e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-mf8f9" Nov 29 08:15:00 crc kubenswrapper[4795]: I1129 08:15:00.481069 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-mf8f9" Nov 29 08:15:00 crc kubenswrapper[4795]: I1129 08:15:00.979005 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406735-mf8f9"] Nov 29 08:15:01 crc kubenswrapper[4795]: I1129 08:15:01.604032 4795 generic.go:334] "Generic (PLEG): container finished" podID="075c29f6-7d79-44c2-9cbd-9ac3e3460f6e" containerID="8b67c7f215ca55272f6d267461cd58ceeaf7d51dc51945e27110925c48476133" exitCode=0 Nov 29 08:15:01 crc kubenswrapper[4795]: I1129 08:15:01.604189 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-mf8f9" event={"ID":"075c29f6-7d79-44c2-9cbd-9ac3e3460f6e","Type":"ContainerDied","Data":"8b67c7f215ca55272f6d267461cd58ceeaf7d51dc51945e27110925c48476133"} Nov 29 08:15:01 crc kubenswrapper[4795]: I1129 08:15:01.604281 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-mf8f9" 
event={"ID":"075c29f6-7d79-44c2-9cbd-9ac3e3460f6e","Type":"ContainerStarted","Data":"1d362ef5f2e7ddef94ba016d643e017a8c03b526c90654b95809eda76a21ab45"} Nov 29 08:15:03 crc kubenswrapper[4795]: I1129 08:15:03.104828 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-mf8f9" Nov 29 08:15:03 crc kubenswrapper[4795]: I1129 08:15:03.261480 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/075c29f6-7d79-44c2-9cbd-9ac3e3460f6e-secret-volume\") pod \"075c29f6-7d79-44c2-9cbd-9ac3e3460f6e\" (UID: \"075c29f6-7d79-44c2-9cbd-9ac3e3460f6e\") " Nov 29 08:15:03 crc kubenswrapper[4795]: I1129 08:15:03.261931 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/075c29f6-7d79-44c2-9cbd-9ac3e3460f6e-config-volume\") pod \"075c29f6-7d79-44c2-9cbd-9ac3e3460f6e\" (UID: \"075c29f6-7d79-44c2-9cbd-9ac3e3460f6e\") " Nov 29 08:15:03 crc kubenswrapper[4795]: I1129 08:15:03.262052 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47glw\" (UniqueName: \"kubernetes.io/projected/075c29f6-7d79-44c2-9cbd-9ac3e3460f6e-kube-api-access-47glw\") pod \"075c29f6-7d79-44c2-9cbd-9ac3e3460f6e\" (UID: \"075c29f6-7d79-44c2-9cbd-9ac3e3460f6e\") " Nov 29 08:15:03 crc kubenswrapper[4795]: I1129 08:15:03.262451 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/075c29f6-7d79-44c2-9cbd-9ac3e3460f6e-config-volume" (OuterVolumeSpecName: "config-volume") pod "075c29f6-7d79-44c2-9cbd-9ac3e3460f6e" (UID: "075c29f6-7d79-44c2-9cbd-9ac3e3460f6e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:15:03 crc kubenswrapper[4795]: I1129 08:15:03.263143 4795 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/075c29f6-7d79-44c2-9cbd-9ac3e3460f6e-config-volume\") on node \"crc\" DevicePath \"\"" Nov 29 08:15:03 crc kubenswrapper[4795]: I1129 08:15:03.268022 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/075c29f6-7d79-44c2-9cbd-9ac3e3460f6e-kube-api-access-47glw" (OuterVolumeSpecName: "kube-api-access-47glw") pod "075c29f6-7d79-44c2-9cbd-9ac3e3460f6e" (UID: "075c29f6-7d79-44c2-9cbd-9ac3e3460f6e"). InnerVolumeSpecName "kube-api-access-47glw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:15:03 crc kubenswrapper[4795]: I1129 08:15:03.274823 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/075c29f6-7d79-44c2-9cbd-9ac3e3460f6e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "075c29f6-7d79-44c2-9cbd-9ac3e3460f6e" (UID: "075c29f6-7d79-44c2-9cbd-9ac3e3460f6e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:15:03 crc kubenswrapper[4795]: I1129 08:15:03.366154 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47glw\" (UniqueName: \"kubernetes.io/projected/075c29f6-7d79-44c2-9cbd-9ac3e3460f6e-kube-api-access-47glw\") on node \"crc\" DevicePath \"\"" Nov 29 08:15:03 crc kubenswrapper[4795]: I1129 08:15:03.366231 4795 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/075c29f6-7d79-44c2-9cbd-9ac3e3460f6e-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 29 08:15:03 crc kubenswrapper[4795]: I1129 08:15:03.624604 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-mf8f9" event={"ID":"075c29f6-7d79-44c2-9cbd-9ac3e3460f6e","Type":"ContainerDied","Data":"1d362ef5f2e7ddef94ba016d643e017a8c03b526c90654b95809eda76a21ab45"} Nov 29 08:15:03 crc kubenswrapper[4795]: I1129 08:15:03.624651 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d362ef5f2e7ddef94ba016d643e017a8c03b526c90654b95809eda76a21ab45" Nov 29 08:15:03 crc kubenswrapper[4795]: I1129 08:15:03.624658 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-mf8f9" Nov 29 08:15:04 crc kubenswrapper[4795]: I1129 08:15:04.212376 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406690-dtj9t"] Nov 29 08:15:04 crc kubenswrapper[4795]: I1129 08:15:04.227711 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406690-dtj9t"] Nov 29 08:15:04 crc kubenswrapper[4795]: I1129 08:15:04.292479 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f25efaa-5792-4a51-ba83-e8733af29fdf" path="/var/lib/kubelet/pods/4f25efaa-5792-4a51-ba83-e8733af29fdf/volumes" Nov 29 08:15:26 crc kubenswrapper[4795]: I1129 08:15:26.035618 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-7lg28"] Nov 29 08:15:26 crc kubenswrapper[4795]: I1129 08:15:26.045243 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-7lg28"] Nov 29 08:15:26 crc kubenswrapper[4795]: I1129 08:15:26.288491 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="082a7f2c-1081-4af8-91c8-60a13d787746" path="/var/lib/kubelet/pods/082a7f2c-1081-4af8-91c8-60a13d787746/volumes" Nov 29 08:15:27 crc kubenswrapper[4795]: I1129 08:15:27.077829 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-z6klw"] Nov 29 08:15:27 crc kubenswrapper[4795]: I1129 08:15:27.092965 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-z6klw"] Nov 29 08:15:28 crc kubenswrapper[4795]: I1129 08:15:28.308241 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0c3141e-5bf7-49c4-80fd-2f0d0b361491" path="/var/lib/kubelet/pods/c0c3141e-5bf7-49c4-80fd-2f0d0b361491/volumes" Nov 29 08:15:30 crc kubenswrapper[4795]: I1129 08:15:30.620791 4795 scope.go:117] "RemoveContainer" 
containerID="245d605dd8455419c5b160025a93f2e21c118a545505ae76282d573abadf30be" Nov 29 08:15:30 crc kubenswrapper[4795]: I1129 08:15:30.645812 4795 scope.go:117] "RemoveContainer" containerID="394aa5c2b264af45bd9815054a03cda1553ea3317bcd26facfc2c583ad2145f2" Nov 29 08:15:30 crc kubenswrapper[4795]: I1129 08:15:30.724953 4795 scope.go:117] "RemoveContainer" containerID="5e2c76259e18576139e5a2976c5246e5f5e801f18ac9837de36eef93473c7277" Nov 29 08:15:39 crc kubenswrapper[4795]: I1129 08:15:39.055293 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-jf2ds"] Nov 29 08:15:39 crc kubenswrapper[4795]: I1129 08:15:39.073172 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-jf2ds"] Nov 29 08:15:40 crc kubenswrapper[4795]: I1129 08:15:40.289124 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20682aa9-7d99-41f5-9214-2c08cb1533ec" path="/var/lib/kubelet/pods/20682aa9-7d99-41f5-9214-2c08cb1533ec/volumes" Nov 29 08:15:48 crc kubenswrapper[4795]: I1129 08:15:48.042435 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-48c6t"] Nov 29 08:15:48 crc kubenswrapper[4795]: I1129 08:15:48.055793 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-48c6t"] Nov 29 08:15:48 crc kubenswrapper[4795]: I1129 08:15:48.289653 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f63e39cf-970a-40df-a823-d1e60521e702" path="/var/lib/kubelet/pods/f63e39cf-970a-40df-a823-d1e60521e702/volumes" Nov 29 08:15:49 crc kubenswrapper[4795]: I1129 08:15:49.877048 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d9gvz"] Nov 29 08:15:49 crc kubenswrapper[4795]: E1129 08:15:49.878262 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="075c29f6-7d79-44c2-9cbd-9ac3e3460f6e" containerName="collect-profiles" Nov 29 08:15:49 crc kubenswrapper[4795]: I1129 
08:15:49.878280 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="075c29f6-7d79-44c2-9cbd-9ac3e3460f6e" containerName="collect-profiles" Nov 29 08:15:49 crc kubenswrapper[4795]: I1129 08:15:49.878647 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="075c29f6-7d79-44c2-9cbd-9ac3e3460f6e" containerName="collect-profiles" Nov 29 08:15:49 crc kubenswrapper[4795]: I1129 08:15:49.880663 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d9gvz" Nov 29 08:15:49 crc kubenswrapper[4795]: I1129 08:15:49.892774 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d9gvz"] Nov 29 08:15:50 crc kubenswrapper[4795]: I1129 08:15:50.036744 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39710449-d9f1-4524-897f-feabf1bfc81f-catalog-content\") pod \"certified-operators-d9gvz\" (UID: \"39710449-d9f1-4524-897f-feabf1bfc81f\") " pod="openshift-marketplace/certified-operators-d9gvz" Nov 29 08:15:50 crc kubenswrapper[4795]: I1129 08:15:50.037093 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxq2h\" (UniqueName: \"kubernetes.io/projected/39710449-d9f1-4524-897f-feabf1bfc81f-kube-api-access-xxq2h\") pod \"certified-operators-d9gvz\" (UID: \"39710449-d9f1-4524-897f-feabf1bfc81f\") " pod="openshift-marketplace/certified-operators-d9gvz" Nov 29 08:15:50 crc kubenswrapper[4795]: I1129 08:15:50.037485 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39710449-d9f1-4524-897f-feabf1bfc81f-utilities\") pod \"certified-operators-d9gvz\" (UID: \"39710449-d9f1-4524-897f-feabf1bfc81f\") " pod="openshift-marketplace/certified-operators-d9gvz" Nov 29 08:15:50 crc 
kubenswrapper[4795]: I1129 08:15:50.140225 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39710449-d9f1-4524-897f-feabf1bfc81f-utilities\") pod \"certified-operators-d9gvz\" (UID: \"39710449-d9f1-4524-897f-feabf1bfc81f\") " pod="openshift-marketplace/certified-operators-d9gvz" Nov 29 08:15:50 crc kubenswrapper[4795]: I1129 08:15:50.140386 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39710449-d9f1-4524-897f-feabf1bfc81f-catalog-content\") pod \"certified-operators-d9gvz\" (UID: \"39710449-d9f1-4524-897f-feabf1bfc81f\") " pod="openshift-marketplace/certified-operators-d9gvz" Nov 29 08:15:50 crc kubenswrapper[4795]: I1129 08:15:50.140417 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxq2h\" (UniqueName: \"kubernetes.io/projected/39710449-d9f1-4524-897f-feabf1bfc81f-kube-api-access-xxq2h\") pod \"certified-operators-d9gvz\" (UID: \"39710449-d9f1-4524-897f-feabf1bfc81f\") " pod="openshift-marketplace/certified-operators-d9gvz" Nov 29 08:15:50 crc kubenswrapper[4795]: I1129 08:15:50.140684 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39710449-d9f1-4524-897f-feabf1bfc81f-utilities\") pod \"certified-operators-d9gvz\" (UID: \"39710449-d9f1-4524-897f-feabf1bfc81f\") " pod="openshift-marketplace/certified-operators-d9gvz" Nov 29 08:15:50 crc kubenswrapper[4795]: I1129 08:15:50.140913 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39710449-d9f1-4524-897f-feabf1bfc81f-catalog-content\") pod \"certified-operators-d9gvz\" (UID: \"39710449-d9f1-4524-897f-feabf1bfc81f\") " pod="openshift-marketplace/certified-operators-d9gvz" Nov 29 08:15:50 crc kubenswrapper[4795]: I1129 08:15:50.172988 
4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxq2h\" (UniqueName: \"kubernetes.io/projected/39710449-d9f1-4524-897f-feabf1bfc81f-kube-api-access-xxq2h\") pod \"certified-operators-d9gvz\" (UID: \"39710449-d9f1-4524-897f-feabf1bfc81f\") " pod="openshift-marketplace/certified-operators-d9gvz" Nov 29 08:15:50 crc kubenswrapper[4795]: I1129 08:15:50.203982 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d9gvz" Nov 29 08:15:50 crc kubenswrapper[4795]: I1129 08:15:50.732034 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d9gvz"] Nov 29 08:15:51 crc kubenswrapper[4795]: I1129 08:15:51.146287 4795 generic.go:334] "Generic (PLEG): container finished" podID="39710449-d9f1-4524-897f-feabf1bfc81f" containerID="70786c23073496819a68cc123d2026913931a778df62dac5a5d6c0aa7957e55a" exitCode=0 Nov 29 08:15:51 crc kubenswrapper[4795]: I1129 08:15:51.146485 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9gvz" event={"ID":"39710449-d9f1-4524-897f-feabf1bfc81f","Type":"ContainerDied","Data":"70786c23073496819a68cc123d2026913931a778df62dac5a5d6c0aa7957e55a"} Nov 29 08:15:51 crc kubenswrapper[4795]: I1129 08:15:51.146584 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9gvz" event={"ID":"39710449-d9f1-4524-897f-feabf1bfc81f","Type":"ContainerStarted","Data":"b1d5b3c5b01f22170392f379e44b800ceae8ba30a0a7c6539cff7b4e5a8e10cf"} Nov 29 08:15:52 crc kubenswrapper[4795]: I1129 08:15:52.030430 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-fdl5k"] Nov 29 08:15:52 crc kubenswrapper[4795]: I1129 08:15:52.043685 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-fdl5k"] Nov 29 08:15:52 crc kubenswrapper[4795]: I1129 08:15:52.159536 4795 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9gvz" event={"ID":"39710449-d9f1-4524-897f-feabf1bfc81f","Type":"ContainerStarted","Data":"563ffa4c814b1732e36c18fad5979093497ba1fb2b0ebffb60bf5775cd413dc5"} Nov 29 08:15:52 crc kubenswrapper[4795]: I1129 08:15:52.290855 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6dff0cb-a174-4227-ad82-21a12aee68f5" path="/var/lib/kubelet/pods/f6dff0cb-a174-4227-ad82-21a12aee68f5/volumes" Nov 29 08:15:53 crc kubenswrapper[4795]: I1129 08:15:53.182253 4795 generic.go:334] "Generic (PLEG): container finished" podID="39710449-d9f1-4524-897f-feabf1bfc81f" containerID="563ffa4c814b1732e36c18fad5979093497ba1fb2b0ebffb60bf5775cd413dc5" exitCode=0 Nov 29 08:15:53 crc kubenswrapper[4795]: I1129 08:15:53.182305 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9gvz" event={"ID":"39710449-d9f1-4524-897f-feabf1bfc81f","Type":"ContainerDied","Data":"563ffa4c814b1732e36c18fad5979093497ba1fb2b0ebffb60bf5775cd413dc5"} Nov 29 08:15:54 crc kubenswrapper[4795]: I1129 08:15:54.193085 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9gvz" event={"ID":"39710449-d9f1-4524-897f-feabf1bfc81f","Type":"ContainerStarted","Data":"3a9ebc891b5502cbc0c23c519b25878558fa2bfaa4bd0c75907499a485b75cbe"} Nov 29 08:15:54 crc kubenswrapper[4795]: I1129 08:15:54.224532 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d9gvz" podStartSLOduration=2.751435936 podStartE2EDuration="5.2245138s" podCreationTimestamp="2025-11-29 08:15:49 +0000 UTC" firstStartedPulling="2025-11-29 08:15:51.14843429 +0000 UTC m=+2197.124010080" lastFinishedPulling="2025-11-29 08:15:53.621512154 +0000 UTC m=+2199.597087944" observedRunningTime="2025-11-29 08:15:54.213508008 +0000 UTC m=+2200.189083808" watchObservedRunningTime="2025-11-29 
08:15:54.2245138 +0000 UTC m=+2200.200089590" Nov 29 08:15:56 crc kubenswrapper[4795]: I1129 08:15:56.038136 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-tvhw5"] Nov 29 08:15:56 crc kubenswrapper[4795]: I1129 08:15:56.051931 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-tvhw5"] Nov 29 08:15:56 crc kubenswrapper[4795]: I1129 08:15:56.288958 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1082de8f-47bf-41ac-875f-8d7db0baab7b" path="/var/lib/kubelet/pods/1082de8f-47bf-41ac-875f-8d7db0baab7b/volumes" Nov 29 08:16:00 crc kubenswrapper[4795]: I1129 08:16:00.204991 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d9gvz" Nov 29 08:16:00 crc kubenswrapper[4795]: I1129 08:16:00.205625 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d9gvz" Nov 29 08:16:00 crc kubenswrapper[4795]: I1129 08:16:00.265361 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d9gvz" Nov 29 08:16:00 crc kubenswrapper[4795]: I1129 08:16:00.328140 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d9gvz" Nov 29 08:16:00 crc kubenswrapper[4795]: I1129 08:16:00.523228 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d9gvz"] Nov 29 08:16:02 crc kubenswrapper[4795]: I1129 08:16:02.272263 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d9gvz" podUID="39710449-d9f1-4524-897f-feabf1bfc81f" containerName="registry-server" containerID="cri-o://3a9ebc891b5502cbc0c23c519b25878558fa2bfaa4bd0c75907499a485b75cbe" gracePeriod=2 Nov 29 08:16:02 crc kubenswrapper[4795]: I1129 08:16:02.943723 4795 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d9gvz" Nov 29 08:16:02 crc kubenswrapper[4795]: I1129 08:16:02.985369 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39710449-d9f1-4524-897f-feabf1bfc81f-catalog-content\") pod \"39710449-d9f1-4524-897f-feabf1bfc81f\" (UID: \"39710449-d9f1-4524-897f-feabf1bfc81f\") " Nov 29 08:16:02 crc kubenswrapper[4795]: I1129 08:16:02.985510 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39710449-d9f1-4524-897f-feabf1bfc81f-utilities\") pod \"39710449-d9f1-4524-897f-feabf1bfc81f\" (UID: \"39710449-d9f1-4524-897f-feabf1bfc81f\") " Nov 29 08:16:02 crc kubenswrapper[4795]: I1129 08:16:02.985535 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxq2h\" (UniqueName: \"kubernetes.io/projected/39710449-d9f1-4524-897f-feabf1bfc81f-kube-api-access-xxq2h\") pod \"39710449-d9f1-4524-897f-feabf1bfc81f\" (UID: \"39710449-d9f1-4524-897f-feabf1bfc81f\") " Nov 29 08:16:02 crc kubenswrapper[4795]: I1129 08:16:02.987445 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39710449-d9f1-4524-897f-feabf1bfc81f-utilities" (OuterVolumeSpecName: "utilities") pod "39710449-d9f1-4524-897f-feabf1bfc81f" (UID: "39710449-d9f1-4524-897f-feabf1bfc81f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:16:02 crc kubenswrapper[4795]: I1129 08:16:02.992485 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39710449-d9f1-4524-897f-feabf1bfc81f-kube-api-access-xxq2h" (OuterVolumeSpecName: "kube-api-access-xxq2h") pod "39710449-d9f1-4524-897f-feabf1bfc81f" (UID: "39710449-d9f1-4524-897f-feabf1bfc81f"). 
InnerVolumeSpecName "kube-api-access-xxq2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:16:03 crc kubenswrapper[4795]: I1129 08:16:03.047444 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39710449-d9f1-4524-897f-feabf1bfc81f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "39710449-d9f1-4524-897f-feabf1bfc81f" (UID: "39710449-d9f1-4524-897f-feabf1bfc81f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:16:03 crc kubenswrapper[4795]: I1129 08:16:03.086883 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39710449-d9f1-4524-897f-feabf1bfc81f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:16:03 crc kubenswrapper[4795]: I1129 08:16:03.086936 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39710449-d9f1-4524-897f-feabf1bfc81f-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:16:03 crc kubenswrapper[4795]: I1129 08:16:03.086949 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxq2h\" (UniqueName: \"kubernetes.io/projected/39710449-d9f1-4524-897f-feabf1bfc81f-kube-api-access-xxq2h\") on node \"crc\" DevicePath \"\"" Nov 29 08:16:03 crc kubenswrapper[4795]: I1129 08:16:03.287504 4795 generic.go:334] "Generic (PLEG): container finished" podID="39710449-d9f1-4524-897f-feabf1bfc81f" containerID="3a9ebc891b5502cbc0c23c519b25878558fa2bfaa4bd0c75907499a485b75cbe" exitCode=0 Nov 29 08:16:03 crc kubenswrapper[4795]: I1129 08:16:03.287549 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9gvz" event={"ID":"39710449-d9f1-4524-897f-feabf1bfc81f","Type":"ContainerDied","Data":"3a9ebc891b5502cbc0c23c519b25878558fa2bfaa4bd0c75907499a485b75cbe"} Nov 29 08:16:03 crc kubenswrapper[4795]: I1129 08:16:03.287604 
4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9gvz" event={"ID":"39710449-d9f1-4524-897f-feabf1bfc81f","Type":"ContainerDied","Data":"b1d5b3c5b01f22170392f379e44b800ceae8ba30a0a7c6539cff7b4e5a8e10cf"} Nov 29 08:16:03 crc kubenswrapper[4795]: I1129 08:16:03.287631 4795 scope.go:117] "RemoveContainer" containerID="3a9ebc891b5502cbc0c23c519b25878558fa2bfaa4bd0c75907499a485b75cbe" Nov 29 08:16:03 crc kubenswrapper[4795]: I1129 08:16:03.287629 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d9gvz" Nov 29 08:16:03 crc kubenswrapper[4795]: I1129 08:16:03.331592 4795 scope.go:117] "RemoveContainer" containerID="563ffa4c814b1732e36c18fad5979093497ba1fb2b0ebffb60bf5775cd413dc5" Nov 29 08:16:03 crc kubenswrapper[4795]: I1129 08:16:03.347955 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d9gvz"] Nov 29 08:16:03 crc kubenswrapper[4795]: I1129 08:16:03.360940 4795 scope.go:117] "RemoveContainer" containerID="70786c23073496819a68cc123d2026913931a778df62dac5a5d6c0aa7957e55a" Nov 29 08:16:03 crc kubenswrapper[4795]: I1129 08:16:03.367871 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d9gvz"] Nov 29 08:16:03 crc kubenswrapper[4795]: I1129 08:16:03.424969 4795 scope.go:117] "RemoveContainer" containerID="3a9ebc891b5502cbc0c23c519b25878558fa2bfaa4bd0c75907499a485b75cbe" Nov 29 08:16:03 crc kubenswrapper[4795]: E1129 08:16:03.426080 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a9ebc891b5502cbc0c23c519b25878558fa2bfaa4bd0c75907499a485b75cbe\": container with ID starting with 3a9ebc891b5502cbc0c23c519b25878558fa2bfaa4bd0c75907499a485b75cbe not found: ID does not exist" containerID="3a9ebc891b5502cbc0c23c519b25878558fa2bfaa4bd0c75907499a485b75cbe" Nov 29 08:16:03 
crc kubenswrapper[4795]: I1129 08:16:03.426111 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a9ebc891b5502cbc0c23c519b25878558fa2bfaa4bd0c75907499a485b75cbe"} err="failed to get container status \"3a9ebc891b5502cbc0c23c519b25878558fa2bfaa4bd0c75907499a485b75cbe\": rpc error: code = NotFound desc = could not find container \"3a9ebc891b5502cbc0c23c519b25878558fa2bfaa4bd0c75907499a485b75cbe\": container with ID starting with 3a9ebc891b5502cbc0c23c519b25878558fa2bfaa4bd0c75907499a485b75cbe not found: ID does not exist" Nov 29 08:16:03 crc kubenswrapper[4795]: I1129 08:16:03.426131 4795 scope.go:117] "RemoveContainer" containerID="563ffa4c814b1732e36c18fad5979093497ba1fb2b0ebffb60bf5775cd413dc5" Nov 29 08:16:03 crc kubenswrapper[4795]: E1129 08:16:03.428837 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"563ffa4c814b1732e36c18fad5979093497ba1fb2b0ebffb60bf5775cd413dc5\": container with ID starting with 563ffa4c814b1732e36c18fad5979093497ba1fb2b0ebffb60bf5775cd413dc5 not found: ID does not exist" containerID="563ffa4c814b1732e36c18fad5979093497ba1fb2b0ebffb60bf5775cd413dc5" Nov 29 08:16:03 crc kubenswrapper[4795]: I1129 08:16:03.429037 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"563ffa4c814b1732e36c18fad5979093497ba1fb2b0ebffb60bf5775cd413dc5"} err="failed to get container status \"563ffa4c814b1732e36c18fad5979093497ba1fb2b0ebffb60bf5775cd413dc5\": rpc error: code = NotFound desc = could not find container \"563ffa4c814b1732e36c18fad5979093497ba1fb2b0ebffb60bf5775cd413dc5\": container with ID starting with 563ffa4c814b1732e36c18fad5979093497ba1fb2b0ebffb60bf5775cd413dc5 not found: ID does not exist" Nov 29 08:16:03 crc kubenswrapper[4795]: I1129 08:16:03.429147 4795 scope.go:117] "RemoveContainer" containerID="70786c23073496819a68cc123d2026913931a778df62dac5a5d6c0aa7957e55a" Nov 29 
08:16:03 crc kubenswrapper[4795]: E1129 08:16:03.429476 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70786c23073496819a68cc123d2026913931a778df62dac5a5d6c0aa7957e55a\": container with ID starting with 70786c23073496819a68cc123d2026913931a778df62dac5a5d6c0aa7957e55a not found: ID does not exist" containerID="70786c23073496819a68cc123d2026913931a778df62dac5a5d6c0aa7957e55a" Nov 29 08:16:03 crc kubenswrapper[4795]: I1129 08:16:03.429763 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70786c23073496819a68cc123d2026913931a778df62dac5a5d6c0aa7957e55a"} err="failed to get container status \"70786c23073496819a68cc123d2026913931a778df62dac5a5d6c0aa7957e55a\": rpc error: code = NotFound desc = could not find container \"70786c23073496819a68cc123d2026913931a778df62dac5a5d6c0aa7957e55a\": container with ID starting with 70786c23073496819a68cc123d2026913931a778df62dac5a5d6c0aa7957e55a not found: ID does not exist" Nov 29 08:16:04 crc kubenswrapper[4795]: I1129 08:16:04.294972 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39710449-d9f1-4524-897f-feabf1bfc81f" path="/var/lib/kubelet/pods/39710449-d9f1-4524-897f-feabf1bfc81f/volumes" Nov 29 08:16:06 crc kubenswrapper[4795]: I1129 08:16:06.387242 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-htrx9"] Nov 29 08:16:06 crc kubenswrapper[4795]: E1129 08:16:06.388347 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39710449-d9f1-4524-897f-feabf1bfc81f" containerName="extract-content" Nov 29 08:16:06 crc kubenswrapper[4795]: I1129 08:16:06.388367 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="39710449-d9f1-4524-897f-feabf1bfc81f" containerName="extract-content" Nov 29 08:16:06 crc kubenswrapper[4795]: E1129 08:16:06.388409 4795 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="39710449-d9f1-4524-897f-feabf1bfc81f" containerName="extract-utilities" Nov 29 08:16:06 crc kubenswrapper[4795]: I1129 08:16:06.388416 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="39710449-d9f1-4524-897f-feabf1bfc81f" containerName="extract-utilities" Nov 29 08:16:06 crc kubenswrapper[4795]: E1129 08:16:06.388435 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39710449-d9f1-4524-897f-feabf1bfc81f" containerName="registry-server" Nov 29 08:16:06 crc kubenswrapper[4795]: I1129 08:16:06.388444 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="39710449-d9f1-4524-897f-feabf1bfc81f" containerName="registry-server" Nov 29 08:16:06 crc kubenswrapper[4795]: I1129 08:16:06.388690 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="39710449-d9f1-4524-897f-feabf1bfc81f" containerName="registry-server" Nov 29 08:16:06 crc kubenswrapper[4795]: I1129 08:16:06.390853 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-htrx9" Nov 29 08:16:06 crc kubenswrapper[4795]: I1129 08:16:06.399094 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-htrx9"] Nov 29 08:16:06 crc kubenswrapper[4795]: I1129 08:16:06.572203 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3193cef0-b83c-4139-8816-2261b14220b0-catalog-content\") pod \"redhat-marketplace-htrx9\" (UID: \"3193cef0-b83c-4139-8816-2261b14220b0\") " pod="openshift-marketplace/redhat-marketplace-htrx9" Nov 29 08:16:06 crc kubenswrapper[4795]: I1129 08:16:06.572538 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7w9j\" (UniqueName: \"kubernetes.io/projected/3193cef0-b83c-4139-8816-2261b14220b0-kube-api-access-v7w9j\") pod \"redhat-marketplace-htrx9\" (UID: 
\"3193cef0-b83c-4139-8816-2261b14220b0\") " pod="openshift-marketplace/redhat-marketplace-htrx9" Nov 29 08:16:06 crc kubenswrapper[4795]: I1129 08:16:06.572774 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3193cef0-b83c-4139-8816-2261b14220b0-utilities\") pod \"redhat-marketplace-htrx9\" (UID: \"3193cef0-b83c-4139-8816-2261b14220b0\") " pod="openshift-marketplace/redhat-marketplace-htrx9" Nov 29 08:16:06 crc kubenswrapper[4795]: I1129 08:16:06.674902 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3193cef0-b83c-4139-8816-2261b14220b0-utilities\") pod \"redhat-marketplace-htrx9\" (UID: \"3193cef0-b83c-4139-8816-2261b14220b0\") " pod="openshift-marketplace/redhat-marketplace-htrx9" Nov 29 08:16:06 crc kubenswrapper[4795]: I1129 08:16:06.675696 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3193cef0-b83c-4139-8816-2261b14220b0-catalog-content\") pod \"redhat-marketplace-htrx9\" (UID: \"3193cef0-b83c-4139-8816-2261b14220b0\") " pod="openshift-marketplace/redhat-marketplace-htrx9" Nov 29 08:16:06 crc kubenswrapper[4795]: I1129 08:16:06.676044 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7w9j\" (UniqueName: \"kubernetes.io/projected/3193cef0-b83c-4139-8816-2261b14220b0-kube-api-access-v7w9j\") pod \"redhat-marketplace-htrx9\" (UID: \"3193cef0-b83c-4139-8816-2261b14220b0\") " pod="openshift-marketplace/redhat-marketplace-htrx9" Nov 29 08:16:06 crc kubenswrapper[4795]: I1129 08:16:06.675997 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3193cef0-b83c-4139-8816-2261b14220b0-catalog-content\") pod \"redhat-marketplace-htrx9\" (UID: 
\"3193cef0-b83c-4139-8816-2261b14220b0\") " pod="openshift-marketplace/redhat-marketplace-htrx9" Nov 29 08:16:06 crc kubenswrapper[4795]: I1129 08:16:06.675486 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3193cef0-b83c-4139-8816-2261b14220b0-utilities\") pod \"redhat-marketplace-htrx9\" (UID: \"3193cef0-b83c-4139-8816-2261b14220b0\") " pod="openshift-marketplace/redhat-marketplace-htrx9" Nov 29 08:16:06 crc kubenswrapper[4795]: I1129 08:16:06.709829 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7w9j\" (UniqueName: \"kubernetes.io/projected/3193cef0-b83c-4139-8816-2261b14220b0-kube-api-access-v7w9j\") pod \"redhat-marketplace-htrx9\" (UID: \"3193cef0-b83c-4139-8816-2261b14220b0\") " pod="openshift-marketplace/redhat-marketplace-htrx9" Nov 29 08:16:06 crc kubenswrapper[4795]: I1129 08:16:06.723147 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-htrx9" Nov 29 08:16:07 crc kubenswrapper[4795]: I1129 08:16:07.333387 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-htrx9"] Nov 29 08:16:07 crc kubenswrapper[4795]: W1129 08:16:07.337624 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3193cef0_b83c_4139_8816_2261b14220b0.slice/crio-f512fbf5856d0f8d98091054c66cc3f054b6bfea74eeff0e2897e15b8ee56186 WatchSource:0}: Error finding container f512fbf5856d0f8d98091054c66cc3f054b6bfea74eeff0e2897e15b8ee56186: Status 404 returned error can't find the container with id f512fbf5856d0f8d98091054c66cc3f054b6bfea74eeff0e2897e15b8ee56186 Nov 29 08:16:07 crc kubenswrapper[4795]: I1129 08:16:07.353564 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-htrx9" 
event={"ID":"3193cef0-b83c-4139-8816-2261b14220b0","Type":"ContainerStarted","Data":"f512fbf5856d0f8d98091054c66cc3f054b6bfea74eeff0e2897e15b8ee56186"} Nov 29 08:16:08 crc kubenswrapper[4795]: I1129 08:16:08.367877 4795 generic.go:334] "Generic (PLEG): container finished" podID="3193cef0-b83c-4139-8816-2261b14220b0" containerID="00bdad793d01e190e4532756f834f4e9fe8f8f4ad3a87833b250608a771691e5" exitCode=0 Nov 29 08:16:08 crc kubenswrapper[4795]: I1129 08:16:08.367945 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-htrx9" event={"ID":"3193cef0-b83c-4139-8816-2261b14220b0","Type":"ContainerDied","Data":"00bdad793d01e190e4532756f834f4e9fe8f8f4ad3a87833b250608a771691e5"} Nov 29 08:16:10 crc kubenswrapper[4795]: I1129 08:16:10.390637 4795 generic.go:334] "Generic (PLEG): container finished" podID="3193cef0-b83c-4139-8816-2261b14220b0" containerID="4d7701c283f6031fb5fdae236d9bd8291a83bd85c5d8165632f28ce01b40e0c4" exitCode=0 Nov 29 08:16:10 crc kubenswrapper[4795]: I1129 08:16:10.390722 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-htrx9" event={"ID":"3193cef0-b83c-4139-8816-2261b14220b0","Type":"ContainerDied","Data":"4d7701c283f6031fb5fdae236d9bd8291a83bd85c5d8165632f28ce01b40e0c4"} Nov 29 08:16:11 crc kubenswrapper[4795]: I1129 08:16:11.405440 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-htrx9" event={"ID":"3193cef0-b83c-4139-8816-2261b14220b0","Type":"ContainerStarted","Data":"8a768f51fcd19137a3a5a64c72f5c0617a752cd0e0a8f860266c701432f436d3"} Nov 29 08:16:11 crc kubenswrapper[4795]: I1129 08:16:11.437923 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-htrx9" podStartSLOduration=2.992621026 podStartE2EDuration="5.437887431s" podCreationTimestamp="2025-11-29 08:16:06 +0000 UTC" firstStartedPulling="2025-11-29 08:16:08.369802279 +0000 UTC 
m=+2214.345378069" lastFinishedPulling="2025-11-29 08:16:10.815068684 +0000 UTC m=+2216.790644474" observedRunningTime="2025-11-29 08:16:11.424260525 +0000 UTC m=+2217.399836315" watchObservedRunningTime="2025-11-29 08:16:11.437887431 +0000 UTC m=+2217.413463211" Nov 29 08:16:11 crc kubenswrapper[4795]: I1129 08:16:11.941371 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:16:11 crc kubenswrapper[4795]: I1129 08:16:11.941435 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:16:16 crc kubenswrapper[4795]: I1129 08:16:16.724839 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-htrx9" Nov 29 08:16:16 crc kubenswrapper[4795]: I1129 08:16:16.725382 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-htrx9" Nov 29 08:16:16 crc kubenswrapper[4795]: I1129 08:16:16.773318 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-htrx9" Nov 29 08:16:17 crc kubenswrapper[4795]: I1129 08:16:17.522841 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-htrx9" Nov 29 08:16:17 crc kubenswrapper[4795]: I1129 08:16:17.597391 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-htrx9"] Nov 29 08:16:19 crc kubenswrapper[4795]: I1129 
08:16:19.498351 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-htrx9" podUID="3193cef0-b83c-4139-8816-2261b14220b0" containerName="registry-server" containerID="cri-o://8a768f51fcd19137a3a5a64c72f5c0617a752cd0e0a8f860266c701432f436d3" gracePeriod=2 Nov 29 08:16:20 crc kubenswrapper[4795]: I1129 08:16:20.028142 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-htrx9" Nov 29 08:16:20 crc kubenswrapper[4795]: I1129 08:16:20.070987 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3193cef0-b83c-4139-8816-2261b14220b0-utilities\") pod \"3193cef0-b83c-4139-8816-2261b14220b0\" (UID: \"3193cef0-b83c-4139-8816-2261b14220b0\") " Nov 29 08:16:20 crc kubenswrapper[4795]: I1129 08:16:20.071100 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3193cef0-b83c-4139-8816-2261b14220b0-catalog-content\") pod \"3193cef0-b83c-4139-8816-2261b14220b0\" (UID: \"3193cef0-b83c-4139-8816-2261b14220b0\") " Nov 29 08:16:20 crc kubenswrapper[4795]: I1129 08:16:20.071362 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7w9j\" (UniqueName: \"kubernetes.io/projected/3193cef0-b83c-4139-8816-2261b14220b0-kube-api-access-v7w9j\") pod \"3193cef0-b83c-4139-8816-2261b14220b0\" (UID: \"3193cef0-b83c-4139-8816-2261b14220b0\") " Nov 29 08:16:20 crc kubenswrapper[4795]: I1129 08:16:20.074362 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3193cef0-b83c-4139-8816-2261b14220b0-utilities" (OuterVolumeSpecName: "utilities") pod "3193cef0-b83c-4139-8816-2261b14220b0" (UID: "3193cef0-b83c-4139-8816-2261b14220b0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:16:20 crc kubenswrapper[4795]: I1129 08:16:20.079480 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3193cef0-b83c-4139-8816-2261b14220b0-kube-api-access-v7w9j" (OuterVolumeSpecName: "kube-api-access-v7w9j") pod "3193cef0-b83c-4139-8816-2261b14220b0" (UID: "3193cef0-b83c-4139-8816-2261b14220b0"). InnerVolumeSpecName "kube-api-access-v7w9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:16:20 crc kubenswrapper[4795]: I1129 08:16:20.090891 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3193cef0-b83c-4139-8816-2261b14220b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3193cef0-b83c-4139-8816-2261b14220b0" (UID: "3193cef0-b83c-4139-8816-2261b14220b0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:16:20 crc kubenswrapper[4795]: I1129 08:16:20.173973 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3193cef0-b83c-4139-8816-2261b14220b0-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:16:20 crc kubenswrapper[4795]: I1129 08:16:20.174011 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3193cef0-b83c-4139-8816-2261b14220b0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:16:20 crc kubenswrapper[4795]: I1129 08:16:20.174023 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7w9j\" (UniqueName: \"kubernetes.io/projected/3193cef0-b83c-4139-8816-2261b14220b0-kube-api-access-v7w9j\") on node \"crc\" DevicePath \"\"" Nov 29 08:16:20 crc kubenswrapper[4795]: I1129 08:16:20.529394 4795 generic.go:334] "Generic (PLEG): container finished" podID="3193cef0-b83c-4139-8816-2261b14220b0" 
containerID="8a768f51fcd19137a3a5a64c72f5c0617a752cd0e0a8f860266c701432f436d3" exitCode=0 Nov 29 08:16:20 crc kubenswrapper[4795]: I1129 08:16:20.529823 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-htrx9" event={"ID":"3193cef0-b83c-4139-8816-2261b14220b0","Type":"ContainerDied","Data":"8a768f51fcd19137a3a5a64c72f5c0617a752cd0e0a8f860266c701432f436d3"} Nov 29 08:16:20 crc kubenswrapper[4795]: I1129 08:16:20.529857 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-htrx9" event={"ID":"3193cef0-b83c-4139-8816-2261b14220b0","Type":"ContainerDied","Data":"f512fbf5856d0f8d98091054c66cc3f054b6bfea74eeff0e2897e15b8ee56186"} Nov 29 08:16:20 crc kubenswrapper[4795]: I1129 08:16:20.529889 4795 scope.go:117] "RemoveContainer" containerID="8a768f51fcd19137a3a5a64c72f5c0617a752cd0e0a8f860266c701432f436d3" Nov 29 08:16:20 crc kubenswrapper[4795]: I1129 08:16:20.530134 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-htrx9" Nov 29 08:16:20 crc kubenswrapper[4795]: I1129 08:16:20.535298 4795 generic.go:334] "Generic (PLEG): container finished" podID="5db55adf-c067-44de-ad20-4b8a138e2576" containerID="4ce631b12fc130cf2b50f8afc58258b7e4c0adcf4e10ba6a199459e7837e0601" exitCode=0 Nov 29 08:16:20 crc kubenswrapper[4795]: I1129 08:16:20.535463 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-glkt9" event={"ID":"5db55adf-c067-44de-ad20-4b8a138e2576","Type":"ContainerDied","Data":"4ce631b12fc130cf2b50f8afc58258b7e4c0adcf4e10ba6a199459e7837e0601"} Nov 29 08:16:20 crc kubenswrapper[4795]: I1129 08:16:20.575855 4795 scope.go:117] "RemoveContainer" containerID="4d7701c283f6031fb5fdae236d9bd8291a83bd85c5d8165632f28ce01b40e0c4" Nov 29 08:16:20 crc kubenswrapper[4795]: I1129 08:16:20.582701 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-htrx9"] Nov 29 08:16:20 crc kubenswrapper[4795]: I1129 08:16:20.593262 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-htrx9"] Nov 29 08:16:20 crc kubenswrapper[4795]: I1129 08:16:20.603798 4795 scope.go:117] "RemoveContainer" containerID="00bdad793d01e190e4532756f834f4e9fe8f8f4ad3a87833b250608a771691e5" Nov 29 08:16:20 crc kubenswrapper[4795]: I1129 08:16:20.656860 4795 scope.go:117] "RemoveContainer" containerID="8a768f51fcd19137a3a5a64c72f5c0617a752cd0e0a8f860266c701432f436d3" Nov 29 08:16:20 crc kubenswrapper[4795]: E1129 08:16:20.657437 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a768f51fcd19137a3a5a64c72f5c0617a752cd0e0a8f860266c701432f436d3\": container with ID starting with 8a768f51fcd19137a3a5a64c72f5c0617a752cd0e0a8f860266c701432f436d3 not found: ID does not exist" 
containerID="8a768f51fcd19137a3a5a64c72f5c0617a752cd0e0a8f860266c701432f436d3" Nov 29 08:16:20 crc kubenswrapper[4795]: I1129 08:16:20.657484 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a768f51fcd19137a3a5a64c72f5c0617a752cd0e0a8f860266c701432f436d3"} err="failed to get container status \"8a768f51fcd19137a3a5a64c72f5c0617a752cd0e0a8f860266c701432f436d3\": rpc error: code = NotFound desc = could not find container \"8a768f51fcd19137a3a5a64c72f5c0617a752cd0e0a8f860266c701432f436d3\": container with ID starting with 8a768f51fcd19137a3a5a64c72f5c0617a752cd0e0a8f860266c701432f436d3 not found: ID does not exist" Nov 29 08:16:20 crc kubenswrapper[4795]: I1129 08:16:20.657506 4795 scope.go:117] "RemoveContainer" containerID="4d7701c283f6031fb5fdae236d9bd8291a83bd85c5d8165632f28ce01b40e0c4" Nov 29 08:16:20 crc kubenswrapper[4795]: E1129 08:16:20.657887 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d7701c283f6031fb5fdae236d9bd8291a83bd85c5d8165632f28ce01b40e0c4\": container with ID starting with 4d7701c283f6031fb5fdae236d9bd8291a83bd85c5d8165632f28ce01b40e0c4 not found: ID does not exist" containerID="4d7701c283f6031fb5fdae236d9bd8291a83bd85c5d8165632f28ce01b40e0c4" Nov 29 08:16:20 crc kubenswrapper[4795]: I1129 08:16:20.657919 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d7701c283f6031fb5fdae236d9bd8291a83bd85c5d8165632f28ce01b40e0c4"} err="failed to get container status \"4d7701c283f6031fb5fdae236d9bd8291a83bd85c5d8165632f28ce01b40e0c4\": rpc error: code = NotFound desc = could not find container \"4d7701c283f6031fb5fdae236d9bd8291a83bd85c5d8165632f28ce01b40e0c4\": container with ID starting with 4d7701c283f6031fb5fdae236d9bd8291a83bd85c5d8165632f28ce01b40e0c4 not found: ID does not exist" Nov 29 08:16:20 crc kubenswrapper[4795]: I1129 08:16:20.657935 4795 scope.go:117] 
"RemoveContainer" containerID="00bdad793d01e190e4532756f834f4e9fe8f8f4ad3a87833b250608a771691e5" Nov 29 08:16:20 crc kubenswrapper[4795]: E1129 08:16:20.658346 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00bdad793d01e190e4532756f834f4e9fe8f8f4ad3a87833b250608a771691e5\": container with ID starting with 00bdad793d01e190e4532756f834f4e9fe8f8f4ad3a87833b250608a771691e5 not found: ID does not exist" containerID="00bdad793d01e190e4532756f834f4e9fe8f8f4ad3a87833b250608a771691e5" Nov 29 08:16:20 crc kubenswrapper[4795]: I1129 08:16:20.658406 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00bdad793d01e190e4532756f834f4e9fe8f8f4ad3a87833b250608a771691e5"} err="failed to get container status \"00bdad793d01e190e4532756f834f4e9fe8f8f4ad3a87833b250608a771691e5\": rpc error: code = NotFound desc = could not find container \"00bdad793d01e190e4532756f834f4e9fe8f8f4ad3a87833b250608a771691e5\": container with ID starting with 00bdad793d01e190e4532756f834f4e9fe8f8f4ad3a87833b250608a771691e5 not found: ID does not exist" Nov 29 08:16:22 crc kubenswrapper[4795]: I1129 08:16:22.045540 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-glkt9" Nov 29 08:16:22 crc kubenswrapper[4795]: I1129 08:16:22.235176 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5db55adf-c067-44de-ad20-4b8a138e2576-ssh-key\") pod \"5db55adf-c067-44de-ad20-4b8a138e2576\" (UID: \"5db55adf-c067-44de-ad20-4b8a138e2576\") " Nov 29 08:16:22 crc kubenswrapper[4795]: I1129 08:16:22.235425 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56w9f\" (UniqueName: \"kubernetes.io/projected/5db55adf-c067-44de-ad20-4b8a138e2576-kube-api-access-56w9f\") pod \"5db55adf-c067-44de-ad20-4b8a138e2576\" (UID: \"5db55adf-c067-44de-ad20-4b8a138e2576\") " Nov 29 08:16:22 crc kubenswrapper[4795]: I1129 08:16:22.235473 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5db55adf-c067-44de-ad20-4b8a138e2576-inventory\") pod \"5db55adf-c067-44de-ad20-4b8a138e2576\" (UID: \"5db55adf-c067-44de-ad20-4b8a138e2576\") " Nov 29 08:16:22 crc kubenswrapper[4795]: I1129 08:16:22.235657 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db55adf-c067-44de-ad20-4b8a138e2576-bootstrap-combined-ca-bundle\") pod \"5db55adf-c067-44de-ad20-4b8a138e2576\" (UID: \"5db55adf-c067-44de-ad20-4b8a138e2576\") " Nov 29 08:16:22 crc kubenswrapper[4795]: I1129 08:16:22.243946 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5db55adf-c067-44de-ad20-4b8a138e2576-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "5db55adf-c067-44de-ad20-4b8a138e2576" (UID: "5db55adf-c067-44de-ad20-4b8a138e2576"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:16:22 crc kubenswrapper[4795]: I1129 08:16:22.250660 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5db55adf-c067-44de-ad20-4b8a138e2576-kube-api-access-56w9f" (OuterVolumeSpecName: "kube-api-access-56w9f") pod "5db55adf-c067-44de-ad20-4b8a138e2576" (UID: "5db55adf-c067-44de-ad20-4b8a138e2576"). InnerVolumeSpecName "kube-api-access-56w9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:16:22 crc kubenswrapper[4795]: I1129 08:16:22.296723 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5db55adf-c067-44de-ad20-4b8a138e2576-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5db55adf-c067-44de-ad20-4b8a138e2576" (UID: "5db55adf-c067-44de-ad20-4b8a138e2576"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:16:22 crc kubenswrapper[4795]: I1129 08:16:22.301399 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3193cef0-b83c-4139-8816-2261b14220b0" path="/var/lib/kubelet/pods/3193cef0-b83c-4139-8816-2261b14220b0/volumes" Nov 29 08:16:22 crc kubenswrapper[4795]: I1129 08:16:22.338257 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56w9f\" (UniqueName: \"kubernetes.io/projected/5db55adf-c067-44de-ad20-4b8a138e2576-kube-api-access-56w9f\") on node \"crc\" DevicePath \"\"" Nov 29 08:16:22 crc kubenswrapper[4795]: I1129 08:16:22.338490 4795 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db55adf-c067-44de-ad20-4b8a138e2576-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:16:22 crc kubenswrapper[4795]: I1129 08:16:22.338559 4795 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5db55adf-c067-44de-ad20-4b8a138e2576-ssh-key\") on node \"crc\" 
DevicePath \"\"" Nov 29 08:16:22 crc kubenswrapper[4795]: I1129 08:16:22.349624 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5db55adf-c067-44de-ad20-4b8a138e2576-inventory" (OuterVolumeSpecName: "inventory") pod "5db55adf-c067-44de-ad20-4b8a138e2576" (UID: "5db55adf-c067-44de-ad20-4b8a138e2576"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:16:22 crc kubenswrapper[4795]: I1129 08:16:22.440851 4795 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5db55adf-c067-44de-ad20-4b8a138e2576-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 08:16:22 crc kubenswrapper[4795]: I1129 08:16:22.559925 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-glkt9" event={"ID":"5db55adf-c067-44de-ad20-4b8a138e2576","Type":"ContainerDied","Data":"e83e94831fe832852bd8a302ff6414eb1273aa5a327fb801bd440b5466edef86"} Nov 29 08:16:22 crc kubenswrapper[4795]: I1129 08:16:22.559970 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e83e94831fe832852bd8a302ff6414eb1273aa5a327fb801bd440b5466edef86" Nov 29 08:16:22 crc kubenswrapper[4795]: I1129 08:16:22.560021 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-glkt9" Nov 29 08:16:22 crc kubenswrapper[4795]: I1129 08:16:22.643422 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lvqcr"] Nov 29 08:16:22 crc kubenswrapper[4795]: E1129 08:16:22.643976 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3193cef0-b83c-4139-8816-2261b14220b0" containerName="extract-utilities" Nov 29 08:16:22 crc kubenswrapper[4795]: I1129 08:16:22.644021 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="3193cef0-b83c-4139-8816-2261b14220b0" containerName="extract-utilities" Nov 29 08:16:22 crc kubenswrapper[4795]: E1129 08:16:22.644069 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3193cef0-b83c-4139-8816-2261b14220b0" containerName="extract-content" Nov 29 08:16:22 crc kubenswrapper[4795]: I1129 08:16:22.644078 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="3193cef0-b83c-4139-8816-2261b14220b0" containerName="extract-content" Nov 29 08:16:22 crc kubenswrapper[4795]: E1129 08:16:22.644102 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3193cef0-b83c-4139-8816-2261b14220b0" containerName="registry-server" Nov 29 08:16:22 crc kubenswrapper[4795]: I1129 08:16:22.644109 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="3193cef0-b83c-4139-8816-2261b14220b0" containerName="registry-server" Nov 29 08:16:22 crc kubenswrapper[4795]: E1129 08:16:22.644129 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db55adf-c067-44de-ad20-4b8a138e2576" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 29 08:16:22 crc kubenswrapper[4795]: I1129 08:16:22.644139 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db55adf-c067-44de-ad20-4b8a138e2576" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 29 08:16:22 crc kubenswrapper[4795]: I1129 08:16:22.644407 
4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="3193cef0-b83c-4139-8816-2261b14220b0" containerName="registry-server" Nov 29 08:16:22 crc kubenswrapper[4795]: I1129 08:16:22.644439 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="5db55adf-c067-44de-ad20-4b8a138e2576" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 29 08:16:22 crc kubenswrapper[4795]: I1129 08:16:22.645456 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lvqcr" Nov 29 08:16:22 crc kubenswrapper[4795]: I1129 08:16:22.648275 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 08:16:22 crc kubenswrapper[4795]: I1129 08:16:22.648398 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 08:16:22 crc kubenswrapper[4795]: I1129 08:16:22.648496 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nfhsg" Nov 29 08:16:22 crc kubenswrapper[4795]: I1129 08:16:22.649310 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 08:16:22 crc kubenswrapper[4795]: I1129 08:16:22.659288 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lvqcr"] Nov 29 08:16:22 crc kubenswrapper[4795]: I1129 08:16:22.851806 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ce2a356f-1605-4fb0-ae3c-a40094296d8f-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-lvqcr\" (UID: \"ce2a356f-1605-4fb0-ae3c-a40094296d8f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lvqcr" Nov 29 08:16:22 crc kubenswrapper[4795]: I1129 
08:16:22.852498 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffvx7\" (UniqueName: \"kubernetes.io/projected/ce2a356f-1605-4fb0-ae3c-a40094296d8f-kube-api-access-ffvx7\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-lvqcr\" (UID: \"ce2a356f-1605-4fb0-ae3c-a40094296d8f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lvqcr" Nov 29 08:16:22 crc kubenswrapper[4795]: I1129 08:16:22.852546 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce2a356f-1605-4fb0-ae3c-a40094296d8f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-lvqcr\" (UID: \"ce2a356f-1605-4fb0-ae3c-a40094296d8f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lvqcr" Nov 29 08:16:22 crc kubenswrapper[4795]: I1129 08:16:22.955419 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ce2a356f-1605-4fb0-ae3c-a40094296d8f-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-lvqcr\" (UID: \"ce2a356f-1605-4fb0-ae3c-a40094296d8f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lvqcr" Nov 29 08:16:22 crc kubenswrapper[4795]: I1129 08:16:22.955623 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffvx7\" (UniqueName: \"kubernetes.io/projected/ce2a356f-1605-4fb0-ae3c-a40094296d8f-kube-api-access-ffvx7\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-lvqcr\" (UID: \"ce2a356f-1605-4fb0-ae3c-a40094296d8f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lvqcr" Nov 29 08:16:22 crc kubenswrapper[4795]: I1129 08:16:22.955665 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/ce2a356f-1605-4fb0-ae3c-a40094296d8f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-lvqcr\" (UID: \"ce2a356f-1605-4fb0-ae3c-a40094296d8f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lvqcr" Nov 29 08:16:22 crc kubenswrapper[4795]: I1129 08:16:22.959138 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ce2a356f-1605-4fb0-ae3c-a40094296d8f-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-lvqcr\" (UID: \"ce2a356f-1605-4fb0-ae3c-a40094296d8f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lvqcr" Nov 29 08:16:22 crc kubenswrapper[4795]: I1129 08:16:22.961736 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce2a356f-1605-4fb0-ae3c-a40094296d8f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-lvqcr\" (UID: \"ce2a356f-1605-4fb0-ae3c-a40094296d8f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lvqcr" Nov 29 08:16:22 crc kubenswrapper[4795]: I1129 08:16:22.972149 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffvx7\" (UniqueName: \"kubernetes.io/projected/ce2a356f-1605-4fb0-ae3c-a40094296d8f-kube-api-access-ffvx7\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-lvqcr\" (UID: \"ce2a356f-1605-4fb0-ae3c-a40094296d8f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lvqcr" Nov 29 08:16:22 crc kubenswrapper[4795]: I1129 08:16:22.974741 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lvqcr" Nov 29 08:16:23 crc kubenswrapper[4795]: I1129 08:16:23.532936 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lvqcr"] Nov 29 08:16:23 crc kubenswrapper[4795]: W1129 08:16:23.533865 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce2a356f_1605_4fb0_ae3c_a40094296d8f.slice/crio-4fbe120c292e5695e35e42b1e856dd108283aa538f7dce3e6a3d9cca26c068a2 WatchSource:0}: Error finding container 4fbe120c292e5695e35e42b1e856dd108283aa538f7dce3e6a3d9cca26c068a2: Status 404 returned error can't find the container with id 4fbe120c292e5695e35e42b1e856dd108283aa538f7dce3e6a3d9cca26c068a2 Nov 29 08:16:23 crc kubenswrapper[4795]: I1129 08:16:23.571129 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lvqcr" event={"ID":"ce2a356f-1605-4fb0-ae3c-a40094296d8f","Type":"ContainerStarted","Data":"4fbe120c292e5695e35e42b1e856dd108283aa538f7dce3e6a3d9cca26c068a2"} Nov 29 08:16:24 crc kubenswrapper[4795]: I1129 08:16:24.590381 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lvqcr" event={"ID":"ce2a356f-1605-4fb0-ae3c-a40094296d8f","Type":"ContainerStarted","Data":"338bafba797e981572d263e848fce73a8114468cb5e264a3bd4f86acc9419387"} Nov 29 08:16:24 crc kubenswrapper[4795]: I1129 08:16:24.611941 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lvqcr" podStartSLOduration=1.893220587 podStartE2EDuration="2.611921713s" podCreationTimestamp="2025-11-29 08:16:22 +0000 UTC" firstStartedPulling="2025-11-29 08:16:23.535950588 +0000 UTC m=+2229.511526388" lastFinishedPulling="2025-11-29 08:16:24.254651724 +0000 UTC 
m=+2230.230227514" observedRunningTime="2025-11-29 08:16:24.60405845 +0000 UTC m=+2230.579634240" watchObservedRunningTime="2025-11-29 08:16:24.611921713 +0000 UTC m=+2230.587497523" Nov 29 08:16:30 crc kubenswrapper[4795]: I1129 08:16:30.860865 4795 scope.go:117] "RemoveContainer" containerID="d6b9b5cfd126d42217de35750ef89b86614c5ee2c94ea412aacbe1440b81ae8d" Nov 29 08:16:30 crc kubenswrapper[4795]: I1129 08:16:30.906379 4795 scope.go:117] "RemoveContainer" containerID="3b2c7e91906b4f2c34ee265e94e43b0d265ed328aa7d59bcd1d6ea5a6694e59d" Nov 29 08:16:30 crc kubenswrapper[4795]: I1129 08:16:30.970154 4795 scope.go:117] "RemoveContainer" containerID="bfb19c9ce44284ee4afdf4be4136a2041c2065b1c9beb5337b6bb10989b8dd0c" Nov 29 08:16:31 crc kubenswrapper[4795]: I1129 08:16:31.053886 4795 scope.go:117] "RemoveContainer" containerID="8d764fe84bb5442819e04f7a14c354fa2a8f9df310b70eb300cff428ab13603f" Nov 29 08:16:41 crc kubenswrapper[4795]: I1129 08:16:41.941682 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:16:41 crc kubenswrapper[4795]: I1129 08:16:41.942364 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:17:08 crc kubenswrapper[4795]: I1129 08:17:08.049803 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-k678c"] Nov 29 08:17:08 crc kubenswrapper[4795]: I1129 08:17:08.064402 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-s4hqz"] Nov 29 08:17:08 crc 
kubenswrapper[4795]: I1129 08:17:08.078055 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-s4hqz"] Nov 29 08:17:08 crc kubenswrapper[4795]: I1129 08:17:08.088582 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-9cfa-account-create-update-dm475"] Nov 29 08:17:08 crc kubenswrapper[4795]: I1129 08:17:08.098251 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-k678c"] Nov 29 08:17:08 crc kubenswrapper[4795]: I1129 08:17:08.108892 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-9cfa-account-create-update-dm475"] Nov 29 08:17:08 crc kubenswrapper[4795]: I1129 08:17:08.288863 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ec0c903-aae0-4c34-959f-15ec09782b09" path="/var/lib/kubelet/pods/2ec0c903-aae0-4c34-959f-15ec09782b09/volumes" Nov 29 08:17:08 crc kubenswrapper[4795]: I1129 08:17:08.290158 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f24d558-d6ea-42f4-9147-1eca4481dcff" path="/var/lib/kubelet/pods/9f24d558-d6ea-42f4-9147-1eca4481dcff/volumes" Nov 29 08:17:08 crc kubenswrapper[4795]: I1129 08:17:08.290898 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e56ba08f-a9c8-49ae-b4e8-d9cd70d63fb5" path="/var/lib/kubelet/pods/e56ba08f-a9c8-49ae-b4e8-d9cd70d63fb5/volumes" Nov 29 08:17:10 crc kubenswrapper[4795]: I1129 08:17:10.039907 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-15a1-account-create-update-7f9hk"] Nov 29 08:17:10 crc kubenswrapper[4795]: I1129 08:17:10.057579 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-rgwdg"] Nov 29 08:17:10 crc kubenswrapper[4795]: I1129 08:17:10.069152 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-15a1-account-create-update-7f9hk"] Nov 29 08:17:10 crc kubenswrapper[4795]: I1129 08:17:10.081027 4795 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-8ada-account-create-update-4lg24"] Nov 29 08:17:10 crc kubenswrapper[4795]: I1129 08:17:10.092189 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-rgwdg"] Nov 29 08:17:10 crc kubenswrapper[4795]: I1129 08:17:10.101960 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-8ada-account-create-update-4lg24"] Nov 29 08:17:10 crc kubenswrapper[4795]: I1129 08:17:10.294720 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="430ba7a8-1572-4074-8597-8a94c3c2c8a0" path="/var/lib/kubelet/pods/430ba7a8-1572-4074-8597-8a94c3c2c8a0/volumes" Nov 29 08:17:10 crc kubenswrapper[4795]: I1129 08:17:10.296250 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43275a86-fba2-41f6-b98c-c57c65e9c0c0" path="/var/lib/kubelet/pods/43275a86-fba2-41f6-b98c-c57c65e9c0c0/volumes" Nov 29 08:17:10 crc kubenswrapper[4795]: I1129 08:17:10.298332 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b93bb162-1cd1-4a20-888d-dd92a1affbd2" path="/var/lib/kubelet/pods/b93bb162-1cd1-4a20-888d-dd92a1affbd2/volumes" Nov 29 08:17:11 crc kubenswrapper[4795]: I1129 08:17:11.941754 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:17:11 crc kubenswrapper[4795]: I1129 08:17:11.942117 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:17:11 crc kubenswrapper[4795]: I1129 
08:17:11.942171 4795 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" Nov 29 08:17:11 crc kubenswrapper[4795]: I1129 08:17:11.942839 4795 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fb7eb9a38d35ff5b0e7cfded64550978b04bdbcec3194cda206b101003d9d236"} pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 08:17:11 crc kubenswrapper[4795]: I1129 08:17:11.942896 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" containerID="cri-o://fb7eb9a38d35ff5b0e7cfded64550978b04bdbcec3194cda206b101003d9d236" gracePeriod=600 Nov 29 08:17:12 crc kubenswrapper[4795]: I1129 08:17:12.170558 4795 generic.go:334] "Generic (PLEG): container finished" podID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerID="fb7eb9a38d35ff5b0e7cfded64550978b04bdbcec3194cda206b101003d9d236" exitCode=0 Nov 29 08:17:12 crc kubenswrapper[4795]: I1129 08:17:12.170630 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerDied","Data":"fb7eb9a38d35ff5b0e7cfded64550978b04bdbcec3194cda206b101003d9d236"} Nov 29 08:17:12 crc kubenswrapper[4795]: I1129 08:17:12.171089 4795 scope.go:117] "RemoveContainer" containerID="cb788e622da5bdf804dcfa0c959dfb0158a848bc427ed90337dfd530ea78ea5d" Nov 29 08:17:13 crc kubenswrapper[4795]: I1129 08:17:13.187050 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" 
event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerStarted","Data":"21e10122a79c2c2a3372f86102d58fee1c9be891bd6896b1affd1c2404af16fa"} Nov 29 08:17:31 crc kubenswrapper[4795]: I1129 08:17:31.246625 4795 scope.go:117] "RemoveContainer" containerID="2f314c2f00f704130d6bec48c02535f0225bca6182dab65d3c925da58f5e06fe" Nov 29 08:17:31 crc kubenswrapper[4795]: I1129 08:17:31.281273 4795 scope.go:117] "RemoveContainer" containerID="3ab63eb8c05f443ae46b6e41f773dce0baf7de1a1a65d394697375a9b9e082d2" Nov 29 08:17:31 crc kubenswrapper[4795]: I1129 08:17:31.346498 4795 scope.go:117] "RemoveContainer" containerID="f6c7f03832cc17c96a9456f66abc4c28bf56ecb19380cfc5f5c1fa8e9525dd60" Nov 29 08:17:31 crc kubenswrapper[4795]: I1129 08:17:31.428242 4795 scope.go:117] "RemoveContainer" containerID="7a65042d0d8579c32131625ebcc12743d85c17a62858a3d7110c74d795004b19" Nov 29 08:17:31 crc kubenswrapper[4795]: I1129 08:17:31.505419 4795 scope.go:117] "RemoveContainer" containerID="169ebce03f7347894f6db834751a217a2e3089035fb7e5d2927d468767571f5c" Nov 29 08:17:31 crc kubenswrapper[4795]: I1129 08:17:31.563188 4795 scope.go:117] "RemoveContainer" containerID="3c0bbfc9ccc426e2185cf41fd5a1c1a35b19db0b45c46899f1e1e29d5a3bba5d" Nov 29 08:17:38 crc kubenswrapper[4795]: I1129 08:17:38.059700 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-s4c9j"] Nov 29 08:17:38 crc kubenswrapper[4795]: I1129 08:17:38.070219 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-s4c9j"] Nov 29 08:17:38 crc kubenswrapper[4795]: I1129 08:17:38.295128 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13fa8118-f5bc-4c64-95dc-89cbfb601187" path="/var/lib/kubelet/pods/13fa8118-f5bc-4c64-95dc-89cbfb601187/volumes" Nov 29 08:17:41 crc kubenswrapper[4795]: I1129 08:17:41.052865 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-mnks7"] Nov 29 08:17:41 crc 
kubenswrapper[4795]: I1129 08:17:41.070411 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-6a39-account-create-update-q9sbd"] Nov 29 08:17:41 crc kubenswrapper[4795]: I1129 08:17:41.083662 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-mnks7"] Nov 29 08:17:41 crc kubenswrapper[4795]: I1129 08:17:41.093235 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-6a39-account-create-update-q9sbd"] Nov 29 08:17:42 crc kubenswrapper[4795]: I1129 08:17:42.292514 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96ca16b5-5532-4172-bfe8-9154391fa708" path="/var/lib/kubelet/pods/96ca16b5-5532-4172-bfe8-9154391fa708/volumes" Nov 29 08:17:42 crc kubenswrapper[4795]: I1129 08:17:42.293779 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0cb8029-b3e4-4dc3-ae9a-2b181f2c1ce7" path="/var/lib/kubelet/pods/f0cb8029-b3e4-4dc3-ae9a-2b181f2c1ce7/volumes" Nov 29 08:18:06 crc kubenswrapper[4795]: I1129 08:18:06.056095 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-6256x"] Nov 29 08:18:06 crc kubenswrapper[4795]: I1129 08:18:06.070478 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-6256x"] Nov 29 08:18:06 crc kubenswrapper[4795]: I1129 08:18:06.289370 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ac71396-7101-4507-b4cb-577fc1b95a46" path="/var/lib/kubelet/pods/2ac71396-7101-4507-b4cb-577fc1b95a46/volumes" Nov 29 08:18:07 crc kubenswrapper[4795]: I1129 08:18:07.055360 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-wx6m7"] Nov 29 08:18:07 crc kubenswrapper[4795]: I1129 08:18:07.069221 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-wx6m7"] Nov 29 08:18:08 crc kubenswrapper[4795]: I1129 08:18:08.295469 4795 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="7fc47eb1-ee22-476b-92c2-4ccb500fe572" path="/var/lib/kubelet/pods/7fc47eb1-ee22-476b-92c2-4ccb500fe572/volumes" Nov 29 08:18:31 crc kubenswrapper[4795]: I1129 08:18:31.752789 4795 scope.go:117] "RemoveContainer" containerID="695eeba4b7ef1f6f2a60fc9b58be571eebb29d2667b03ecaf4001ceda62eaf88" Nov 29 08:18:31 crc kubenswrapper[4795]: I1129 08:18:31.815260 4795 scope.go:117] "RemoveContainer" containerID="00417553b6a1dd08bedd57806be9526f404052667614675a0a44b8b979c29ead" Nov 29 08:18:31 crc kubenswrapper[4795]: I1129 08:18:31.840023 4795 scope.go:117] "RemoveContainer" containerID="a78eebbeff9ae167dfc2e49f3979bd87615fed98f86430cef8ec881e1a357c34" Nov 29 08:18:31 crc kubenswrapper[4795]: I1129 08:18:31.889849 4795 scope.go:117] "RemoveContainer" containerID="3549bed3fc9cc52d295eb5ae5d381d8c5c0b7b902b430ef59d544e1efe262dfa" Nov 29 08:18:31 crc kubenswrapper[4795]: I1129 08:18:31.964480 4795 scope.go:117] "RemoveContainer" containerID="ac773d0bbaf8eb4a577d844451949beb38fb9ca36039c8be1d79c0b248c06635" Nov 29 08:18:40 crc kubenswrapper[4795]: I1129 08:18:40.302105 4795 generic.go:334] "Generic (PLEG): container finished" podID="ce2a356f-1605-4fb0-ae3c-a40094296d8f" containerID="338bafba797e981572d263e848fce73a8114468cb5e264a3bd4f86acc9419387" exitCode=0 Nov 29 08:18:40 crc kubenswrapper[4795]: I1129 08:18:40.302195 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lvqcr" event={"ID":"ce2a356f-1605-4fb0-ae3c-a40094296d8f","Type":"ContainerDied","Data":"338bafba797e981572d263e848fce73a8114468cb5e264a3bd4f86acc9419387"} Nov 29 08:18:41 crc kubenswrapper[4795]: I1129 08:18:41.852007 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lvqcr" Nov 29 08:18:41 crc kubenswrapper[4795]: I1129 08:18:41.919051 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ce2a356f-1605-4fb0-ae3c-a40094296d8f-ssh-key\") pod \"ce2a356f-1605-4fb0-ae3c-a40094296d8f\" (UID: \"ce2a356f-1605-4fb0-ae3c-a40094296d8f\") " Nov 29 08:18:41 crc kubenswrapper[4795]: I1129 08:18:41.919218 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce2a356f-1605-4fb0-ae3c-a40094296d8f-inventory\") pod \"ce2a356f-1605-4fb0-ae3c-a40094296d8f\" (UID: \"ce2a356f-1605-4fb0-ae3c-a40094296d8f\") " Nov 29 08:18:41 crc kubenswrapper[4795]: I1129 08:18:41.919402 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffvx7\" (UniqueName: \"kubernetes.io/projected/ce2a356f-1605-4fb0-ae3c-a40094296d8f-kube-api-access-ffvx7\") pod \"ce2a356f-1605-4fb0-ae3c-a40094296d8f\" (UID: \"ce2a356f-1605-4fb0-ae3c-a40094296d8f\") " Nov 29 08:18:41 crc kubenswrapper[4795]: I1129 08:18:41.938788 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce2a356f-1605-4fb0-ae3c-a40094296d8f-kube-api-access-ffvx7" (OuterVolumeSpecName: "kube-api-access-ffvx7") pod "ce2a356f-1605-4fb0-ae3c-a40094296d8f" (UID: "ce2a356f-1605-4fb0-ae3c-a40094296d8f"). InnerVolumeSpecName "kube-api-access-ffvx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:18:41 crc kubenswrapper[4795]: I1129 08:18:41.955357 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce2a356f-1605-4fb0-ae3c-a40094296d8f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ce2a356f-1605-4fb0-ae3c-a40094296d8f" (UID: "ce2a356f-1605-4fb0-ae3c-a40094296d8f"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:18:41 crc kubenswrapper[4795]: I1129 08:18:41.963648 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce2a356f-1605-4fb0-ae3c-a40094296d8f-inventory" (OuterVolumeSpecName: "inventory") pod "ce2a356f-1605-4fb0-ae3c-a40094296d8f" (UID: "ce2a356f-1605-4fb0-ae3c-a40094296d8f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:18:42 crc kubenswrapper[4795]: I1129 08:18:42.022778 4795 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ce2a356f-1605-4fb0-ae3c-a40094296d8f-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 08:18:42 crc kubenswrapper[4795]: I1129 08:18:42.022809 4795 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce2a356f-1605-4fb0-ae3c-a40094296d8f-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 08:18:42 crc kubenswrapper[4795]: I1129 08:18:42.022824 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffvx7\" (UniqueName: \"kubernetes.io/projected/ce2a356f-1605-4fb0-ae3c-a40094296d8f-kube-api-access-ffvx7\") on node \"crc\" DevicePath \"\"" Nov 29 08:18:42 crc kubenswrapper[4795]: I1129 08:18:42.326887 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lvqcr" event={"ID":"ce2a356f-1605-4fb0-ae3c-a40094296d8f","Type":"ContainerDied","Data":"4fbe120c292e5695e35e42b1e856dd108283aa538f7dce3e6a3d9cca26c068a2"} Nov 29 08:18:42 crc kubenswrapper[4795]: I1129 08:18:42.326931 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fbe120c292e5695e35e42b1e856dd108283aa538f7dce3e6a3d9cca26c068a2" Nov 29 08:18:42 crc kubenswrapper[4795]: I1129 08:18:42.326985 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lvqcr" Nov 29 08:18:42 crc kubenswrapper[4795]: I1129 08:18:42.405098 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5c5jq"] Nov 29 08:18:42 crc kubenswrapper[4795]: E1129 08:18:42.405707 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce2a356f-1605-4fb0-ae3c-a40094296d8f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 29 08:18:42 crc kubenswrapper[4795]: I1129 08:18:42.405724 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce2a356f-1605-4fb0-ae3c-a40094296d8f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 29 08:18:42 crc kubenswrapper[4795]: I1129 08:18:42.405949 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce2a356f-1605-4fb0-ae3c-a40094296d8f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 29 08:18:42 crc kubenswrapper[4795]: I1129 08:18:42.406777 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5c5jq" Nov 29 08:18:42 crc kubenswrapper[4795]: I1129 08:18:42.412577 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nfhsg" Nov 29 08:18:42 crc kubenswrapper[4795]: I1129 08:18:42.412902 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 08:18:42 crc kubenswrapper[4795]: I1129 08:18:42.413153 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 08:18:42 crc kubenswrapper[4795]: I1129 08:18:42.413331 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 08:18:42 crc kubenswrapper[4795]: I1129 08:18:42.419386 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5c5jq"] Nov 29 08:18:42 crc kubenswrapper[4795]: I1129 08:18:42.562613 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ccffb059-764c-49a4-afd1-356ba3189628-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5c5jq\" (UID: \"ccffb059-764c-49a4-afd1-356ba3189628\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5c5jq" Nov 29 08:18:42 crc kubenswrapper[4795]: I1129 08:18:42.563110 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrrd9\" (UniqueName: \"kubernetes.io/projected/ccffb059-764c-49a4-afd1-356ba3189628-kube-api-access-zrrd9\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5c5jq\" (UID: \"ccffb059-764c-49a4-afd1-356ba3189628\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5c5jq" Nov 29 08:18:42 crc kubenswrapper[4795]: I1129 
08:18:42.563172 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccffb059-764c-49a4-afd1-356ba3189628-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5c5jq\" (UID: \"ccffb059-764c-49a4-afd1-356ba3189628\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5c5jq" Nov 29 08:18:42 crc kubenswrapper[4795]: I1129 08:18:42.666552 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrrd9\" (UniqueName: \"kubernetes.io/projected/ccffb059-764c-49a4-afd1-356ba3189628-kube-api-access-zrrd9\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5c5jq\" (UID: \"ccffb059-764c-49a4-afd1-356ba3189628\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5c5jq" Nov 29 08:18:42 crc kubenswrapper[4795]: I1129 08:18:42.666662 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccffb059-764c-49a4-afd1-356ba3189628-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5c5jq\" (UID: \"ccffb059-764c-49a4-afd1-356ba3189628\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5c5jq" Nov 29 08:18:42 crc kubenswrapper[4795]: I1129 08:18:42.666703 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ccffb059-764c-49a4-afd1-356ba3189628-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5c5jq\" (UID: \"ccffb059-764c-49a4-afd1-356ba3189628\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5c5jq" Nov 29 08:18:42 crc kubenswrapper[4795]: I1129 08:18:42.670576 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ccffb059-764c-49a4-afd1-356ba3189628-ssh-key\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-5c5jq\" (UID: \"ccffb059-764c-49a4-afd1-356ba3189628\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5c5jq" Nov 29 08:18:42 crc kubenswrapper[4795]: I1129 08:18:42.671716 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccffb059-764c-49a4-afd1-356ba3189628-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5c5jq\" (UID: \"ccffb059-764c-49a4-afd1-356ba3189628\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5c5jq" Nov 29 08:18:42 crc kubenswrapper[4795]: I1129 08:18:42.686328 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrrd9\" (UniqueName: \"kubernetes.io/projected/ccffb059-764c-49a4-afd1-356ba3189628-kube-api-access-zrrd9\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5c5jq\" (UID: \"ccffb059-764c-49a4-afd1-356ba3189628\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5c5jq" Nov 29 08:18:42 crc kubenswrapper[4795]: I1129 08:18:42.770788 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5c5jq" Nov 29 08:18:43 crc kubenswrapper[4795]: I1129 08:18:43.488847 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5c5jq"] Nov 29 08:18:43 crc kubenswrapper[4795]: I1129 08:18:43.493415 4795 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 08:18:44 crc kubenswrapper[4795]: I1129 08:18:44.348216 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5c5jq" event={"ID":"ccffb059-764c-49a4-afd1-356ba3189628","Type":"ContainerStarted","Data":"bf98f8232a3113d69368081b7de2896b0fe3993123240d8d6c6dd99e636720f5"} Nov 29 08:18:45 crc kubenswrapper[4795]: I1129 08:18:45.360665 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5c5jq" event={"ID":"ccffb059-764c-49a4-afd1-356ba3189628","Type":"ContainerStarted","Data":"da6d407fc26a538cde541ea527092715df6eed9aaed5108c22f695baf5f9fb2c"} Nov 29 08:18:45 crc kubenswrapper[4795]: I1129 08:18:45.394305 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5c5jq" podStartSLOduration=2.772652848 podStartE2EDuration="3.394280992s" podCreationTimestamp="2025-11-29 08:18:42 +0000 UTC" firstStartedPulling="2025-11-29 08:18:43.493205005 +0000 UTC m=+2369.468780795" lastFinishedPulling="2025-11-29 08:18:44.114833159 +0000 UTC m=+2370.090408939" observedRunningTime="2025-11-29 08:18:45.383586699 +0000 UTC m=+2371.359162509" watchObservedRunningTime="2025-11-29 08:18:45.394280992 +0000 UTC m=+2371.369856812" Nov 29 08:18:49 crc kubenswrapper[4795]: I1129 08:18:49.054789 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-sf8vb"] Nov 29 08:18:49 crc 
kubenswrapper[4795]: I1129 08:18:49.063157 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-sf8vb"] Nov 29 08:18:50 crc kubenswrapper[4795]: I1129 08:18:50.306272 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce709568-6b90-4abf-96e3-bc9369ea9296" path="/var/lib/kubelet/pods/ce709568-6b90-4abf-96e3-bc9369ea9296/volumes" Nov 29 08:19:32 crc kubenswrapper[4795]: I1129 08:19:32.128268 4795 scope.go:117] "RemoveContainer" containerID="1cd9464ccea1d80c37c072f524a63a068d4673551e6e96f3a6303f30daad348a" Nov 29 08:19:41 crc kubenswrapper[4795]: I1129 08:19:41.941543 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:19:41 crc kubenswrapper[4795]: I1129 08:19:41.942757 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:20:00 crc kubenswrapper[4795]: I1129 08:20:00.154874 4795 generic.go:334] "Generic (PLEG): container finished" podID="ccffb059-764c-49a4-afd1-356ba3189628" containerID="da6d407fc26a538cde541ea527092715df6eed9aaed5108c22f695baf5f9fb2c" exitCode=0 Nov 29 08:20:00 crc kubenswrapper[4795]: I1129 08:20:00.155029 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5c5jq" event={"ID":"ccffb059-764c-49a4-afd1-356ba3189628","Type":"ContainerDied","Data":"da6d407fc26a538cde541ea527092715df6eed9aaed5108c22f695baf5f9fb2c"} Nov 29 08:20:01 crc kubenswrapper[4795]: I1129 08:20:01.693973 4795 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5c5jq" Nov 29 08:20:01 crc kubenswrapper[4795]: I1129 08:20:01.789881 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ccffb059-764c-49a4-afd1-356ba3189628-ssh-key\") pod \"ccffb059-764c-49a4-afd1-356ba3189628\" (UID: \"ccffb059-764c-49a4-afd1-356ba3189628\") " Nov 29 08:20:01 crc kubenswrapper[4795]: I1129 08:20:01.790011 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrrd9\" (UniqueName: \"kubernetes.io/projected/ccffb059-764c-49a4-afd1-356ba3189628-kube-api-access-zrrd9\") pod \"ccffb059-764c-49a4-afd1-356ba3189628\" (UID: \"ccffb059-764c-49a4-afd1-356ba3189628\") " Nov 29 08:20:01 crc kubenswrapper[4795]: I1129 08:20:01.790127 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccffb059-764c-49a4-afd1-356ba3189628-inventory\") pod \"ccffb059-764c-49a4-afd1-356ba3189628\" (UID: \"ccffb059-764c-49a4-afd1-356ba3189628\") " Nov 29 08:20:01 crc kubenswrapper[4795]: I1129 08:20:01.795494 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccffb059-764c-49a4-afd1-356ba3189628-kube-api-access-zrrd9" (OuterVolumeSpecName: "kube-api-access-zrrd9") pod "ccffb059-764c-49a4-afd1-356ba3189628" (UID: "ccffb059-764c-49a4-afd1-356ba3189628"). InnerVolumeSpecName "kube-api-access-zrrd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:20:01 crc kubenswrapper[4795]: I1129 08:20:01.828165 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccffb059-764c-49a4-afd1-356ba3189628-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ccffb059-764c-49a4-afd1-356ba3189628" (UID: "ccffb059-764c-49a4-afd1-356ba3189628"). 
InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:20:01 crc kubenswrapper[4795]: I1129 08:20:01.829998 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccffb059-764c-49a4-afd1-356ba3189628-inventory" (OuterVolumeSpecName: "inventory") pod "ccffb059-764c-49a4-afd1-356ba3189628" (UID: "ccffb059-764c-49a4-afd1-356ba3189628"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:20:01 crc kubenswrapper[4795]: I1129 08:20:01.892812 4795 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ccffb059-764c-49a4-afd1-356ba3189628-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 08:20:01 crc kubenswrapper[4795]: I1129 08:20:01.892846 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrrd9\" (UniqueName: \"kubernetes.io/projected/ccffb059-764c-49a4-afd1-356ba3189628-kube-api-access-zrrd9\") on node \"crc\" DevicePath \"\"" Nov 29 08:20:01 crc kubenswrapper[4795]: I1129 08:20:01.892857 4795 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccffb059-764c-49a4-afd1-356ba3189628-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 08:20:02 crc kubenswrapper[4795]: I1129 08:20:02.177937 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5c5jq" event={"ID":"ccffb059-764c-49a4-afd1-356ba3189628","Type":"ContainerDied","Data":"bf98f8232a3113d69368081b7de2896b0fe3993123240d8d6c6dd99e636720f5"} Nov 29 08:20:02 crc kubenswrapper[4795]: I1129 08:20:02.177995 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf98f8232a3113d69368081b7de2896b0fe3993123240d8d6c6dd99e636720f5" Nov 29 08:20:02 crc kubenswrapper[4795]: I1129 08:20:02.178128 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5c5jq" Nov 29 08:20:02 crc kubenswrapper[4795]: I1129 08:20:02.293204 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-k58jj"] Nov 29 08:20:02 crc kubenswrapper[4795]: E1129 08:20:02.293864 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccffb059-764c-49a4-afd1-356ba3189628" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 29 08:20:02 crc kubenswrapper[4795]: I1129 08:20:02.293894 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccffb059-764c-49a4-afd1-356ba3189628" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 29 08:20:02 crc kubenswrapper[4795]: I1129 08:20:02.294175 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccffb059-764c-49a4-afd1-356ba3189628" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 29 08:20:02 crc kubenswrapper[4795]: I1129 08:20:02.294982 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-k58jj" Nov 29 08:20:02 crc kubenswrapper[4795]: I1129 08:20:02.320496 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b8224f42-d933-4b1a-bab0-8f79fa3a5369-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-k58jj\" (UID: \"b8224f42-d933-4b1a-bab0-8f79fa3a5369\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-k58jj" Nov 29 08:20:02 crc kubenswrapper[4795]: I1129 08:20:02.320767 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dfx6\" (UniqueName: \"kubernetes.io/projected/b8224f42-d933-4b1a-bab0-8f79fa3a5369-kube-api-access-5dfx6\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-k58jj\" (UID: \"b8224f42-d933-4b1a-bab0-8f79fa3a5369\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-k58jj" Nov 29 08:20:02 crc kubenswrapper[4795]: I1129 08:20:02.320848 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8224f42-d933-4b1a-bab0-8f79fa3a5369-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-k58jj\" (UID: \"b8224f42-d933-4b1a-bab0-8f79fa3a5369\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-k58jj" Nov 29 08:20:02 crc kubenswrapper[4795]: I1129 08:20:02.327310 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 08:20:02 crc kubenswrapper[4795]: I1129 08:20:02.327539 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nfhsg" Nov 29 08:20:02 crc kubenswrapper[4795]: I1129 08:20:02.327664 4795 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openstack"/"openstack-aee-default-env" Nov 29 08:20:02 crc kubenswrapper[4795]: I1129 08:20:02.327744 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 08:20:02 crc kubenswrapper[4795]: I1129 08:20:02.348876 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-k58jj"] Nov 29 08:20:02 crc kubenswrapper[4795]: I1129 08:20:02.423970 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b8224f42-d933-4b1a-bab0-8f79fa3a5369-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-k58jj\" (UID: \"b8224f42-d933-4b1a-bab0-8f79fa3a5369\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-k58jj" Nov 29 08:20:02 crc kubenswrapper[4795]: I1129 08:20:02.424055 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dfx6\" (UniqueName: \"kubernetes.io/projected/b8224f42-d933-4b1a-bab0-8f79fa3a5369-kube-api-access-5dfx6\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-k58jj\" (UID: \"b8224f42-d933-4b1a-bab0-8f79fa3a5369\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-k58jj" Nov 29 08:20:02 crc kubenswrapper[4795]: I1129 08:20:02.424107 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8224f42-d933-4b1a-bab0-8f79fa3a5369-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-k58jj\" (UID: \"b8224f42-d933-4b1a-bab0-8f79fa3a5369\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-k58jj" Nov 29 08:20:02 crc kubenswrapper[4795]: I1129 08:20:02.430564 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b8224f42-d933-4b1a-bab0-8f79fa3a5369-ssh-key\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-k58jj\" (UID: \"b8224f42-d933-4b1a-bab0-8f79fa3a5369\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-k58jj" Nov 29 08:20:02 crc kubenswrapper[4795]: I1129 08:20:02.430587 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8224f42-d933-4b1a-bab0-8f79fa3a5369-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-k58jj\" (UID: \"b8224f42-d933-4b1a-bab0-8f79fa3a5369\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-k58jj" Nov 29 08:20:02 crc kubenswrapper[4795]: I1129 08:20:02.445881 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dfx6\" (UniqueName: \"kubernetes.io/projected/b8224f42-d933-4b1a-bab0-8f79fa3a5369-kube-api-access-5dfx6\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-k58jj\" (UID: \"b8224f42-d933-4b1a-bab0-8f79fa3a5369\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-k58jj" Nov 29 08:20:02 crc kubenswrapper[4795]: I1129 08:20:02.641053 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-k58jj" Nov 29 08:20:03 crc kubenswrapper[4795]: I1129 08:20:03.187551 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-k58jj"] Nov 29 08:20:04 crc kubenswrapper[4795]: I1129 08:20:04.204646 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-k58jj" event={"ID":"b8224f42-d933-4b1a-bab0-8f79fa3a5369","Type":"ContainerStarted","Data":"dde79ae94094da986f154c0b7c15da5c86a0ae2a1a138abb78928072b25780f5"} Nov 29 08:20:04 crc kubenswrapper[4795]: I1129 08:20:04.204693 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-k58jj" event={"ID":"b8224f42-d933-4b1a-bab0-8f79fa3a5369","Type":"ContainerStarted","Data":"164f4642a9523d8045493ce1dab8a6d85249ddc9800c4ffcf35486c1e4957e1a"} Nov 29 08:20:04 crc kubenswrapper[4795]: I1129 08:20:04.223939 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-k58jj" podStartSLOduration=1.766402577 podStartE2EDuration="2.223920438s" podCreationTimestamp="2025-11-29 08:20:02 +0000 UTC" firstStartedPulling="2025-11-29 08:20:03.187813194 +0000 UTC m=+2449.163388984" lastFinishedPulling="2025-11-29 08:20:03.645331055 +0000 UTC m=+2449.620906845" observedRunningTime="2025-11-29 08:20:04.222342404 +0000 UTC m=+2450.197918194" watchObservedRunningTime="2025-11-29 08:20:04.223920438 +0000 UTC m=+2450.199496228" Nov 29 08:20:09 crc kubenswrapper[4795]: I1129 08:20:09.265678 4795 generic.go:334] "Generic (PLEG): container finished" podID="b8224f42-d933-4b1a-bab0-8f79fa3a5369" containerID="dde79ae94094da986f154c0b7c15da5c86a0ae2a1a138abb78928072b25780f5" exitCode=0 Nov 29 08:20:09 crc kubenswrapper[4795]: I1129 08:20:09.265786 4795 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-k58jj" event={"ID":"b8224f42-d933-4b1a-bab0-8f79fa3a5369","Type":"ContainerDied","Data":"dde79ae94094da986f154c0b7c15da5c86a0ae2a1a138abb78928072b25780f5"} Nov 29 08:20:10 crc kubenswrapper[4795]: I1129 08:20:10.831821 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-k58jj" Nov 29 08:20:10 crc kubenswrapper[4795]: I1129 08:20:10.937677 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8224f42-d933-4b1a-bab0-8f79fa3a5369-inventory\") pod \"b8224f42-d933-4b1a-bab0-8f79fa3a5369\" (UID: \"b8224f42-d933-4b1a-bab0-8f79fa3a5369\") " Nov 29 08:20:10 crc kubenswrapper[4795]: I1129 08:20:10.937903 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b8224f42-d933-4b1a-bab0-8f79fa3a5369-ssh-key\") pod \"b8224f42-d933-4b1a-bab0-8f79fa3a5369\" (UID: \"b8224f42-d933-4b1a-bab0-8f79fa3a5369\") " Nov 29 08:20:10 crc kubenswrapper[4795]: I1129 08:20:10.937934 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dfx6\" (UniqueName: \"kubernetes.io/projected/b8224f42-d933-4b1a-bab0-8f79fa3a5369-kube-api-access-5dfx6\") pod \"b8224f42-d933-4b1a-bab0-8f79fa3a5369\" (UID: \"b8224f42-d933-4b1a-bab0-8f79fa3a5369\") " Nov 29 08:20:10 crc kubenswrapper[4795]: I1129 08:20:10.948500 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8224f42-d933-4b1a-bab0-8f79fa3a5369-kube-api-access-5dfx6" (OuterVolumeSpecName: "kube-api-access-5dfx6") pod "b8224f42-d933-4b1a-bab0-8f79fa3a5369" (UID: "b8224f42-d933-4b1a-bab0-8f79fa3a5369"). InnerVolumeSpecName "kube-api-access-5dfx6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:20:10 crc kubenswrapper[4795]: I1129 08:20:10.971141 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8224f42-d933-4b1a-bab0-8f79fa3a5369-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b8224f42-d933-4b1a-bab0-8f79fa3a5369" (UID: "b8224f42-d933-4b1a-bab0-8f79fa3a5369"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:20:10 crc kubenswrapper[4795]: I1129 08:20:10.984843 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8224f42-d933-4b1a-bab0-8f79fa3a5369-inventory" (OuterVolumeSpecName: "inventory") pod "b8224f42-d933-4b1a-bab0-8f79fa3a5369" (UID: "b8224f42-d933-4b1a-bab0-8f79fa3a5369"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:20:11 crc kubenswrapper[4795]: I1129 08:20:11.041967 4795 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8224f42-d933-4b1a-bab0-8f79fa3a5369-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 08:20:11 crc kubenswrapper[4795]: I1129 08:20:11.041993 4795 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b8224f42-d933-4b1a-bab0-8f79fa3a5369-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 08:20:11 crc kubenswrapper[4795]: I1129 08:20:11.042003 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dfx6\" (UniqueName: \"kubernetes.io/projected/b8224f42-d933-4b1a-bab0-8f79fa3a5369-kube-api-access-5dfx6\") on node \"crc\" DevicePath \"\"" Nov 29 08:20:11 crc kubenswrapper[4795]: I1129 08:20:11.296126 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-k58jj" 
event={"ID":"b8224f42-d933-4b1a-bab0-8f79fa3a5369","Type":"ContainerDied","Data":"164f4642a9523d8045493ce1dab8a6d85249ddc9800c4ffcf35486c1e4957e1a"} Nov 29 08:20:11 crc kubenswrapper[4795]: I1129 08:20:11.296168 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="164f4642a9523d8045493ce1dab8a6d85249ddc9800c4ffcf35486c1e4957e1a" Nov 29 08:20:11 crc kubenswrapper[4795]: I1129 08:20:11.296225 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-k58jj" Nov 29 08:20:11 crc kubenswrapper[4795]: I1129 08:20:11.433236 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-tppn7"] Nov 29 08:20:11 crc kubenswrapper[4795]: E1129 08:20:11.433990 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8224f42-d933-4b1a-bab0-8f79fa3a5369" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 29 08:20:11 crc kubenswrapper[4795]: I1129 08:20:11.434018 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8224f42-d933-4b1a-bab0-8f79fa3a5369" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 29 08:20:11 crc kubenswrapper[4795]: I1129 08:20:11.434353 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8224f42-d933-4b1a-bab0-8f79fa3a5369" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 29 08:20:11 crc kubenswrapper[4795]: I1129 08:20:11.437825 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tppn7" Nov 29 08:20:11 crc kubenswrapper[4795]: I1129 08:20:11.443043 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 08:20:11 crc kubenswrapper[4795]: I1129 08:20:11.443253 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 08:20:11 crc kubenswrapper[4795]: I1129 08:20:11.443483 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nfhsg" Nov 29 08:20:11 crc kubenswrapper[4795]: I1129 08:20:11.443779 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 08:20:11 crc kubenswrapper[4795]: I1129 08:20:11.446454 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-tppn7"] Nov 29 08:20:11 crc kubenswrapper[4795]: I1129 08:20:11.559702 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5wzn\" (UniqueName: \"kubernetes.io/projected/5e0dffd4-e9e3-434d-b842-5b5849bf2fa9-kube-api-access-h5wzn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tppn7\" (UID: \"5e0dffd4-e9e3-434d-b842-5b5849bf2fa9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tppn7" Nov 29 08:20:11 crc kubenswrapper[4795]: I1129 08:20:11.559763 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e0dffd4-e9e3-434d-b842-5b5849bf2fa9-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tppn7\" (UID: \"5e0dffd4-e9e3-434d-b842-5b5849bf2fa9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tppn7" Nov 29 08:20:11 crc kubenswrapper[4795]: I1129 08:20:11.560035 4795 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e0dffd4-e9e3-434d-b842-5b5849bf2fa9-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tppn7\" (UID: \"5e0dffd4-e9e3-434d-b842-5b5849bf2fa9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tppn7" Nov 29 08:20:11 crc kubenswrapper[4795]: I1129 08:20:11.661949 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e0dffd4-e9e3-434d-b842-5b5849bf2fa9-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tppn7\" (UID: \"5e0dffd4-e9e3-434d-b842-5b5849bf2fa9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tppn7" Nov 29 08:20:11 crc kubenswrapper[4795]: I1129 08:20:11.662133 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5wzn\" (UniqueName: \"kubernetes.io/projected/5e0dffd4-e9e3-434d-b842-5b5849bf2fa9-kube-api-access-h5wzn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tppn7\" (UID: \"5e0dffd4-e9e3-434d-b842-5b5849bf2fa9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tppn7" Nov 29 08:20:11 crc kubenswrapper[4795]: I1129 08:20:11.662157 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e0dffd4-e9e3-434d-b842-5b5849bf2fa9-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tppn7\" (UID: \"5e0dffd4-e9e3-434d-b842-5b5849bf2fa9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tppn7" Nov 29 08:20:11 crc kubenswrapper[4795]: I1129 08:20:11.669746 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e0dffd4-e9e3-434d-b842-5b5849bf2fa9-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tppn7\" (UID: 
\"5e0dffd4-e9e3-434d-b842-5b5849bf2fa9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tppn7" Nov 29 08:20:11 crc kubenswrapper[4795]: I1129 08:20:11.670719 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e0dffd4-e9e3-434d-b842-5b5849bf2fa9-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tppn7\" (UID: \"5e0dffd4-e9e3-434d-b842-5b5849bf2fa9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tppn7" Nov 29 08:20:11 crc kubenswrapper[4795]: I1129 08:20:11.684534 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5wzn\" (UniqueName: \"kubernetes.io/projected/5e0dffd4-e9e3-434d-b842-5b5849bf2fa9-kube-api-access-h5wzn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tppn7\" (UID: \"5e0dffd4-e9e3-434d-b842-5b5849bf2fa9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tppn7" Nov 29 08:20:11 crc kubenswrapper[4795]: I1129 08:20:11.761679 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tppn7" Nov 29 08:20:11 crc kubenswrapper[4795]: I1129 08:20:11.940745 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:20:11 crc kubenswrapper[4795]: I1129 08:20:11.940804 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:20:12 crc kubenswrapper[4795]: I1129 08:20:12.331172 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-tppn7"] Nov 29 08:20:12 crc kubenswrapper[4795]: W1129 08:20:12.336527 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e0dffd4_e9e3_434d_b842_5b5849bf2fa9.slice/crio-02456227226a21f11ad811b54571af9680b98772c637f396544f46afc1d3a546 WatchSource:0}: Error finding container 02456227226a21f11ad811b54571af9680b98772c637f396544f46afc1d3a546: Status 404 returned error can't find the container with id 02456227226a21f11ad811b54571af9680b98772c637f396544f46afc1d3a546 Nov 29 08:20:13 crc kubenswrapper[4795]: I1129 08:20:13.327858 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tppn7" event={"ID":"5e0dffd4-e9e3-434d-b842-5b5849bf2fa9","Type":"ContainerStarted","Data":"eaf72c675c8d986af663369981c2cf2e9fb8fadb6b1bc88bb6344664eb783791"} Nov 29 08:20:13 crc kubenswrapper[4795]: I1129 08:20:13.327917 4795 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tppn7" event={"ID":"5e0dffd4-e9e3-434d-b842-5b5849bf2fa9","Type":"ContainerStarted","Data":"02456227226a21f11ad811b54571af9680b98772c637f396544f46afc1d3a546"} Nov 29 08:20:13 crc kubenswrapper[4795]: I1129 08:20:13.357375 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tppn7" podStartSLOduration=1.920514324 podStartE2EDuration="2.357357708s" podCreationTimestamp="2025-11-29 08:20:11 +0000 UTC" firstStartedPulling="2025-11-29 08:20:12.338978266 +0000 UTC m=+2458.314554066" lastFinishedPulling="2025-11-29 08:20:12.77582165 +0000 UTC m=+2458.751397450" observedRunningTime="2025-11-29 08:20:13.351875872 +0000 UTC m=+2459.327451662" watchObservedRunningTime="2025-11-29 08:20:13.357357708 +0000 UTC m=+2459.332933498" Nov 29 08:20:41 crc kubenswrapper[4795]: I1129 08:20:41.945216 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:20:41 crc kubenswrapper[4795]: I1129 08:20:41.947547 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:20:41 crc kubenswrapper[4795]: I1129 08:20:41.947650 4795 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" Nov 29 08:20:42 crc kubenswrapper[4795]: I1129 08:20:42.214503 4795 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"21e10122a79c2c2a3372f86102d58fee1c9be891bd6896b1affd1c2404af16fa"} pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 08:20:42 crc kubenswrapper[4795]: I1129 08:20:42.214612 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" containerID="cri-o://21e10122a79c2c2a3372f86102d58fee1c9be891bd6896b1affd1c2404af16fa" gracePeriod=600 Nov 29 08:20:42 crc kubenswrapper[4795]: E1129 08:20:42.363057 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:20:43 crc kubenswrapper[4795]: I1129 08:20:43.228732 4795 generic.go:334] "Generic (PLEG): container finished" podID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerID="21e10122a79c2c2a3372f86102d58fee1c9be891bd6896b1affd1c2404af16fa" exitCode=0 Nov 29 08:20:43 crc kubenswrapper[4795]: I1129 08:20:43.228938 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerDied","Data":"21e10122a79c2c2a3372f86102d58fee1c9be891bd6896b1affd1c2404af16fa"} Nov 29 08:20:43 crc kubenswrapper[4795]: I1129 08:20:43.229044 4795 scope.go:117] "RemoveContainer" containerID="fb7eb9a38d35ff5b0e7cfded64550978b04bdbcec3194cda206b101003d9d236" Nov 29 08:20:43 crc 
kubenswrapper[4795]: I1129 08:20:43.230148 4795 scope.go:117] "RemoveContainer" containerID="21e10122a79c2c2a3372f86102d58fee1c9be891bd6896b1affd1c2404af16fa" Nov 29 08:20:43 crc kubenswrapper[4795]: E1129 08:20:43.230453 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:20:54 crc kubenswrapper[4795]: I1129 08:20:54.275840 4795 scope.go:117] "RemoveContainer" containerID="21e10122a79c2c2a3372f86102d58fee1c9be891bd6896b1affd1c2404af16fa" Nov 29 08:20:54 crc kubenswrapper[4795]: E1129 08:20:54.276732 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:20:54 crc kubenswrapper[4795]: I1129 08:20:54.352383 4795 generic.go:334] "Generic (PLEG): container finished" podID="5e0dffd4-e9e3-434d-b842-5b5849bf2fa9" containerID="eaf72c675c8d986af663369981c2cf2e9fb8fadb6b1bc88bb6344664eb783791" exitCode=0 Nov 29 08:20:54 crc kubenswrapper[4795]: I1129 08:20:54.352428 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tppn7" event={"ID":"5e0dffd4-e9e3-434d-b842-5b5849bf2fa9","Type":"ContainerDied","Data":"eaf72c675c8d986af663369981c2cf2e9fb8fadb6b1bc88bb6344664eb783791"} Nov 29 08:20:55 crc kubenswrapper[4795]: I1129 08:20:55.851960 4795 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tppn7" Nov 29 08:20:55 crc kubenswrapper[4795]: I1129 08:20:55.950492 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5wzn\" (UniqueName: \"kubernetes.io/projected/5e0dffd4-e9e3-434d-b842-5b5849bf2fa9-kube-api-access-h5wzn\") pod \"5e0dffd4-e9e3-434d-b842-5b5849bf2fa9\" (UID: \"5e0dffd4-e9e3-434d-b842-5b5849bf2fa9\") " Nov 29 08:20:55 crc kubenswrapper[4795]: I1129 08:20:55.950581 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e0dffd4-e9e3-434d-b842-5b5849bf2fa9-ssh-key\") pod \"5e0dffd4-e9e3-434d-b842-5b5849bf2fa9\" (UID: \"5e0dffd4-e9e3-434d-b842-5b5849bf2fa9\") " Nov 29 08:20:55 crc kubenswrapper[4795]: I1129 08:20:55.951038 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e0dffd4-e9e3-434d-b842-5b5849bf2fa9-inventory\") pod \"5e0dffd4-e9e3-434d-b842-5b5849bf2fa9\" (UID: \"5e0dffd4-e9e3-434d-b842-5b5849bf2fa9\") " Nov 29 08:20:56 crc kubenswrapper[4795]: I1129 08:20:56.039143 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e0dffd4-e9e3-434d-b842-5b5849bf2fa9-kube-api-access-h5wzn" (OuterVolumeSpecName: "kube-api-access-h5wzn") pod "5e0dffd4-e9e3-434d-b842-5b5849bf2fa9" (UID: "5e0dffd4-e9e3-434d-b842-5b5849bf2fa9"). InnerVolumeSpecName "kube-api-access-h5wzn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:20:56 crc kubenswrapper[4795]: I1129 08:20:56.062284 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5wzn\" (UniqueName: \"kubernetes.io/projected/5e0dffd4-e9e3-434d-b842-5b5849bf2fa9-kube-api-access-h5wzn\") on node \"crc\" DevicePath \"\"" Nov 29 08:20:56 crc kubenswrapper[4795]: I1129 08:20:56.102711 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e0dffd4-e9e3-434d-b842-5b5849bf2fa9-inventory" (OuterVolumeSpecName: "inventory") pod "5e0dffd4-e9e3-434d-b842-5b5849bf2fa9" (UID: "5e0dffd4-e9e3-434d-b842-5b5849bf2fa9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:20:56 crc kubenswrapper[4795]: I1129 08:20:56.103827 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e0dffd4-e9e3-434d-b842-5b5849bf2fa9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5e0dffd4-e9e3-434d-b842-5b5849bf2fa9" (UID: "5e0dffd4-e9e3-434d-b842-5b5849bf2fa9"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:20:56 crc kubenswrapper[4795]: I1129 08:20:56.164516 4795 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e0dffd4-e9e3-434d-b842-5b5849bf2fa9-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 08:20:56 crc kubenswrapper[4795]: I1129 08:20:56.164555 4795 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e0dffd4-e9e3-434d-b842-5b5849bf2fa9-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 08:20:56 crc kubenswrapper[4795]: I1129 08:20:56.372546 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tppn7" event={"ID":"5e0dffd4-e9e3-434d-b842-5b5849bf2fa9","Type":"ContainerDied","Data":"02456227226a21f11ad811b54571af9680b98772c637f396544f46afc1d3a546"} Nov 29 08:20:56 crc kubenswrapper[4795]: I1129 08:20:56.372915 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02456227226a21f11ad811b54571af9680b98772c637f396544f46afc1d3a546" Nov 29 08:20:56 crc kubenswrapper[4795]: I1129 08:20:56.372622 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tppn7" Nov 29 08:20:56 crc kubenswrapper[4795]: I1129 08:20:56.464725 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p64t"] Nov 29 08:20:56 crc kubenswrapper[4795]: E1129 08:20:56.465354 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e0dffd4-e9e3-434d-b842-5b5849bf2fa9" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 29 08:20:56 crc kubenswrapper[4795]: I1129 08:20:56.465382 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e0dffd4-e9e3-434d-b842-5b5849bf2fa9" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 29 08:20:56 crc kubenswrapper[4795]: I1129 08:20:56.465724 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e0dffd4-e9e3-434d-b842-5b5849bf2fa9" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 29 08:20:56 crc kubenswrapper[4795]: I1129 08:20:56.466786 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p64t" Nov 29 08:20:56 crc kubenswrapper[4795]: I1129 08:20:56.469510 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 08:20:56 crc kubenswrapper[4795]: I1129 08:20:56.469742 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 08:20:56 crc kubenswrapper[4795]: I1129 08:20:56.469898 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 08:20:56 crc kubenswrapper[4795]: I1129 08:20:56.475046 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nfhsg" Nov 29 08:20:56 crc kubenswrapper[4795]: I1129 08:20:56.484458 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p64t"] Nov 29 08:20:56 crc kubenswrapper[4795]: I1129 08:20:56.573654 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c19e0492-7b5e-4a23-a1aa-f09ea195448d-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9p64t\" (UID: \"c19e0492-7b5e-4a23-a1aa-f09ea195448d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p64t" Nov 29 08:20:56 crc kubenswrapper[4795]: I1129 08:20:56.573884 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n7pf\" (UniqueName: \"kubernetes.io/projected/c19e0492-7b5e-4a23-a1aa-f09ea195448d-kube-api-access-9n7pf\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9p64t\" (UID: \"c19e0492-7b5e-4a23-a1aa-f09ea195448d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p64t" Nov 29 08:20:56 crc kubenswrapper[4795]: I1129 08:20:56.574399 4795 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c19e0492-7b5e-4a23-a1aa-f09ea195448d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9p64t\" (UID: \"c19e0492-7b5e-4a23-a1aa-f09ea195448d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p64t" Nov 29 08:20:56 crc kubenswrapper[4795]: I1129 08:20:56.676213 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c19e0492-7b5e-4a23-a1aa-f09ea195448d-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9p64t\" (UID: \"c19e0492-7b5e-4a23-a1aa-f09ea195448d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p64t" Nov 29 08:20:56 crc kubenswrapper[4795]: I1129 08:20:56.676294 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n7pf\" (UniqueName: \"kubernetes.io/projected/c19e0492-7b5e-4a23-a1aa-f09ea195448d-kube-api-access-9n7pf\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9p64t\" (UID: \"c19e0492-7b5e-4a23-a1aa-f09ea195448d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p64t" Nov 29 08:20:56 crc kubenswrapper[4795]: I1129 08:20:56.676539 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c19e0492-7b5e-4a23-a1aa-f09ea195448d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9p64t\" (UID: \"c19e0492-7b5e-4a23-a1aa-f09ea195448d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p64t" Nov 29 08:20:56 crc kubenswrapper[4795]: I1129 08:20:56.681662 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c19e0492-7b5e-4a23-a1aa-f09ea195448d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9p64t\" (UID: 
\"c19e0492-7b5e-4a23-a1aa-f09ea195448d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p64t" Nov 29 08:20:56 crc kubenswrapper[4795]: I1129 08:20:56.682737 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c19e0492-7b5e-4a23-a1aa-f09ea195448d-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9p64t\" (UID: \"c19e0492-7b5e-4a23-a1aa-f09ea195448d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p64t" Nov 29 08:20:56 crc kubenswrapper[4795]: I1129 08:20:56.696227 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n7pf\" (UniqueName: \"kubernetes.io/projected/c19e0492-7b5e-4a23-a1aa-f09ea195448d-kube-api-access-9n7pf\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9p64t\" (UID: \"c19e0492-7b5e-4a23-a1aa-f09ea195448d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p64t" Nov 29 08:20:56 crc kubenswrapper[4795]: I1129 08:20:56.788949 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p64t" Nov 29 08:20:57 crc kubenswrapper[4795]: I1129 08:20:57.330931 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p64t"] Nov 29 08:20:57 crc kubenswrapper[4795]: I1129 08:20:57.384531 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p64t" event={"ID":"c19e0492-7b5e-4a23-a1aa-f09ea195448d","Type":"ContainerStarted","Data":"a14941e1a8a2631d2844806839c3cd7a04a23c22b1b33f923b4f2a116e4818d4"} Nov 29 08:20:58 crc kubenswrapper[4795]: I1129 08:20:58.406056 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p64t" event={"ID":"c19e0492-7b5e-4a23-a1aa-f09ea195448d","Type":"ContainerStarted","Data":"012e2d097e402105d3d050c85486e7d24393d8c6bde47dc76b98689b16a494d5"} Nov 29 08:20:58 crc kubenswrapper[4795]: I1129 08:20:58.446667 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p64t" podStartSLOduration=1.721710237 podStartE2EDuration="2.446637309s" podCreationTimestamp="2025-11-29 08:20:56 +0000 UTC" firstStartedPulling="2025-11-29 08:20:57.344948616 +0000 UTC m=+2503.320524416" lastFinishedPulling="2025-11-29 08:20:58.069875698 +0000 UTC m=+2504.045451488" observedRunningTime="2025-11-29 08:20:58.433463366 +0000 UTC m=+2504.409039166" watchObservedRunningTime="2025-11-29 08:20:58.446637309 +0000 UTC m=+2504.422213099" Nov 29 08:21:07 crc kubenswrapper[4795]: I1129 08:21:07.276338 4795 scope.go:117] "RemoveContainer" containerID="21e10122a79c2c2a3372f86102d58fee1c9be891bd6896b1affd1c2404af16fa" Nov 29 08:21:07 crc kubenswrapper[4795]: E1129 08:21:07.277894 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:21:19 crc kubenswrapper[4795]: I1129 08:21:19.276357 4795 scope.go:117] "RemoveContainer" containerID="21e10122a79c2c2a3372f86102d58fee1c9be891bd6896b1affd1c2404af16fa" Nov 29 08:21:19 crc kubenswrapper[4795]: E1129 08:21:19.277209 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:21:32 crc kubenswrapper[4795]: I1129 08:21:32.277042 4795 scope.go:117] "RemoveContainer" containerID="21e10122a79c2c2a3372f86102d58fee1c9be891bd6896b1affd1c2404af16fa" Nov 29 08:21:32 crc kubenswrapper[4795]: E1129 08:21:32.278066 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:21:43 crc kubenswrapper[4795]: I1129 08:21:43.276337 4795 scope.go:117] "RemoveContainer" containerID="21e10122a79c2c2a3372f86102d58fee1c9be891bd6896b1affd1c2404af16fa" Nov 29 08:21:43 crc kubenswrapper[4795]: E1129 08:21:43.277136 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:21:55 crc kubenswrapper[4795]: I1129 08:21:55.094541 4795 generic.go:334] "Generic (PLEG): container finished" podID="c19e0492-7b5e-4a23-a1aa-f09ea195448d" containerID="012e2d097e402105d3d050c85486e7d24393d8c6bde47dc76b98689b16a494d5" exitCode=0 Nov 29 08:21:55 crc kubenswrapper[4795]: I1129 08:21:55.094639 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p64t" event={"ID":"c19e0492-7b5e-4a23-a1aa-f09ea195448d","Type":"ContainerDied","Data":"012e2d097e402105d3d050c85486e7d24393d8c6bde47dc76b98689b16a494d5"} Nov 29 08:21:55 crc kubenswrapper[4795]: I1129 08:21:55.276876 4795 scope.go:117] "RemoveContainer" containerID="21e10122a79c2c2a3372f86102d58fee1c9be891bd6896b1affd1c2404af16fa" Nov 29 08:21:55 crc kubenswrapper[4795]: E1129 08:21:55.277632 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:21:56 crc kubenswrapper[4795]: I1129 08:21:56.677384 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p64t" Nov 29 08:21:56 crc kubenswrapper[4795]: I1129 08:21:56.784299 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c19e0492-7b5e-4a23-a1aa-f09ea195448d-inventory\") pod \"c19e0492-7b5e-4a23-a1aa-f09ea195448d\" (UID: \"c19e0492-7b5e-4a23-a1aa-f09ea195448d\") " Nov 29 08:21:56 crc kubenswrapper[4795]: I1129 08:21:56.784438 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n7pf\" (UniqueName: \"kubernetes.io/projected/c19e0492-7b5e-4a23-a1aa-f09ea195448d-kube-api-access-9n7pf\") pod \"c19e0492-7b5e-4a23-a1aa-f09ea195448d\" (UID: \"c19e0492-7b5e-4a23-a1aa-f09ea195448d\") " Nov 29 08:21:56 crc kubenswrapper[4795]: I1129 08:21:56.784699 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c19e0492-7b5e-4a23-a1aa-f09ea195448d-ssh-key\") pod \"c19e0492-7b5e-4a23-a1aa-f09ea195448d\" (UID: \"c19e0492-7b5e-4a23-a1aa-f09ea195448d\") " Nov 29 08:21:56 crc kubenswrapper[4795]: I1129 08:21:56.790007 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c19e0492-7b5e-4a23-a1aa-f09ea195448d-kube-api-access-9n7pf" (OuterVolumeSpecName: "kube-api-access-9n7pf") pod "c19e0492-7b5e-4a23-a1aa-f09ea195448d" (UID: "c19e0492-7b5e-4a23-a1aa-f09ea195448d"). InnerVolumeSpecName "kube-api-access-9n7pf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:21:56 crc kubenswrapper[4795]: I1129 08:21:56.816332 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c19e0492-7b5e-4a23-a1aa-f09ea195448d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c19e0492-7b5e-4a23-a1aa-f09ea195448d" (UID: "c19e0492-7b5e-4a23-a1aa-f09ea195448d"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:21:56 crc kubenswrapper[4795]: I1129 08:21:56.818446 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c19e0492-7b5e-4a23-a1aa-f09ea195448d-inventory" (OuterVolumeSpecName: "inventory") pod "c19e0492-7b5e-4a23-a1aa-f09ea195448d" (UID: "c19e0492-7b5e-4a23-a1aa-f09ea195448d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:21:56 crc kubenswrapper[4795]: I1129 08:21:56.888431 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n7pf\" (UniqueName: \"kubernetes.io/projected/c19e0492-7b5e-4a23-a1aa-f09ea195448d-kube-api-access-9n7pf\") on node \"crc\" DevicePath \"\"" Nov 29 08:21:56 crc kubenswrapper[4795]: I1129 08:21:56.888468 4795 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c19e0492-7b5e-4a23-a1aa-f09ea195448d-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 08:21:56 crc kubenswrapper[4795]: I1129 08:21:56.888482 4795 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c19e0492-7b5e-4a23-a1aa-f09ea195448d-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 08:21:57 crc kubenswrapper[4795]: I1129 08:21:57.114503 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p64t" event={"ID":"c19e0492-7b5e-4a23-a1aa-f09ea195448d","Type":"ContainerDied","Data":"a14941e1a8a2631d2844806839c3cd7a04a23c22b1b33f923b4f2a116e4818d4"} Nov 29 08:21:57 crc kubenswrapper[4795]: I1129 08:21:57.114550 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a14941e1a8a2631d2844806839c3cd7a04a23c22b1b33f923b4f2a116e4818d4" Nov 29 08:21:57 crc kubenswrapper[4795]: I1129 08:21:57.114572 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p64t" Nov 29 08:21:57 crc kubenswrapper[4795]: I1129 08:21:57.229830 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-mmx5g"] Nov 29 08:21:57 crc kubenswrapper[4795]: E1129 08:21:57.230539 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c19e0492-7b5e-4a23-a1aa-f09ea195448d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 29 08:21:57 crc kubenswrapper[4795]: I1129 08:21:57.230565 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="c19e0492-7b5e-4a23-a1aa-f09ea195448d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 29 08:21:57 crc kubenswrapper[4795]: I1129 08:21:57.230954 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="c19e0492-7b5e-4a23-a1aa-f09ea195448d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 29 08:21:57 crc kubenswrapper[4795]: I1129 08:21:57.232158 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-mmx5g" Nov 29 08:21:57 crc kubenswrapper[4795]: I1129 08:21:57.235174 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 08:21:57 crc kubenswrapper[4795]: I1129 08:21:57.235538 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 08:21:57 crc kubenswrapper[4795]: I1129 08:21:57.235945 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 08:21:57 crc kubenswrapper[4795]: I1129 08:21:57.236143 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nfhsg" Nov 29 08:21:57 crc kubenswrapper[4795]: I1129 08:21:57.260509 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-mmx5g"] Nov 29 08:21:57 crc kubenswrapper[4795]: I1129 08:21:57.299118 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a8a9108e-9590-423a-819e-9b009a41e91a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-mmx5g\" (UID: \"a8a9108e-9590-423a-819e-9b009a41e91a\") " pod="openstack/ssh-known-hosts-edpm-deployment-mmx5g" Nov 29 08:21:57 crc kubenswrapper[4795]: I1129 08:21:57.299428 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nddzr\" (UniqueName: \"kubernetes.io/projected/a8a9108e-9590-423a-819e-9b009a41e91a-kube-api-access-nddzr\") pod \"ssh-known-hosts-edpm-deployment-mmx5g\" (UID: \"a8a9108e-9590-423a-819e-9b009a41e91a\") " pod="openstack/ssh-known-hosts-edpm-deployment-mmx5g" Nov 29 08:21:57 crc kubenswrapper[4795]: I1129 08:21:57.300534 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8a9108e-9590-423a-819e-9b009a41e91a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-mmx5g\" (UID: \"a8a9108e-9590-423a-819e-9b009a41e91a\") " pod="openstack/ssh-known-hosts-edpm-deployment-mmx5g" Nov 29 08:21:57 crc kubenswrapper[4795]: I1129 08:21:57.402266 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a8a9108e-9590-423a-819e-9b009a41e91a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-mmx5g\" (UID: \"a8a9108e-9590-423a-819e-9b009a41e91a\") " pod="openstack/ssh-known-hosts-edpm-deployment-mmx5g" Nov 29 08:21:57 crc kubenswrapper[4795]: I1129 08:21:57.402416 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nddzr\" (UniqueName: \"kubernetes.io/projected/a8a9108e-9590-423a-819e-9b009a41e91a-kube-api-access-nddzr\") pod \"ssh-known-hosts-edpm-deployment-mmx5g\" (UID: \"a8a9108e-9590-423a-819e-9b009a41e91a\") " pod="openstack/ssh-known-hosts-edpm-deployment-mmx5g" Nov 29 08:21:57 crc kubenswrapper[4795]: I1129 08:21:57.402581 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8a9108e-9590-423a-819e-9b009a41e91a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-mmx5g\" (UID: \"a8a9108e-9590-423a-819e-9b009a41e91a\") " pod="openstack/ssh-known-hosts-edpm-deployment-mmx5g" Nov 29 08:21:57 crc kubenswrapper[4795]: I1129 08:21:57.409175 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8a9108e-9590-423a-819e-9b009a41e91a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-mmx5g\" (UID: \"a8a9108e-9590-423a-819e-9b009a41e91a\") " pod="openstack/ssh-known-hosts-edpm-deployment-mmx5g" Nov 29 08:21:57 crc kubenswrapper[4795]: 
I1129 08:21:57.409465 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a8a9108e-9590-423a-819e-9b009a41e91a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-mmx5g\" (UID: \"a8a9108e-9590-423a-819e-9b009a41e91a\") " pod="openstack/ssh-known-hosts-edpm-deployment-mmx5g" Nov 29 08:21:57 crc kubenswrapper[4795]: I1129 08:21:57.421353 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nddzr\" (UniqueName: \"kubernetes.io/projected/a8a9108e-9590-423a-819e-9b009a41e91a-kube-api-access-nddzr\") pod \"ssh-known-hosts-edpm-deployment-mmx5g\" (UID: \"a8a9108e-9590-423a-819e-9b009a41e91a\") " pod="openstack/ssh-known-hosts-edpm-deployment-mmx5g" Nov 29 08:21:57 crc kubenswrapper[4795]: I1129 08:21:57.560674 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-mmx5g" Nov 29 08:21:58 crc kubenswrapper[4795]: I1129 08:21:58.705148 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-mmx5g"] Nov 29 08:21:59 crc kubenswrapper[4795]: I1129 08:21:59.146596 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-mmx5g" event={"ID":"a8a9108e-9590-423a-819e-9b009a41e91a","Type":"ContainerStarted","Data":"65e97393d06a7b36892bca1ebb5ca3928c03251a8dd7ae78822de22e5f112a3b"} Nov 29 08:22:00 crc kubenswrapper[4795]: I1129 08:22:00.158450 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-mmx5g" event={"ID":"a8a9108e-9590-423a-819e-9b009a41e91a","Type":"ContainerStarted","Data":"7e656bacdbfa504fe79599833d87110bbc3dc3ebe5170ab6fe30e954889e967b"} Nov 29 08:22:00 crc kubenswrapper[4795]: I1129 08:22:00.187722 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-mmx5g" podStartSLOduration=2.656748839 
podStartE2EDuration="3.187700742s" podCreationTimestamp="2025-11-29 08:21:57 +0000 UTC" firstStartedPulling="2025-11-29 08:21:58.70863554 +0000 UTC m=+2564.684211340" lastFinishedPulling="2025-11-29 08:21:59.239587453 +0000 UTC m=+2565.215163243" observedRunningTime="2025-11-29 08:22:00.175060134 +0000 UTC m=+2566.150635934" watchObservedRunningTime="2025-11-29 08:22:00.187700742 +0000 UTC m=+2566.163276532" Nov 29 08:22:07 crc kubenswrapper[4795]: I1129 08:22:07.251370 4795 generic.go:334] "Generic (PLEG): container finished" podID="a8a9108e-9590-423a-819e-9b009a41e91a" containerID="7e656bacdbfa504fe79599833d87110bbc3dc3ebe5170ab6fe30e954889e967b" exitCode=0 Nov 29 08:22:07 crc kubenswrapper[4795]: I1129 08:22:07.252054 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-mmx5g" event={"ID":"a8a9108e-9590-423a-819e-9b009a41e91a","Type":"ContainerDied","Data":"7e656bacdbfa504fe79599833d87110bbc3dc3ebe5170ab6fe30e954889e967b"} Nov 29 08:22:08 crc kubenswrapper[4795]: I1129 08:22:08.863898 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-mmx5g" Nov 29 08:22:08 crc kubenswrapper[4795]: I1129 08:22:08.900662 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nddzr\" (UniqueName: \"kubernetes.io/projected/a8a9108e-9590-423a-819e-9b009a41e91a-kube-api-access-nddzr\") pod \"a8a9108e-9590-423a-819e-9b009a41e91a\" (UID: \"a8a9108e-9590-423a-819e-9b009a41e91a\") " Nov 29 08:22:08 crc kubenswrapper[4795]: I1129 08:22:08.900843 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a8a9108e-9590-423a-819e-9b009a41e91a-inventory-0\") pod \"a8a9108e-9590-423a-819e-9b009a41e91a\" (UID: \"a8a9108e-9590-423a-819e-9b009a41e91a\") " Nov 29 08:22:08 crc kubenswrapper[4795]: I1129 08:22:08.900898 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8a9108e-9590-423a-819e-9b009a41e91a-ssh-key-openstack-edpm-ipam\") pod \"a8a9108e-9590-423a-819e-9b009a41e91a\" (UID: \"a8a9108e-9590-423a-819e-9b009a41e91a\") " Nov 29 08:22:08 crc kubenswrapper[4795]: I1129 08:22:08.907936 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8a9108e-9590-423a-819e-9b009a41e91a-kube-api-access-nddzr" (OuterVolumeSpecName: "kube-api-access-nddzr") pod "a8a9108e-9590-423a-819e-9b009a41e91a" (UID: "a8a9108e-9590-423a-819e-9b009a41e91a"). InnerVolumeSpecName "kube-api-access-nddzr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:22:08 crc kubenswrapper[4795]: I1129 08:22:08.933146 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8a9108e-9590-423a-819e-9b009a41e91a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a8a9108e-9590-423a-819e-9b009a41e91a" (UID: "a8a9108e-9590-423a-819e-9b009a41e91a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:22:08 crc kubenswrapper[4795]: I1129 08:22:08.948967 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8a9108e-9590-423a-819e-9b009a41e91a-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "a8a9108e-9590-423a-819e-9b009a41e91a" (UID: "a8a9108e-9590-423a-819e-9b009a41e91a"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:22:09 crc kubenswrapper[4795]: I1129 08:22:09.004130 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nddzr\" (UniqueName: \"kubernetes.io/projected/a8a9108e-9590-423a-819e-9b009a41e91a-kube-api-access-nddzr\") on node \"crc\" DevicePath \"\"" Nov 29 08:22:09 crc kubenswrapper[4795]: I1129 08:22:09.004165 4795 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a8a9108e-9590-423a-819e-9b009a41e91a-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 29 08:22:09 crc kubenswrapper[4795]: I1129 08:22:09.004175 4795 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8a9108e-9590-423a-819e-9b009a41e91a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 29 08:22:09 crc kubenswrapper[4795]: I1129 08:22:09.276131 4795 scope.go:117] "RemoveContainer" containerID="21e10122a79c2c2a3372f86102d58fee1c9be891bd6896b1affd1c2404af16fa" Nov 29 08:22:09 crc 
kubenswrapper[4795]: E1129 08:22:09.276782 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:22:09 crc kubenswrapper[4795]: I1129 08:22:09.277572 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-mmx5g" event={"ID":"a8a9108e-9590-423a-819e-9b009a41e91a","Type":"ContainerDied","Data":"65e97393d06a7b36892bca1ebb5ca3928c03251a8dd7ae78822de22e5f112a3b"} Nov 29 08:22:09 crc kubenswrapper[4795]: I1129 08:22:09.277621 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65e97393d06a7b36892bca1ebb5ca3928c03251a8dd7ae78822de22e5f112a3b" Nov 29 08:22:09 crc kubenswrapper[4795]: I1129 08:22:09.277626 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-mmx5g" Nov 29 08:22:09 crc kubenswrapper[4795]: I1129 08:22:09.362038 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-9nkc7"] Nov 29 08:22:09 crc kubenswrapper[4795]: E1129 08:22:09.362641 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8a9108e-9590-423a-819e-9b009a41e91a" containerName="ssh-known-hosts-edpm-deployment" Nov 29 08:22:09 crc kubenswrapper[4795]: I1129 08:22:09.362661 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8a9108e-9590-423a-819e-9b009a41e91a" containerName="ssh-known-hosts-edpm-deployment" Nov 29 08:22:09 crc kubenswrapper[4795]: I1129 08:22:09.362943 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8a9108e-9590-423a-819e-9b009a41e91a" containerName="ssh-known-hosts-edpm-deployment" Nov 29 08:22:09 crc kubenswrapper[4795]: I1129 08:22:09.364140 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9nkc7" Nov 29 08:22:09 crc kubenswrapper[4795]: I1129 08:22:09.366210 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 08:22:09 crc kubenswrapper[4795]: I1129 08:22:09.367432 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nfhsg" Nov 29 08:22:09 crc kubenswrapper[4795]: I1129 08:22:09.367802 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 08:22:09 crc kubenswrapper[4795]: I1129 08:22:09.368888 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 08:22:09 crc kubenswrapper[4795]: I1129 08:22:09.409498 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-9nkc7"] Nov 29 08:22:09 crc kubenswrapper[4795]: I1129 08:22:09.412341 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e79257d-05f2-41d6-97cb-0872075ec6bf-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9nkc7\" (UID: \"2e79257d-05f2-41d6-97cb-0872075ec6bf\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9nkc7" Nov 29 08:22:09 crc kubenswrapper[4795]: I1129 08:22:09.412651 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e79257d-05f2-41d6-97cb-0872075ec6bf-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9nkc7\" (UID: \"2e79257d-05f2-41d6-97cb-0872075ec6bf\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9nkc7" Nov 29 08:22:09 crc kubenswrapper[4795]: I1129 08:22:09.412790 4795 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4lq5\" (UniqueName: \"kubernetes.io/projected/2e79257d-05f2-41d6-97cb-0872075ec6bf-kube-api-access-z4lq5\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9nkc7\" (UID: \"2e79257d-05f2-41d6-97cb-0872075ec6bf\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9nkc7" Nov 29 08:22:09 crc kubenswrapper[4795]: I1129 08:22:09.514715 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e79257d-05f2-41d6-97cb-0872075ec6bf-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9nkc7\" (UID: \"2e79257d-05f2-41d6-97cb-0872075ec6bf\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9nkc7" Nov 29 08:22:09 crc kubenswrapper[4795]: I1129 08:22:09.514777 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4lq5\" (UniqueName: \"kubernetes.io/projected/2e79257d-05f2-41d6-97cb-0872075ec6bf-kube-api-access-z4lq5\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9nkc7\" (UID: \"2e79257d-05f2-41d6-97cb-0872075ec6bf\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9nkc7" Nov 29 08:22:09 crc kubenswrapper[4795]: I1129 08:22:09.514841 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e79257d-05f2-41d6-97cb-0872075ec6bf-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9nkc7\" (UID: \"2e79257d-05f2-41d6-97cb-0872075ec6bf\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9nkc7" Nov 29 08:22:09 crc kubenswrapper[4795]: I1129 08:22:09.519857 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e79257d-05f2-41d6-97cb-0872075ec6bf-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9nkc7\" (UID: \"2e79257d-05f2-41d6-97cb-0872075ec6bf\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9nkc7" Nov 29 08:22:09 crc kubenswrapper[4795]: I1129 08:22:09.520018 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e79257d-05f2-41d6-97cb-0872075ec6bf-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9nkc7\" (UID: \"2e79257d-05f2-41d6-97cb-0872075ec6bf\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9nkc7" Nov 29 08:22:09 crc kubenswrapper[4795]: I1129 08:22:09.530761 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4lq5\" (UniqueName: \"kubernetes.io/projected/2e79257d-05f2-41d6-97cb-0872075ec6bf-kube-api-access-z4lq5\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9nkc7\" (UID: \"2e79257d-05f2-41d6-97cb-0872075ec6bf\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9nkc7" Nov 29 08:22:09 crc kubenswrapper[4795]: I1129 08:22:09.690303 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9nkc7" Nov 29 08:22:10 crc kubenswrapper[4795]: I1129 08:22:10.228122 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-9nkc7"] Nov 29 08:22:10 crc kubenswrapper[4795]: I1129 08:22:10.292331 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9nkc7" event={"ID":"2e79257d-05f2-41d6-97cb-0872075ec6bf","Type":"ContainerStarted","Data":"f5ebeab0b93fbad4b4ee89652cd306dae5c8d21dc713465d8c1cfbcb96ac9dba"} Nov 29 08:22:11 crc kubenswrapper[4795]: I1129 08:22:11.304521 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9nkc7" event={"ID":"2e79257d-05f2-41d6-97cb-0872075ec6bf","Type":"ContainerStarted","Data":"2ec9841c4e8410685271a99b1e27a67635438c70e9262e47f61bc2afa21159e0"} Nov 29 08:22:11 crc kubenswrapper[4795]: I1129 08:22:11.331142 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9nkc7" podStartSLOduration=1.8847577 podStartE2EDuration="2.331123395s" podCreationTimestamp="2025-11-29 08:22:09 +0000 UTC" firstStartedPulling="2025-11-29 08:22:10.245278931 +0000 UTC m=+2576.220854721" lastFinishedPulling="2025-11-29 08:22:10.691644616 +0000 UTC m=+2576.667220416" observedRunningTime="2025-11-29 08:22:11.31964141 +0000 UTC m=+2577.295217210" watchObservedRunningTime="2025-11-29 08:22:11.331123395 +0000 UTC m=+2577.306699185" Nov 29 08:22:13 crc kubenswrapper[4795]: I1129 08:22:13.052668 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-whxgt"] Nov 29 08:22:13 crc kubenswrapper[4795]: I1129 08:22:13.067140 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-whxgt"] Nov 29 08:22:14 crc kubenswrapper[4795]: I1129 08:22:14.304144 4795 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="7981887b-f33c-421f-aac5-520c03b7a48a" path="/var/lib/kubelet/pods/7981887b-f33c-421f-aac5-520c03b7a48a/volumes" Nov 29 08:22:19 crc kubenswrapper[4795]: I1129 08:22:19.409963 4795 generic.go:334] "Generic (PLEG): container finished" podID="2e79257d-05f2-41d6-97cb-0872075ec6bf" containerID="2ec9841c4e8410685271a99b1e27a67635438c70e9262e47f61bc2afa21159e0" exitCode=0 Nov 29 08:22:19 crc kubenswrapper[4795]: I1129 08:22:19.410074 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9nkc7" event={"ID":"2e79257d-05f2-41d6-97cb-0872075ec6bf","Type":"ContainerDied","Data":"2ec9841c4e8410685271a99b1e27a67635438c70e9262e47f61bc2afa21159e0"} Nov 29 08:22:20 crc kubenswrapper[4795]: I1129 08:22:20.944940 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9nkc7" Nov 29 08:22:21 crc kubenswrapper[4795]: I1129 08:22:21.024934 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e79257d-05f2-41d6-97cb-0872075ec6bf-inventory\") pod \"2e79257d-05f2-41d6-97cb-0872075ec6bf\" (UID: \"2e79257d-05f2-41d6-97cb-0872075ec6bf\") " Nov 29 08:22:21 crc kubenswrapper[4795]: I1129 08:22:21.025205 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e79257d-05f2-41d6-97cb-0872075ec6bf-ssh-key\") pod \"2e79257d-05f2-41d6-97cb-0872075ec6bf\" (UID: \"2e79257d-05f2-41d6-97cb-0872075ec6bf\") " Nov 29 08:22:21 crc kubenswrapper[4795]: I1129 08:22:21.025463 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4lq5\" (UniqueName: \"kubernetes.io/projected/2e79257d-05f2-41d6-97cb-0872075ec6bf-kube-api-access-z4lq5\") pod \"2e79257d-05f2-41d6-97cb-0872075ec6bf\" (UID: \"2e79257d-05f2-41d6-97cb-0872075ec6bf\") " Nov 29 
08:22:21 crc kubenswrapper[4795]: I1129 08:22:21.038793 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e79257d-05f2-41d6-97cb-0872075ec6bf-kube-api-access-z4lq5" (OuterVolumeSpecName: "kube-api-access-z4lq5") pod "2e79257d-05f2-41d6-97cb-0872075ec6bf" (UID: "2e79257d-05f2-41d6-97cb-0872075ec6bf"). InnerVolumeSpecName "kube-api-access-z4lq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:22:21 crc kubenswrapper[4795]: I1129 08:22:21.057060 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e79257d-05f2-41d6-97cb-0872075ec6bf-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2e79257d-05f2-41d6-97cb-0872075ec6bf" (UID: "2e79257d-05f2-41d6-97cb-0872075ec6bf"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:22:21 crc kubenswrapper[4795]: I1129 08:22:21.066364 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e79257d-05f2-41d6-97cb-0872075ec6bf-inventory" (OuterVolumeSpecName: "inventory") pod "2e79257d-05f2-41d6-97cb-0872075ec6bf" (UID: "2e79257d-05f2-41d6-97cb-0872075ec6bf"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:22:21 crc kubenswrapper[4795]: I1129 08:22:21.128792 4795 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e79257d-05f2-41d6-97cb-0872075ec6bf-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 08:22:21 crc kubenswrapper[4795]: I1129 08:22:21.128832 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4lq5\" (UniqueName: \"kubernetes.io/projected/2e79257d-05f2-41d6-97cb-0872075ec6bf-kube-api-access-z4lq5\") on node \"crc\" DevicePath \"\"" Nov 29 08:22:21 crc kubenswrapper[4795]: I1129 08:22:21.128847 4795 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e79257d-05f2-41d6-97cb-0872075ec6bf-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 08:22:21 crc kubenswrapper[4795]: I1129 08:22:21.453686 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9nkc7" event={"ID":"2e79257d-05f2-41d6-97cb-0872075ec6bf","Type":"ContainerDied","Data":"f5ebeab0b93fbad4b4ee89652cd306dae5c8d21dc713465d8c1cfbcb96ac9dba"} Nov 29 08:22:21 crc kubenswrapper[4795]: I1129 08:22:21.453747 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5ebeab0b93fbad4b4ee89652cd306dae5c8d21dc713465d8c1cfbcb96ac9dba" Nov 29 08:22:21 crc kubenswrapper[4795]: I1129 08:22:21.453870 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9nkc7" Nov 29 08:22:21 crc kubenswrapper[4795]: I1129 08:22:21.528334 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mcx48"] Nov 29 08:22:21 crc kubenswrapper[4795]: E1129 08:22:21.528945 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e79257d-05f2-41d6-97cb-0872075ec6bf" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 29 08:22:21 crc kubenswrapper[4795]: I1129 08:22:21.528971 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e79257d-05f2-41d6-97cb-0872075ec6bf" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 29 08:22:21 crc kubenswrapper[4795]: I1129 08:22:21.529256 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e79257d-05f2-41d6-97cb-0872075ec6bf" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 29 08:22:21 crc kubenswrapper[4795]: I1129 08:22:21.530990 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mcx48" Nov 29 08:22:21 crc kubenswrapper[4795]: I1129 08:22:21.534389 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 08:22:21 crc kubenswrapper[4795]: I1129 08:22:21.534700 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 08:22:21 crc kubenswrapper[4795]: I1129 08:22:21.534859 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 08:22:21 crc kubenswrapper[4795]: I1129 08:22:21.535087 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nfhsg" Nov 29 08:22:21 crc kubenswrapper[4795]: I1129 08:22:21.541580 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mcx48"] Nov 29 08:22:21 crc kubenswrapper[4795]: I1129 08:22:21.642300 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7bdb8420-3c48-48f2-977d-f163da761f04-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mcx48\" (UID: \"7bdb8420-3c48-48f2-977d-f163da761f04\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mcx48" Nov 29 08:22:21 crc kubenswrapper[4795]: I1129 08:22:21.642736 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7bdb8420-3c48-48f2-977d-f163da761f04-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mcx48\" (UID: \"7bdb8420-3c48-48f2-977d-f163da761f04\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mcx48" Nov 29 08:22:21 crc kubenswrapper[4795]: I1129 08:22:21.643102 4795 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvgkk\" (UniqueName: \"kubernetes.io/projected/7bdb8420-3c48-48f2-977d-f163da761f04-kube-api-access-gvgkk\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mcx48\" (UID: \"7bdb8420-3c48-48f2-977d-f163da761f04\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mcx48" Nov 29 08:22:21 crc kubenswrapper[4795]: I1129 08:22:21.745081 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7bdb8420-3c48-48f2-977d-f163da761f04-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mcx48\" (UID: \"7bdb8420-3c48-48f2-977d-f163da761f04\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mcx48" Nov 29 08:22:21 crc kubenswrapper[4795]: I1129 08:22:21.745304 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvgkk\" (UniqueName: \"kubernetes.io/projected/7bdb8420-3c48-48f2-977d-f163da761f04-kube-api-access-gvgkk\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mcx48\" (UID: \"7bdb8420-3c48-48f2-977d-f163da761f04\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mcx48" Nov 29 08:22:21 crc kubenswrapper[4795]: I1129 08:22:21.745400 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7bdb8420-3c48-48f2-977d-f163da761f04-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mcx48\" (UID: \"7bdb8420-3c48-48f2-977d-f163da761f04\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mcx48" Nov 29 08:22:21 crc kubenswrapper[4795]: I1129 08:22:21.754275 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7bdb8420-3c48-48f2-977d-f163da761f04-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mcx48\" (UID: 
\"7bdb8420-3c48-48f2-977d-f163da761f04\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mcx48" Nov 29 08:22:21 crc kubenswrapper[4795]: I1129 08:22:21.754390 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7bdb8420-3c48-48f2-977d-f163da761f04-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mcx48\" (UID: \"7bdb8420-3c48-48f2-977d-f163da761f04\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mcx48" Nov 29 08:22:21 crc kubenswrapper[4795]: I1129 08:22:21.761003 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvgkk\" (UniqueName: \"kubernetes.io/projected/7bdb8420-3c48-48f2-977d-f163da761f04-kube-api-access-gvgkk\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mcx48\" (UID: \"7bdb8420-3c48-48f2-977d-f163da761f04\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mcx48" Nov 29 08:22:21 crc kubenswrapper[4795]: I1129 08:22:21.847305 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mcx48" Nov 29 08:22:22 crc kubenswrapper[4795]: I1129 08:22:22.407120 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mcx48"] Nov 29 08:22:22 crc kubenswrapper[4795]: I1129 08:22:22.466392 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mcx48" event={"ID":"7bdb8420-3c48-48f2-977d-f163da761f04","Type":"ContainerStarted","Data":"9759f72154498a3c226a68248ee62294f5ae60db31b1b7ab55598849236c7362"} Nov 29 08:22:23 crc kubenswrapper[4795]: I1129 08:22:23.478772 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mcx48" event={"ID":"7bdb8420-3c48-48f2-977d-f163da761f04","Type":"ContainerStarted","Data":"29eace995eed97448dd1b7fd748796ca259253863b843cf93dff32cfa7283ac3"} Nov 29 08:22:23 crc kubenswrapper[4795]: I1129 08:22:23.502051 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mcx48" podStartSLOduration=1.991601328 podStartE2EDuration="2.501585117s" podCreationTimestamp="2025-11-29 08:22:21 +0000 UTC" firstStartedPulling="2025-11-29 08:22:22.415569377 +0000 UTC m=+2588.391145167" lastFinishedPulling="2025-11-29 08:22:22.925553166 +0000 UTC m=+2588.901128956" observedRunningTime="2025-11-29 08:22:23.492410867 +0000 UTC m=+2589.467986657" watchObservedRunningTime="2025-11-29 08:22:23.501585117 +0000 UTC m=+2589.477160907" Nov 29 08:22:24 crc kubenswrapper[4795]: I1129 08:22:24.282670 4795 scope.go:117] "RemoveContainer" containerID="21e10122a79c2c2a3372f86102d58fee1c9be891bd6896b1affd1c2404af16fa" Nov 29 08:22:24 crc kubenswrapper[4795]: E1129 08:22:24.283306 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:22:32 crc kubenswrapper[4795]: I1129 08:22:32.253185 4795 scope.go:117] "RemoveContainer" containerID="08ee91d888f547b8d9b4ffe4dea9f5f3051cb8b8385a0596f100ada7252b6e67" Nov 29 08:22:33 crc kubenswrapper[4795]: I1129 08:22:33.749452 4795 generic.go:334] "Generic (PLEG): container finished" podID="7bdb8420-3c48-48f2-977d-f163da761f04" containerID="29eace995eed97448dd1b7fd748796ca259253863b843cf93dff32cfa7283ac3" exitCode=0 Nov 29 08:22:33 crc kubenswrapper[4795]: I1129 08:22:33.749508 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mcx48" event={"ID":"7bdb8420-3c48-48f2-977d-f163da761f04","Type":"ContainerDied","Data":"29eace995eed97448dd1b7fd748796ca259253863b843cf93dff32cfa7283ac3"} Nov 29 08:22:35 crc kubenswrapper[4795]: I1129 08:22:35.406521 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mcx48" Nov 29 08:22:35 crc kubenswrapper[4795]: I1129 08:22:35.517669 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7bdb8420-3c48-48f2-977d-f163da761f04-inventory\") pod \"7bdb8420-3c48-48f2-977d-f163da761f04\" (UID: \"7bdb8420-3c48-48f2-977d-f163da761f04\") " Nov 29 08:22:35 crc kubenswrapper[4795]: I1129 08:22:35.517771 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvgkk\" (UniqueName: \"kubernetes.io/projected/7bdb8420-3c48-48f2-977d-f163da761f04-kube-api-access-gvgkk\") pod \"7bdb8420-3c48-48f2-977d-f163da761f04\" (UID: \"7bdb8420-3c48-48f2-977d-f163da761f04\") " Nov 29 08:22:35 crc kubenswrapper[4795]: I1129 08:22:35.517881 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7bdb8420-3c48-48f2-977d-f163da761f04-ssh-key\") pod \"7bdb8420-3c48-48f2-977d-f163da761f04\" (UID: \"7bdb8420-3c48-48f2-977d-f163da761f04\") " Nov 29 08:22:35 crc kubenswrapper[4795]: I1129 08:22:35.526338 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bdb8420-3c48-48f2-977d-f163da761f04-kube-api-access-gvgkk" (OuterVolumeSpecName: "kube-api-access-gvgkk") pod "7bdb8420-3c48-48f2-977d-f163da761f04" (UID: "7bdb8420-3c48-48f2-977d-f163da761f04"). InnerVolumeSpecName "kube-api-access-gvgkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:22:35 crc kubenswrapper[4795]: I1129 08:22:35.560205 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bdb8420-3c48-48f2-977d-f163da761f04-inventory" (OuterVolumeSpecName: "inventory") pod "7bdb8420-3c48-48f2-977d-f163da761f04" (UID: "7bdb8420-3c48-48f2-977d-f163da761f04"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:22:35 crc kubenswrapper[4795]: I1129 08:22:35.567146 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bdb8420-3c48-48f2-977d-f163da761f04-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7bdb8420-3c48-48f2-977d-f163da761f04" (UID: "7bdb8420-3c48-48f2-977d-f163da761f04"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:22:35 crc kubenswrapper[4795]: I1129 08:22:35.620954 4795 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7bdb8420-3c48-48f2-977d-f163da761f04-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 08:22:35 crc kubenswrapper[4795]: I1129 08:22:35.620999 4795 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7bdb8420-3c48-48f2-977d-f163da761f04-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 08:22:35 crc kubenswrapper[4795]: I1129 08:22:35.621013 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvgkk\" (UniqueName: \"kubernetes.io/projected/7bdb8420-3c48-48f2-977d-f163da761f04-kube-api-access-gvgkk\") on node \"crc\" DevicePath \"\"" Nov 29 08:22:35 crc kubenswrapper[4795]: I1129 08:22:35.773870 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mcx48" event={"ID":"7bdb8420-3c48-48f2-977d-f163da761f04","Type":"ContainerDied","Data":"9759f72154498a3c226a68248ee62294f5ae60db31b1b7ab55598849236c7362"} Nov 29 08:22:35 crc kubenswrapper[4795]: I1129 08:22:35.774062 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9759f72154498a3c226a68248ee62294f5ae60db31b1b7ab55598849236c7362" Nov 29 08:22:35 crc kubenswrapper[4795]: I1129 08:22:35.774105 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mcx48" Nov 29 08:22:35 crc kubenswrapper[4795]: I1129 08:22:35.853741 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr"] Nov 29 08:22:35 crc kubenswrapper[4795]: E1129 08:22:35.854225 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bdb8420-3c48-48f2-977d-f163da761f04" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 29 08:22:35 crc kubenswrapper[4795]: I1129 08:22:35.854245 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bdb8420-3c48-48f2-977d-f163da761f04" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 29 08:22:35 crc kubenswrapper[4795]: I1129 08:22:35.854505 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bdb8420-3c48-48f2-977d-f163da761f04" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 29 08:22:35 crc kubenswrapper[4795]: I1129 08:22:35.855278 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:35 crc kubenswrapper[4795]: I1129 08:22:35.858276 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 08:22:35 crc kubenswrapper[4795]: I1129 08:22:35.858755 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Nov 29 08:22:35 crc kubenswrapper[4795]: I1129 08:22:35.858921 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 08:22:35 crc kubenswrapper[4795]: I1129 08:22:35.859294 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 08:22:35 crc kubenswrapper[4795]: I1129 08:22:35.859303 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Nov 29 08:22:35 crc kubenswrapper[4795]: I1129 08:22:35.859342 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Nov 29 08:22:35 crc kubenswrapper[4795]: I1129 08:22:35.859614 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Nov 29 08:22:35 crc kubenswrapper[4795]: I1129 08:22:35.859624 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Nov 29 08:22:35 crc kubenswrapper[4795]: I1129 08:22:35.861927 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nfhsg" Nov 29 08:22:35 crc kubenswrapper[4795]: I1129 08:22:35.871540 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr"] Nov 29 08:22:36 crc 
kubenswrapper[4795]: I1129 08:22:36.029487 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.029781 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqb4v\" (UniqueName: \"kubernetes.io/projected/96e2c266-8570-40c8-adbc-d4939bde4ad9-kube-api-access-jqb4v\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.029824 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96e2c266-8570-40c8-adbc-d4939bde4ad9-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.029848 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96e2c266-8570-40c8-adbc-d4939bde4ad9-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.029865 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.029902 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.029939 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.030022 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96e2c266-8570-40c8-adbc-d4939bde4ad9-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 
08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.030040 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.030221 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.030273 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96e2c266-8570-40c8-adbc-d4939bde4ad9-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.030294 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.030405 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96e2c266-8570-40c8-adbc-d4939bde4ad9-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.030515 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.030563 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.031720 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.134185 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96e2c266-8570-40c8-adbc-d4939bde4ad9-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.134453 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96e2c266-8570-40c8-adbc-d4939bde4ad9-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.134523 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.134669 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc 
kubenswrapper[4795]: I1129 08:22:36.134743 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.134856 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96e2c266-8570-40c8-adbc-d4939bde4ad9-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.134932 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.135014 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 
08:22:36.135084 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96e2c266-8570-40c8-adbc-d4939bde4ad9-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.135147 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.135242 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96e2c266-8570-40c8-adbc-d4939bde4ad9-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.135318 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.135392 4795 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.135498 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.135570 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.135693 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqb4v\" (UniqueName: \"kubernetes.io/projected/96e2c266-8570-40c8-adbc-d4939bde4ad9-kube-api-access-jqb4v\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.139836 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.140257 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96e2c266-8570-40c8-adbc-d4939bde4ad9-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.140998 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.141821 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.142299 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-ssh-key\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.142951 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.144868 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96e2c266-8570-40c8-adbc-d4939bde4ad9-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.146091 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96e2c266-8570-40c8-adbc-d4939bde4ad9-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.146417 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.146923 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96e2c266-8570-40c8-adbc-d4939bde4ad9-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.147102 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.147244 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.147634 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 
08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.148731 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.151375 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96e2c266-8570-40c8-adbc-d4939bde4ad9-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.154393 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqb4v\" (UniqueName: \"kubernetes.io/projected/96e2c266-8570-40c8-adbc-d4939bde4ad9-kube-api-access-jqb4v\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.176357 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.276697 4795 scope.go:117] "RemoveContainer" containerID="21e10122a79c2c2a3372f86102d58fee1c9be891bd6896b1affd1c2404af16fa" Nov 29 08:22:36 crc kubenswrapper[4795]: E1129 08:22:36.276967 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:22:36 crc kubenswrapper[4795]: W1129 08:22:36.719751 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96e2c266_8570_40c8_adbc_d4939bde4ad9.slice/crio-73b8bc841467978e07c9e7c84788374d5185a90e2f9309d3394aa05629bcbd46 WatchSource:0}: Error finding container 73b8bc841467978e07c9e7c84788374d5185a90e2f9309d3394aa05629bcbd46: Status 404 returned error can't find the container with id 73b8bc841467978e07c9e7c84788374d5185a90e2f9309d3394aa05629bcbd46 Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.723172 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr"] Nov 29 08:22:36 crc kubenswrapper[4795]: I1129 08:22:36.786941 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" event={"ID":"96e2c266-8570-40c8-adbc-d4939bde4ad9","Type":"ContainerStarted","Data":"73b8bc841467978e07c9e7c84788374d5185a90e2f9309d3394aa05629bcbd46"} Nov 29 08:22:37 crc kubenswrapper[4795]: I1129 08:22:37.822532 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" event={"ID":"96e2c266-8570-40c8-adbc-d4939bde4ad9","Type":"ContainerStarted","Data":"a6a3d4e2cee25d52e1be87cd39ef61825b03ffe471853492d078d016129a3236"} Nov 29 08:22:37 crc kubenswrapper[4795]: I1129 08:22:37.857562 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" podStartSLOduration=2.259210716 podStartE2EDuration="2.857545079s" podCreationTimestamp="2025-11-29 08:22:35 +0000 UTC" firstStartedPulling="2025-11-29 08:22:36.72181938 +0000 UTC m=+2602.697395160" lastFinishedPulling="2025-11-29 08:22:37.320153723 +0000 UTC m=+2603.295729523" observedRunningTime="2025-11-29 08:22:37.844687744 +0000 UTC m=+2603.820263534" watchObservedRunningTime="2025-11-29 08:22:37.857545079 +0000 UTC m=+2603.833120869" Nov 29 08:22:47 crc kubenswrapper[4795]: I1129 08:22:47.277274 4795 scope.go:117] "RemoveContainer" containerID="21e10122a79c2c2a3372f86102d58fee1c9be891bd6896b1affd1c2404af16fa" Nov 29 08:22:47 crc kubenswrapper[4795]: E1129 08:22:47.282828 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:22:59 crc kubenswrapper[4795]: I1129 08:22:59.044499 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-6wfqc"] Nov 29 08:22:59 crc kubenswrapper[4795]: I1129 08:22:59.084891 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-6wfqc"] Nov 29 08:22:59 crc kubenswrapper[4795]: I1129 08:22:59.276818 4795 scope.go:117] "RemoveContainer" 
containerID="21e10122a79c2c2a3372f86102d58fee1c9be891bd6896b1affd1c2404af16fa" Nov 29 08:22:59 crc kubenswrapper[4795]: E1129 08:22:59.277753 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:23:00 crc kubenswrapper[4795]: I1129 08:23:00.291526 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be948c99-208e-4f5c-8ab7-5971d1efb06e" path="/var/lib/kubelet/pods/be948c99-208e-4f5c-8ab7-5971d1efb06e/volumes" Nov 29 08:23:10 crc kubenswrapper[4795]: I1129 08:23:10.276562 4795 scope.go:117] "RemoveContainer" containerID="21e10122a79c2c2a3372f86102d58fee1c9be891bd6896b1affd1c2404af16fa" Nov 29 08:23:10 crc kubenswrapper[4795]: E1129 08:23:10.277575 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:23:22 crc kubenswrapper[4795]: I1129 08:23:22.276123 4795 scope.go:117] "RemoveContainer" containerID="21e10122a79c2c2a3372f86102d58fee1c9be891bd6896b1affd1c2404af16fa" Nov 29 08:23:22 crc kubenswrapper[4795]: E1129 08:23:22.277144 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:23:25 crc kubenswrapper[4795]: I1129 08:23:25.367712 4795 generic.go:334] "Generic (PLEG): container finished" podID="96e2c266-8570-40c8-adbc-d4939bde4ad9" containerID="a6a3d4e2cee25d52e1be87cd39ef61825b03ffe471853492d078d016129a3236" exitCode=0 Nov 29 08:23:25 crc kubenswrapper[4795]: I1129 08:23:25.367802 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" event={"ID":"96e2c266-8570-40c8-adbc-d4939bde4ad9","Type":"ContainerDied","Data":"a6a3d4e2cee25d52e1be87cd39ef61825b03ffe471853492d078d016129a3236"} Nov 29 08:23:26 crc kubenswrapper[4795]: I1129 08:23:26.861408 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:23:26 crc kubenswrapper[4795]: I1129 08:23:26.984292 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96e2c266-8570-40c8-adbc-d4939bde4ad9-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"96e2c266-8570-40c8-adbc-d4939bde4ad9\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " Nov 29 08:23:26 crc kubenswrapper[4795]: I1129 08:23:26.984350 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqb4v\" (UniqueName: \"kubernetes.io/projected/96e2c266-8570-40c8-adbc-d4939bde4ad9-kube-api-access-jqb4v\") pod \"96e2c266-8570-40c8-adbc-d4939bde4ad9\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " Nov 29 08:23:26 crc kubenswrapper[4795]: I1129 08:23:26.984420 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-libvirt-combined-ca-bundle\") pod \"96e2c266-8570-40c8-adbc-d4939bde4ad9\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " Nov 29 08:23:26 crc kubenswrapper[4795]: I1129 08:23:26.984440 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96e2c266-8570-40c8-adbc-d4939bde4ad9-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"96e2c266-8570-40c8-adbc-d4939bde4ad9\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " Nov 29 08:23:26 crc kubenswrapper[4795]: I1129 08:23:26.984470 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-bootstrap-combined-ca-bundle\") pod \"96e2c266-8570-40c8-adbc-d4939bde4ad9\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " Nov 29 08:23:26 crc kubenswrapper[4795]: I1129 08:23:26.984535 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-telemetry-combined-ca-bundle\") pod \"96e2c266-8570-40c8-adbc-d4939bde4ad9\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " Nov 29 08:23:26 crc kubenswrapper[4795]: I1129 08:23:26.984560 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-ssh-key\") pod \"96e2c266-8570-40c8-adbc-d4939bde4ad9\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " Nov 29 08:23:26 crc kubenswrapper[4795]: I1129 08:23:26.984646 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-ovn-combined-ca-bundle\") pod \"96e2c266-8570-40c8-adbc-d4939bde4ad9\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " Nov 29 08:23:26 crc kubenswrapper[4795]: I1129 08:23:26.985374 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96e2c266-8570-40c8-adbc-d4939bde4ad9-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"96e2c266-8570-40c8-adbc-d4939bde4ad9\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " Nov 29 08:23:26 crc kubenswrapper[4795]: I1129 08:23:26.985415 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-inventory\") pod \"96e2c266-8570-40c8-adbc-d4939bde4ad9\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " Nov 29 08:23:26 crc kubenswrapper[4795]: I1129 08:23:26.985453 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96e2c266-8570-40c8-adbc-d4939bde4ad9-openstack-edpm-ipam-ovn-default-certs-0\") pod \"96e2c266-8570-40c8-adbc-d4939bde4ad9\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " Nov 29 08:23:26 crc kubenswrapper[4795]: I1129 08:23:26.985566 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-telemetry-power-monitoring-combined-ca-bundle\") pod \"96e2c266-8570-40c8-adbc-d4939bde4ad9\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " Nov 29 08:23:26 crc kubenswrapper[4795]: I1129 08:23:26.985666 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-nova-combined-ca-bundle\") pod \"96e2c266-8570-40c8-adbc-d4939bde4ad9\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " Nov 29 08:23:26 crc kubenswrapper[4795]: I1129 08:23:26.985705 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-repo-setup-combined-ca-bundle\") pod \"96e2c266-8570-40c8-adbc-d4939bde4ad9\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " Nov 29 08:23:26 crc kubenswrapper[4795]: I1129 08:23:26.985757 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96e2c266-8570-40c8-adbc-d4939bde4ad9-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"96e2c266-8570-40c8-adbc-d4939bde4ad9\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " Nov 29 08:23:26 crc kubenswrapper[4795]: I1129 08:23:26.985782 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-neutron-metadata-combined-ca-bundle\") pod \"96e2c266-8570-40c8-adbc-d4939bde4ad9\" (UID: \"96e2c266-8570-40c8-adbc-d4939bde4ad9\") " Nov 29 08:23:26 crc kubenswrapper[4795]: I1129 08:23:26.991618 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "96e2c266-8570-40c8-adbc-d4939bde4ad9" (UID: "96e2c266-8570-40c8-adbc-d4939bde4ad9"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:23:26 crc kubenswrapper[4795]: I1129 08:23:26.991665 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96e2c266-8570-40c8-adbc-d4939bde4ad9-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "96e2c266-8570-40c8-adbc-d4939bde4ad9" (UID: "96e2c266-8570-40c8-adbc-d4939bde4ad9"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:23:26 crc kubenswrapper[4795]: I1129 08:23:26.992400 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "96e2c266-8570-40c8-adbc-d4939bde4ad9" (UID: "96e2c266-8570-40c8-adbc-d4939bde4ad9"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:23:26 crc kubenswrapper[4795]: I1129 08:23:26.992850 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96e2c266-8570-40c8-adbc-d4939bde4ad9-kube-api-access-jqb4v" (OuterVolumeSpecName: "kube-api-access-jqb4v") pod "96e2c266-8570-40c8-adbc-d4939bde4ad9" (UID: "96e2c266-8570-40c8-adbc-d4939bde4ad9"). InnerVolumeSpecName "kube-api-access-jqb4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:23:26 crc kubenswrapper[4795]: I1129 08:23:26.993046 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96e2c266-8570-40c8-adbc-d4939bde4ad9-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "96e2c266-8570-40c8-adbc-d4939bde4ad9" (UID: "96e2c266-8570-40c8-adbc-d4939bde4ad9"). 
InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:23:26 crc kubenswrapper[4795]: I1129 08:23:26.993560 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "96e2c266-8570-40c8-adbc-d4939bde4ad9" (UID: "96e2c266-8570-40c8-adbc-d4939bde4ad9"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:23:26 crc kubenswrapper[4795]: I1129 08:23:26.994270 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "96e2c266-8570-40c8-adbc-d4939bde4ad9" (UID: "96e2c266-8570-40c8-adbc-d4939bde4ad9"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:23:26 crc kubenswrapper[4795]: I1129 08:23:26.995046 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96e2c266-8570-40c8-adbc-d4939bde4ad9-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "96e2c266-8570-40c8-adbc-d4939bde4ad9" (UID: "96e2c266-8570-40c8-adbc-d4939bde4ad9"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:23:26 crc kubenswrapper[4795]: I1129 08:23:26.996087 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "96e2c266-8570-40c8-adbc-d4939bde4ad9" (UID: "96e2c266-8570-40c8-adbc-d4939bde4ad9"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:23:26 crc kubenswrapper[4795]: I1129 08:23:26.997033 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "96e2c266-8570-40c8-adbc-d4939bde4ad9" (UID: "96e2c266-8570-40c8-adbc-d4939bde4ad9"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:23:26 crc kubenswrapper[4795]: I1129 08:23:26.997541 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "96e2c266-8570-40c8-adbc-d4939bde4ad9" (UID: "96e2c266-8570-40c8-adbc-d4939bde4ad9"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:23:26 crc kubenswrapper[4795]: I1129 08:23:26.999155 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96e2c266-8570-40c8-adbc-d4939bde4ad9-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "96e2c266-8570-40c8-adbc-d4939bde4ad9" (UID: "96e2c266-8570-40c8-adbc-d4939bde4ad9"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:23:26 crc kubenswrapper[4795]: I1129 08:23:26.999307 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96e2c266-8570-40c8-adbc-d4939bde4ad9-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "96e2c266-8570-40c8-adbc-d4939bde4ad9" (UID: "96e2c266-8570-40c8-adbc-d4939bde4ad9"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.002289 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "96e2c266-8570-40c8-adbc-d4939bde4ad9" (UID: "96e2c266-8570-40c8-adbc-d4939bde4ad9"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.022655 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-inventory" (OuterVolumeSpecName: "inventory") pod "96e2c266-8570-40c8-adbc-d4939bde4ad9" (UID: "96e2c266-8570-40c8-adbc-d4939bde4ad9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.025159 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "96e2c266-8570-40c8-adbc-d4939bde4ad9" (UID: "96e2c266-8570-40c8-adbc-d4939bde4ad9"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.088909 4795 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96e2c266-8570-40c8-adbc-d4939bde4ad9-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.088946 4795 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.088956 4795 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.088964 4795 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.088974 4795 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.088982 4795 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.088990 4795 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/96e2c266-8570-40c8-adbc-d4939bde4ad9-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.089002 4795 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.089011 4795 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96e2c266-8570-40c8-adbc-d4939bde4ad9-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.089021 4795 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.089032 4795 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.089041 4795 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.089049 4795 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96e2c266-8570-40c8-adbc-d4939bde4ad9-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 29 08:23:27 crc 
kubenswrapper[4795]: I1129 08:23:27.089058 4795 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e2c266-8570-40c8-adbc-d4939bde4ad9-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.089071 4795 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96e2c266-8570-40c8-adbc-d4939bde4ad9-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.089081 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqb4v\" (UniqueName: \"kubernetes.io/projected/96e2c266-8570-40c8-adbc-d4939bde4ad9-kube-api-access-jqb4v\") on node \"crc\" DevicePath \"\"" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.394357 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" event={"ID":"96e2c266-8570-40c8-adbc-d4939bde4ad9","Type":"ContainerDied","Data":"73b8bc841467978e07c9e7c84788374d5185a90e2f9309d3394aa05629bcbd46"} Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.394408 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73b8bc841467978e07c9e7c84788374d5185a90e2f9309d3394aa05629bcbd46" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.394464 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.514164 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrqcm"] Nov 29 08:23:27 crc kubenswrapper[4795]: E1129 08:23:27.515201 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96e2c266-8570-40c8-adbc-d4939bde4ad9" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.515232 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="96e2c266-8570-40c8-adbc-d4939bde4ad9" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.515575 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="96e2c266-8570-40c8-adbc-d4939bde4ad9" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.516796 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrqcm" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.519189 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.519212 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.519252 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.519142 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.519632 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nfhsg" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.525215 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrqcm"] Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.604531 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a1dfc06-678d-4418-a57f-7a9a2ba2c441-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mrqcm\" (UID: \"1a1dfc06-678d-4418-a57f-7a9a2ba2c441\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrqcm" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.604764 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a1dfc06-678d-4418-a57f-7a9a2ba2c441-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mrqcm\" (UID: \"1a1dfc06-678d-4418-a57f-7a9a2ba2c441\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrqcm" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.604828 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1a1dfc06-678d-4418-a57f-7a9a2ba2c441-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mrqcm\" (UID: \"1a1dfc06-678d-4418-a57f-7a9a2ba2c441\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrqcm" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.605386 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdl8z\" (UniqueName: \"kubernetes.io/projected/1a1dfc06-678d-4418-a57f-7a9a2ba2c441-kube-api-access-sdl8z\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mrqcm\" (UID: \"1a1dfc06-678d-4418-a57f-7a9a2ba2c441\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrqcm" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.605838 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1a1dfc06-678d-4418-a57f-7a9a2ba2c441-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mrqcm\" (UID: \"1a1dfc06-678d-4418-a57f-7a9a2ba2c441\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrqcm" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.708256 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a1dfc06-678d-4418-a57f-7a9a2ba2c441-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mrqcm\" (UID: \"1a1dfc06-678d-4418-a57f-7a9a2ba2c441\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrqcm" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.708311 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/1a1dfc06-678d-4418-a57f-7a9a2ba2c441-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mrqcm\" (UID: \"1a1dfc06-678d-4418-a57f-7a9a2ba2c441\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrqcm" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.708425 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdl8z\" (UniqueName: \"kubernetes.io/projected/1a1dfc06-678d-4418-a57f-7a9a2ba2c441-kube-api-access-sdl8z\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mrqcm\" (UID: \"1a1dfc06-678d-4418-a57f-7a9a2ba2c441\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrqcm" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.708498 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1a1dfc06-678d-4418-a57f-7a9a2ba2c441-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mrqcm\" (UID: \"1a1dfc06-678d-4418-a57f-7a9a2ba2c441\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrqcm" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.708576 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a1dfc06-678d-4418-a57f-7a9a2ba2c441-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mrqcm\" (UID: \"1a1dfc06-678d-4418-a57f-7a9a2ba2c441\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrqcm" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.709741 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1a1dfc06-678d-4418-a57f-7a9a2ba2c441-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mrqcm\" (UID: \"1a1dfc06-678d-4418-a57f-7a9a2ba2c441\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrqcm" Nov 29 08:23:27 crc 
kubenswrapper[4795]: I1129 08:23:27.712572 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a1dfc06-678d-4418-a57f-7a9a2ba2c441-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mrqcm\" (UID: \"1a1dfc06-678d-4418-a57f-7a9a2ba2c441\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrqcm" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.713574 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1a1dfc06-678d-4418-a57f-7a9a2ba2c441-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mrqcm\" (UID: \"1a1dfc06-678d-4418-a57f-7a9a2ba2c441\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrqcm" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.721263 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a1dfc06-678d-4418-a57f-7a9a2ba2c441-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mrqcm\" (UID: \"1a1dfc06-678d-4418-a57f-7a9a2ba2c441\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrqcm" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.740542 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdl8z\" (UniqueName: \"kubernetes.io/projected/1a1dfc06-678d-4418-a57f-7a9a2ba2c441-kube-api-access-sdl8z\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mrqcm\" (UID: \"1a1dfc06-678d-4418-a57f-7a9a2ba2c441\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrqcm" Nov 29 08:23:27 crc kubenswrapper[4795]: I1129 08:23:27.845676 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrqcm" Nov 29 08:23:28 crc kubenswrapper[4795]: I1129 08:23:28.434963 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrqcm"] Nov 29 08:23:29 crc kubenswrapper[4795]: I1129 08:23:29.416054 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrqcm" event={"ID":"1a1dfc06-678d-4418-a57f-7a9a2ba2c441","Type":"ContainerStarted","Data":"0c9b2fa227c05efd9bb30f1d45581c3db23e074e2eab80061da4161faef88a6d"} Nov 29 08:23:29 crc kubenswrapper[4795]: I1129 08:23:29.416654 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrqcm" event={"ID":"1a1dfc06-678d-4418-a57f-7a9a2ba2c441","Type":"ContainerStarted","Data":"ccc119041e9d009a807ce0bee7bc20ab60581fd12e1d7dcaa86d9db136437ca3"} Nov 29 08:23:29 crc kubenswrapper[4795]: I1129 08:23:29.439135 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrqcm" podStartSLOduration=1.84012116 podStartE2EDuration="2.439111061s" podCreationTimestamp="2025-11-29 08:23:27 +0000 UTC" firstStartedPulling="2025-11-29 08:23:28.443522506 +0000 UTC m=+2654.419098296" lastFinishedPulling="2025-11-29 08:23:29.042512407 +0000 UTC m=+2655.018088197" observedRunningTime="2025-11-29 08:23:29.429489478 +0000 UTC m=+2655.405065288" watchObservedRunningTime="2025-11-29 08:23:29.439111061 +0000 UTC m=+2655.414686871" Nov 29 08:23:32 crc kubenswrapper[4795]: I1129 08:23:32.373806 4795 scope.go:117] "RemoveContainer" containerID="d71c348517e9f3873f2cca587e20412d2726d5ffb0522eae784a5d24050f3577" Nov 29 08:23:33 crc kubenswrapper[4795]: I1129 08:23:33.256551 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s5rrp"] Nov 29 08:23:33 crc kubenswrapper[4795]: I1129 08:23:33.259891 4795 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s5rrp" Nov 29 08:23:33 crc kubenswrapper[4795]: I1129 08:23:33.277076 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s5rrp"] Nov 29 08:23:33 crc kubenswrapper[4795]: I1129 08:23:33.393617 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64922736-ba13-4d85-81da-62dcc27771fd-utilities\") pod \"community-operators-s5rrp\" (UID: \"64922736-ba13-4d85-81da-62dcc27771fd\") " pod="openshift-marketplace/community-operators-s5rrp" Nov 29 08:23:33 crc kubenswrapper[4795]: I1129 08:23:33.394285 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-784kj\" (UniqueName: \"kubernetes.io/projected/64922736-ba13-4d85-81da-62dcc27771fd-kube-api-access-784kj\") pod \"community-operators-s5rrp\" (UID: \"64922736-ba13-4d85-81da-62dcc27771fd\") " pod="openshift-marketplace/community-operators-s5rrp" Nov 29 08:23:33 crc kubenswrapper[4795]: I1129 08:23:33.394815 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64922736-ba13-4d85-81da-62dcc27771fd-catalog-content\") pod \"community-operators-s5rrp\" (UID: \"64922736-ba13-4d85-81da-62dcc27771fd\") " pod="openshift-marketplace/community-operators-s5rrp" Nov 29 08:23:33 crc kubenswrapper[4795]: I1129 08:23:33.496947 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64922736-ba13-4d85-81da-62dcc27771fd-utilities\") pod \"community-operators-s5rrp\" (UID: \"64922736-ba13-4d85-81da-62dcc27771fd\") " pod="openshift-marketplace/community-operators-s5rrp" Nov 29 08:23:33 crc kubenswrapper[4795]: I1129 08:23:33.497065 4795 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-784kj\" (UniqueName: \"kubernetes.io/projected/64922736-ba13-4d85-81da-62dcc27771fd-kube-api-access-784kj\") pod \"community-operators-s5rrp\" (UID: \"64922736-ba13-4d85-81da-62dcc27771fd\") " pod="openshift-marketplace/community-operators-s5rrp" Nov 29 08:23:33 crc kubenswrapper[4795]: I1129 08:23:33.497146 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64922736-ba13-4d85-81da-62dcc27771fd-catalog-content\") pod \"community-operators-s5rrp\" (UID: \"64922736-ba13-4d85-81da-62dcc27771fd\") " pod="openshift-marketplace/community-operators-s5rrp" Nov 29 08:23:33 crc kubenswrapper[4795]: I1129 08:23:33.497928 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64922736-ba13-4d85-81da-62dcc27771fd-catalog-content\") pod \"community-operators-s5rrp\" (UID: \"64922736-ba13-4d85-81da-62dcc27771fd\") " pod="openshift-marketplace/community-operators-s5rrp" Nov 29 08:23:33 crc kubenswrapper[4795]: I1129 08:23:33.498297 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64922736-ba13-4d85-81da-62dcc27771fd-utilities\") pod \"community-operators-s5rrp\" (UID: \"64922736-ba13-4d85-81da-62dcc27771fd\") " pod="openshift-marketplace/community-operators-s5rrp" Nov 29 08:23:33 crc kubenswrapper[4795]: I1129 08:23:33.519775 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-784kj\" (UniqueName: \"kubernetes.io/projected/64922736-ba13-4d85-81da-62dcc27771fd-kube-api-access-784kj\") pod \"community-operators-s5rrp\" (UID: \"64922736-ba13-4d85-81da-62dcc27771fd\") " pod="openshift-marketplace/community-operators-s5rrp" Nov 29 08:23:33 crc kubenswrapper[4795]: I1129 08:23:33.599621 4795 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s5rrp" Nov 29 08:23:34 crc kubenswrapper[4795]: I1129 08:23:34.188351 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s5rrp"] Nov 29 08:23:34 crc kubenswrapper[4795]: I1129 08:23:34.474986 4795 generic.go:334] "Generic (PLEG): container finished" podID="64922736-ba13-4d85-81da-62dcc27771fd" containerID="23784a69386e91e071bee744635783d77cf49ae5617b0702dcc67e99bcd8d3a9" exitCode=0 Nov 29 08:23:34 crc kubenswrapper[4795]: I1129 08:23:34.475045 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5rrp" event={"ID":"64922736-ba13-4d85-81da-62dcc27771fd","Type":"ContainerDied","Data":"23784a69386e91e071bee744635783d77cf49ae5617b0702dcc67e99bcd8d3a9"} Nov 29 08:23:34 crc kubenswrapper[4795]: I1129 08:23:34.475365 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5rrp" event={"ID":"64922736-ba13-4d85-81da-62dcc27771fd","Type":"ContainerStarted","Data":"6b28a0f11c7406e1c6da28a14a02e77c09e139d0e667100e8574282854b6036e"} Nov 29 08:23:35 crc kubenswrapper[4795]: I1129 08:23:35.495887 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5rrp" event={"ID":"64922736-ba13-4d85-81da-62dcc27771fd","Type":"ContainerStarted","Data":"dadc9080b4724df44f20e1e6bf9f0df4d4fad4541305ffa5446369258966e871"} Nov 29 08:23:36 crc kubenswrapper[4795]: I1129 08:23:36.276740 4795 scope.go:117] "RemoveContainer" containerID="21e10122a79c2c2a3372f86102d58fee1c9be891bd6896b1affd1c2404af16fa" Nov 29 08:23:36 crc kubenswrapper[4795]: E1129 08:23:36.277069 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:23:37 crc kubenswrapper[4795]: I1129 08:23:37.785385 4795 generic.go:334] "Generic (PLEG): container finished" podID="64922736-ba13-4d85-81da-62dcc27771fd" containerID="dadc9080b4724df44f20e1e6bf9f0df4d4fad4541305ffa5446369258966e871" exitCode=0 Nov 29 08:23:37 crc kubenswrapper[4795]: I1129 08:23:37.785532 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5rrp" event={"ID":"64922736-ba13-4d85-81da-62dcc27771fd","Type":"ContainerDied","Data":"dadc9080b4724df44f20e1e6bf9f0df4d4fad4541305ffa5446369258966e871"} Nov 29 08:23:38 crc kubenswrapper[4795]: I1129 08:23:38.802395 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5rrp" event={"ID":"64922736-ba13-4d85-81da-62dcc27771fd","Type":"ContainerStarted","Data":"6fb2302bc77e66a5dc00eb3aff8861b7772d1c791d4a1ab60b625f7fb64d98ff"} Nov 29 08:23:42 crc kubenswrapper[4795]: I1129 08:23:42.982833 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s5rrp" podStartSLOduration=6.262210213 podStartE2EDuration="9.982815365s" podCreationTimestamp="2025-11-29 08:23:33 +0000 UTC" firstStartedPulling="2025-11-29 08:23:34.47952622 +0000 UTC m=+2660.455102010" lastFinishedPulling="2025-11-29 08:23:38.200131362 +0000 UTC m=+2664.175707162" observedRunningTime="2025-11-29 08:23:38.837004618 +0000 UTC m=+2664.812580408" watchObservedRunningTime="2025-11-29 08:23:42.982815365 +0000 UTC m=+2668.958391145" Nov 29 08:23:42 crc kubenswrapper[4795]: I1129 08:23:42.986004 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-h58nq"] Nov 29 08:23:42 crc kubenswrapper[4795]: I1129 08:23:42.990341 4795 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h58nq" Nov 29 08:23:42 crc kubenswrapper[4795]: I1129 08:23:42.993841 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07362978-f752-4a5c-9ad9-f7c86f8f0d5e-utilities\") pod \"redhat-operators-h58nq\" (UID: \"07362978-f752-4a5c-9ad9-f7c86f8f0d5e\") " pod="openshift-marketplace/redhat-operators-h58nq" Nov 29 08:23:42 crc kubenswrapper[4795]: I1129 08:23:42.993891 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kx4z\" (UniqueName: \"kubernetes.io/projected/07362978-f752-4a5c-9ad9-f7c86f8f0d5e-kube-api-access-7kx4z\") pod \"redhat-operators-h58nq\" (UID: \"07362978-f752-4a5c-9ad9-f7c86f8f0d5e\") " pod="openshift-marketplace/redhat-operators-h58nq" Nov 29 08:23:42 crc kubenswrapper[4795]: I1129 08:23:42.993978 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07362978-f752-4a5c-9ad9-f7c86f8f0d5e-catalog-content\") pod \"redhat-operators-h58nq\" (UID: \"07362978-f752-4a5c-9ad9-f7c86f8f0d5e\") " pod="openshift-marketplace/redhat-operators-h58nq" Nov 29 08:23:43 crc kubenswrapper[4795]: I1129 08:23:43.010879 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h58nq"] Nov 29 08:23:43 crc kubenswrapper[4795]: I1129 08:23:43.098351 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07362978-f752-4a5c-9ad9-f7c86f8f0d5e-utilities\") pod \"redhat-operators-h58nq\" (UID: \"07362978-f752-4a5c-9ad9-f7c86f8f0d5e\") " pod="openshift-marketplace/redhat-operators-h58nq" Nov 29 08:23:43 crc kubenswrapper[4795]: I1129 08:23:43.098426 4795 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7kx4z\" (UniqueName: \"kubernetes.io/projected/07362978-f752-4a5c-9ad9-f7c86f8f0d5e-kube-api-access-7kx4z\") pod \"redhat-operators-h58nq\" (UID: \"07362978-f752-4a5c-9ad9-f7c86f8f0d5e\") " pod="openshift-marketplace/redhat-operators-h58nq" Nov 29 08:23:43 crc kubenswrapper[4795]: I1129 08:23:43.098560 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07362978-f752-4a5c-9ad9-f7c86f8f0d5e-catalog-content\") pod \"redhat-operators-h58nq\" (UID: \"07362978-f752-4a5c-9ad9-f7c86f8f0d5e\") " pod="openshift-marketplace/redhat-operators-h58nq" Nov 29 08:23:43 crc kubenswrapper[4795]: I1129 08:23:43.099146 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07362978-f752-4a5c-9ad9-f7c86f8f0d5e-catalog-content\") pod \"redhat-operators-h58nq\" (UID: \"07362978-f752-4a5c-9ad9-f7c86f8f0d5e\") " pod="openshift-marketplace/redhat-operators-h58nq" Nov 29 08:23:43 crc kubenswrapper[4795]: I1129 08:23:43.099147 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07362978-f752-4a5c-9ad9-f7c86f8f0d5e-utilities\") pod \"redhat-operators-h58nq\" (UID: \"07362978-f752-4a5c-9ad9-f7c86f8f0d5e\") " pod="openshift-marketplace/redhat-operators-h58nq" Nov 29 08:23:43 crc kubenswrapper[4795]: I1129 08:23:43.126176 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kx4z\" (UniqueName: \"kubernetes.io/projected/07362978-f752-4a5c-9ad9-f7c86f8f0d5e-kube-api-access-7kx4z\") pod \"redhat-operators-h58nq\" (UID: \"07362978-f752-4a5c-9ad9-f7c86f8f0d5e\") " pod="openshift-marketplace/redhat-operators-h58nq" Nov 29 08:23:43 crc kubenswrapper[4795]: I1129 08:23:43.323406 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h58nq" Nov 29 08:23:43 crc kubenswrapper[4795]: I1129 08:23:43.600321 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s5rrp" Nov 29 08:23:43 crc kubenswrapper[4795]: I1129 08:23:43.600733 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s5rrp" Nov 29 08:23:43 crc kubenswrapper[4795]: I1129 08:23:43.661493 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s5rrp" Nov 29 08:23:43 crc kubenswrapper[4795]: I1129 08:23:43.873241 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h58nq"] Nov 29 08:23:43 crc kubenswrapper[4795]: I1129 08:23:43.945840 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s5rrp" Nov 29 08:23:44 crc kubenswrapper[4795]: I1129 08:23:44.858945 4795 generic.go:334] "Generic (PLEG): container finished" podID="07362978-f752-4a5c-9ad9-f7c86f8f0d5e" containerID="ab4aa9315cf03016b549aafb47a36fca434fe853221efbc03c3c2b44d596003e" exitCode=0 Nov 29 08:23:44 crc kubenswrapper[4795]: I1129 08:23:44.859761 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h58nq" event={"ID":"07362978-f752-4a5c-9ad9-f7c86f8f0d5e","Type":"ContainerDied","Data":"ab4aa9315cf03016b549aafb47a36fca434fe853221efbc03c3c2b44d596003e"} Nov 29 08:23:44 crc kubenswrapper[4795]: I1129 08:23:44.859817 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h58nq" event={"ID":"07362978-f752-4a5c-9ad9-f7c86f8f0d5e","Type":"ContainerStarted","Data":"779acc85afc22bed084a2a3ae3363f9eac8707cdefd347e957aa38644e1f305d"} Nov 29 08:23:44 crc kubenswrapper[4795]: I1129 08:23:44.863686 4795 provider.go:102] Refreshing 
cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 08:23:45 crc kubenswrapper[4795]: I1129 08:23:45.872279 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h58nq" event={"ID":"07362978-f752-4a5c-9ad9-f7c86f8f0d5e","Type":"ContainerStarted","Data":"e8a9dc6f561aa3753aecb97ee239e8f6c8b59ff68fea5dfc3ff0903a825937e9"} Nov 29 08:23:45 crc kubenswrapper[4795]: I1129 08:23:45.966765 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s5rrp"] Nov 29 08:23:45 crc kubenswrapper[4795]: I1129 08:23:45.967058 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s5rrp" podUID="64922736-ba13-4d85-81da-62dcc27771fd" containerName="registry-server" containerID="cri-o://6fb2302bc77e66a5dc00eb3aff8861b7772d1c791d4a1ab60b625f7fb64d98ff" gracePeriod=2 Nov 29 08:23:46 crc kubenswrapper[4795]: I1129 08:23:46.888739 4795 generic.go:334] "Generic (PLEG): container finished" podID="64922736-ba13-4d85-81da-62dcc27771fd" containerID="6fb2302bc77e66a5dc00eb3aff8861b7772d1c791d4a1ab60b625f7fb64d98ff" exitCode=0 Nov 29 08:23:46 crc kubenswrapper[4795]: I1129 08:23:46.889678 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5rrp" event={"ID":"64922736-ba13-4d85-81da-62dcc27771fd","Type":"ContainerDied","Data":"6fb2302bc77e66a5dc00eb3aff8861b7772d1c791d4a1ab60b625f7fb64d98ff"} Nov 29 08:23:46 crc kubenswrapper[4795]: I1129 08:23:46.997268 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s5rrp" Nov 29 08:23:47 crc kubenswrapper[4795]: I1129 08:23:47.096009 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64922736-ba13-4d85-81da-62dcc27771fd-catalog-content\") pod \"64922736-ba13-4d85-81da-62dcc27771fd\" (UID: \"64922736-ba13-4d85-81da-62dcc27771fd\") " Nov 29 08:23:47 crc kubenswrapper[4795]: I1129 08:23:47.096390 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-784kj\" (UniqueName: \"kubernetes.io/projected/64922736-ba13-4d85-81da-62dcc27771fd-kube-api-access-784kj\") pod \"64922736-ba13-4d85-81da-62dcc27771fd\" (UID: \"64922736-ba13-4d85-81da-62dcc27771fd\") " Nov 29 08:23:47 crc kubenswrapper[4795]: I1129 08:23:47.096585 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64922736-ba13-4d85-81da-62dcc27771fd-utilities\") pod \"64922736-ba13-4d85-81da-62dcc27771fd\" (UID: \"64922736-ba13-4d85-81da-62dcc27771fd\") " Nov 29 08:23:47 crc kubenswrapper[4795]: I1129 08:23:47.097161 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64922736-ba13-4d85-81da-62dcc27771fd-utilities" (OuterVolumeSpecName: "utilities") pod "64922736-ba13-4d85-81da-62dcc27771fd" (UID: "64922736-ba13-4d85-81da-62dcc27771fd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:23:47 crc kubenswrapper[4795]: I1129 08:23:47.097337 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64922736-ba13-4d85-81da-62dcc27771fd-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:23:47 crc kubenswrapper[4795]: I1129 08:23:47.101902 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64922736-ba13-4d85-81da-62dcc27771fd-kube-api-access-784kj" (OuterVolumeSpecName: "kube-api-access-784kj") pod "64922736-ba13-4d85-81da-62dcc27771fd" (UID: "64922736-ba13-4d85-81da-62dcc27771fd"). InnerVolumeSpecName "kube-api-access-784kj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:23:47 crc kubenswrapper[4795]: I1129 08:23:47.161942 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64922736-ba13-4d85-81da-62dcc27771fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64922736-ba13-4d85-81da-62dcc27771fd" (UID: "64922736-ba13-4d85-81da-62dcc27771fd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:23:47 crc kubenswrapper[4795]: I1129 08:23:47.199517 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64922736-ba13-4d85-81da-62dcc27771fd-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:23:47 crc kubenswrapper[4795]: I1129 08:23:47.199554 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-784kj\" (UniqueName: \"kubernetes.io/projected/64922736-ba13-4d85-81da-62dcc27771fd-kube-api-access-784kj\") on node \"crc\" DevicePath \"\"" Nov 29 08:23:47 crc kubenswrapper[4795]: I1129 08:23:47.901658 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5rrp" event={"ID":"64922736-ba13-4d85-81da-62dcc27771fd","Type":"ContainerDied","Data":"6b28a0f11c7406e1c6da28a14a02e77c09e139d0e667100e8574282854b6036e"} Nov 29 08:23:47 crc kubenswrapper[4795]: I1129 08:23:47.901724 4795 scope.go:117] "RemoveContainer" containerID="6fb2302bc77e66a5dc00eb3aff8861b7772d1c791d4a1ab60b625f7fb64d98ff" Nov 29 08:23:47 crc kubenswrapper[4795]: I1129 08:23:47.901776 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s5rrp" Nov 29 08:23:47 crc kubenswrapper[4795]: I1129 08:23:47.938009 4795 scope.go:117] "RemoveContainer" containerID="dadc9080b4724df44f20e1e6bf9f0df4d4fad4541305ffa5446369258966e871" Nov 29 08:23:47 crc kubenswrapper[4795]: I1129 08:23:47.939783 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s5rrp"] Nov 29 08:23:47 crc kubenswrapper[4795]: I1129 08:23:47.958846 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s5rrp"] Nov 29 08:23:47 crc kubenswrapper[4795]: I1129 08:23:47.974441 4795 scope.go:117] "RemoveContainer" containerID="23784a69386e91e071bee744635783d77cf49ae5617b0702dcc67e99bcd8d3a9" Nov 29 08:23:48 crc kubenswrapper[4795]: I1129 08:23:48.278827 4795 scope.go:117] "RemoveContainer" containerID="21e10122a79c2c2a3372f86102d58fee1c9be891bd6896b1affd1c2404af16fa" Nov 29 08:23:48 crc kubenswrapper[4795]: E1129 08:23:48.279260 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:23:48 crc kubenswrapper[4795]: I1129 08:23:48.317776 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64922736-ba13-4d85-81da-62dcc27771fd" path="/var/lib/kubelet/pods/64922736-ba13-4d85-81da-62dcc27771fd/volumes" Nov 29 08:23:49 crc kubenswrapper[4795]: I1129 08:23:49.923912 4795 generic.go:334] "Generic (PLEG): container finished" podID="07362978-f752-4a5c-9ad9-f7c86f8f0d5e" containerID="e8a9dc6f561aa3753aecb97ee239e8f6c8b59ff68fea5dfc3ff0903a825937e9" exitCode=0 Nov 29 08:23:49 crc 
kubenswrapper[4795]: I1129 08:23:49.923974 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h58nq" event={"ID":"07362978-f752-4a5c-9ad9-f7c86f8f0d5e","Type":"ContainerDied","Data":"e8a9dc6f561aa3753aecb97ee239e8f6c8b59ff68fea5dfc3ff0903a825937e9"} Nov 29 08:23:51 crc kubenswrapper[4795]: I1129 08:23:51.950270 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h58nq" event={"ID":"07362978-f752-4a5c-9ad9-f7c86f8f0d5e","Type":"ContainerStarted","Data":"35b238c724f44ccad6d6bb54ecfa23e4a35a9a86dbb31b5377fd5aa445f78cb2"} Nov 29 08:23:51 crc kubenswrapper[4795]: I1129 08:23:51.983229 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-h58nq" podStartSLOduration=4.219740184 podStartE2EDuration="9.983202412s" podCreationTimestamp="2025-11-29 08:23:42 +0000 UTC" firstStartedPulling="2025-11-29 08:23:44.86339893 +0000 UTC m=+2670.838974730" lastFinishedPulling="2025-11-29 08:23:50.626861168 +0000 UTC m=+2676.602436958" observedRunningTime="2025-11-29 08:23:51.978187459 +0000 UTC m=+2677.953763259" watchObservedRunningTime="2025-11-29 08:23:51.983202412 +0000 UTC m=+2677.958778202" Nov 29 08:23:53 crc kubenswrapper[4795]: I1129 08:23:53.325168 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h58nq" Nov 29 08:23:53 crc kubenswrapper[4795]: I1129 08:23:53.325502 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-h58nq" Nov 29 08:23:54 crc kubenswrapper[4795]: I1129 08:23:54.379329 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h58nq" podUID="07362978-f752-4a5c-9ad9-f7c86f8f0d5e" containerName="registry-server" probeResult="failure" output=< Nov 29 08:23:54 crc kubenswrapper[4795]: timeout: failed to connect service ":50051" within 1s Nov 
29 08:23:54 crc kubenswrapper[4795]: > Nov 29 08:24:02 crc kubenswrapper[4795]: I1129 08:24:02.275657 4795 scope.go:117] "RemoveContainer" containerID="21e10122a79c2c2a3372f86102d58fee1c9be891bd6896b1affd1c2404af16fa" Nov 29 08:24:02 crc kubenswrapper[4795]: E1129 08:24:02.276388 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:24:03 crc kubenswrapper[4795]: I1129 08:24:03.373057 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-h58nq" Nov 29 08:24:03 crc kubenswrapper[4795]: I1129 08:24:03.425918 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h58nq" Nov 29 08:24:04 crc kubenswrapper[4795]: I1129 08:24:04.435735 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h58nq"] Nov 29 08:24:05 crc kubenswrapper[4795]: I1129 08:24:05.102980 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-h58nq" podUID="07362978-f752-4a5c-9ad9-f7c86f8f0d5e" containerName="registry-server" containerID="cri-o://35b238c724f44ccad6d6bb54ecfa23e4a35a9a86dbb31b5377fd5aa445f78cb2" gracePeriod=2 Nov 29 08:24:05 crc kubenswrapper[4795]: I1129 08:24:05.759730 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h58nq" Nov 29 08:24:05 crc kubenswrapper[4795]: I1129 08:24:05.944337 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07362978-f752-4a5c-9ad9-f7c86f8f0d5e-catalog-content\") pod \"07362978-f752-4a5c-9ad9-f7c86f8f0d5e\" (UID: \"07362978-f752-4a5c-9ad9-f7c86f8f0d5e\") " Nov 29 08:24:05 crc kubenswrapper[4795]: I1129 08:24:05.951128 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kx4z\" (UniqueName: \"kubernetes.io/projected/07362978-f752-4a5c-9ad9-f7c86f8f0d5e-kube-api-access-7kx4z\") pod \"07362978-f752-4a5c-9ad9-f7c86f8f0d5e\" (UID: \"07362978-f752-4a5c-9ad9-f7c86f8f0d5e\") " Nov 29 08:24:05 crc kubenswrapper[4795]: I1129 08:24:05.951389 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07362978-f752-4a5c-9ad9-f7c86f8f0d5e-utilities\") pod \"07362978-f752-4a5c-9ad9-f7c86f8f0d5e\" (UID: \"07362978-f752-4a5c-9ad9-f7c86f8f0d5e\") " Nov 29 08:24:05 crc kubenswrapper[4795]: I1129 08:24:05.952126 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07362978-f752-4a5c-9ad9-f7c86f8f0d5e-utilities" (OuterVolumeSpecName: "utilities") pod "07362978-f752-4a5c-9ad9-f7c86f8f0d5e" (UID: "07362978-f752-4a5c-9ad9-f7c86f8f0d5e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:24:05 crc kubenswrapper[4795]: I1129 08:24:05.956490 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07362978-f752-4a5c-9ad9-f7c86f8f0d5e-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:24:05 crc kubenswrapper[4795]: I1129 08:24:05.960769 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07362978-f752-4a5c-9ad9-f7c86f8f0d5e-kube-api-access-7kx4z" (OuterVolumeSpecName: "kube-api-access-7kx4z") pod "07362978-f752-4a5c-9ad9-f7c86f8f0d5e" (UID: "07362978-f752-4a5c-9ad9-f7c86f8f0d5e"). InnerVolumeSpecName "kube-api-access-7kx4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:24:06 crc kubenswrapper[4795]: I1129 08:24:06.069023 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kx4z\" (UniqueName: \"kubernetes.io/projected/07362978-f752-4a5c-9ad9-f7c86f8f0d5e-kube-api-access-7kx4z\") on node \"crc\" DevicePath \"\"" Nov 29 08:24:06 crc kubenswrapper[4795]: I1129 08:24:06.087624 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07362978-f752-4a5c-9ad9-f7c86f8f0d5e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07362978-f752-4a5c-9ad9-f7c86f8f0d5e" (UID: "07362978-f752-4a5c-9ad9-f7c86f8f0d5e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:24:06 crc kubenswrapper[4795]: I1129 08:24:06.125565 4795 generic.go:334] "Generic (PLEG): container finished" podID="07362978-f752-4a5c-9ad9-f7c86f8f0d5e" containerID="35b238c724f44ccad6d6bb54ecfa23e4a35a9a86dbb31b5377fd5aa445f78cb2" exitCode=0 Nov 29 08:24:06 crc kubenswrapper[4795]: I1129 08:24:06.125637 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h58nq" event={"ID":"07362978-f752-4a5c-9ad9-f7c86f8f0d5e","Type":"ContainerDied","Data":"35b238c724f44ccad6d6bb54ecfa23e4a35a9a86dbb31b5377fd5aa445f78cb2"} Nov 29 08:24:06 crc kubenswrapper[4795]: I1129 08:24:06.125712 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h58nq" event={"ID":"07362978-f752-4a5c-9ad9-f7c86f8f0d5e","Type":"ContainerDied","Data":"779acc85afc22bed084a2a3ae3363f9eac8707cdefd347e957aa38644e1f305d"} Nov 29 08:24:06 crc kubenswrapper[4795]: I1129 08:24:06.125740 4795 scope.go:117] "RemoveContainer" containerID="35b238c724f44ccad6d6bb54ecfa23e4a35a9a86dbb31b5377fd5aa445f78cb2" Nov 29 08:24:06 crc kubenswrapper[4795]: I1129 08:24:06.125937 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h58nq" Nov 29 08:24:06 crc kubenswrapper[4795]: I1129 08:24:06.166402 4795 scope.go:117] "RemoveContainer" containerID="e8a9dc6f561aa3753aecb97ee239e8f6c8b59ff68fea5dfc3ff0903a825937e9" Nov 29 08:24:06 crc kubenswrapper[4795]: I1129 08:24:06.171096 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07362978-f752-4a5c-9ad9-f7c86f8f0d5e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:24:06 crc kubenswrapper[4795]: I1129 08:24:06.175951 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h58nq"] Nov 29 08:24:06 crc kubenswrapper[4795]: I1129 08:24:06.185153 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-h58nq"] Nov 29 08:24:06 crc kubenswrapper[4795]: I1129 08:24:06.194610 4795 scope.go:117] "RemoveContainer" containerID="ab4aa9315cf03016b549aafb47a36fca434fe853221efbc03c3c2b44d596003e" Nov 29 08:24:06 crc kubenswrapper[4795]: I1129 08:24:06.312427 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07362978-f752-4a5c-9ad9-f7c86f8f0d5e" path="/var/lib/kubelet/pods/07362978-f752-4a5c-9ad9-f7c86f8f0d5e/volumes" Nov 29 08:24:06 crc kubenswrapper[4795]: I1129 08:24:06.314910 4795 scope.go:117] "RemoveContainer" containerID="35b238c724f44ccad6d6bb54ecfa23e4a35a9a86dbb31b5377fd5aa445f78cb2" Nov 29 08:24:06 crc kubenswrapper[4795]: E1129 08:24:06.315350 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35b238c724f44ccad6d6bb54ecfa23e4a35a9a86dbb31b5377fd5aa445f78cb2\": container with ID starting with 35b238c724f44ccad6d6bb54ecfa23e4a35a9a86dbb31b5377fd5aa445f78cb2 not found: ID does not exist" containerID="35b238c724f44ccad6d6bb54ecfa23e4a35a9a86dbb31b5377fd5aa445f78cb2" Nov 29 08:24:06 crc kubenswrapper[4795]: I1129 
08:24:06.315381 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35b238c724f44ccad6d6bb54ecfa23e4a35a9a86dbb31b5377fd5aa445f78cb2"} err="failed to get container status \"35b238c724f44ccad6d6bb54ecfa23e4a35a9a86dbb31b5377fd5aa445f78cb2\": rpc error: code = NotFound desc = could not find container \"35b238c724f44ccad6d6bb54ecfa23e4a35a9a86dbb31b5377fd5aa445f78cb2\": container with ID starting with 35b238c724f44ccad6d6bb54ecfa23e4a35a9a86dbb31b5377fd5aa445f78cb2 not found: ID does not exist" Nov 29 08:24:06 crc kubenswrapper[4795]: I1129 08:24:06.315404 4795 scope.go:117] "RemoveContainer" containerID="e8a9dc6f561aa3753aecb97ee239e8f6c8b59ff68fea5dfc3ff0903a825937e9" Nov 29 08:24:06 crc kubenswrapper[4795]: E1129 08:24:06.317284 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8a9dc6f561aa3753aecb97ee239e8f6c8b59ff68fea5dfc3ff0903a825937e9\": container with ID starting with e8a9dc6f561aa3753aecb97ee239e8f6c8b59ff68fea5dfc3ff0903a825937e9 not found: ID does not exist" containerID="e8a9dc6f561aa3753aecb97ee239e8f6c8b59ff68fea5dfc3ff0903a825937e9" Nov 29 08:24:06 crc kubenswrapper[4795]: I1129 08:24:06.317401 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8a9dc6f561aa3753aecb97ee239e8f6c8b59ff68fea5dfc3ff0903a825937e9"} err="failed to get container status \"e8a9dc6f561aa3753aecb97ee239e8f6c8b59ff68fea5dfc3ff0903a825937e9\": rpc error: code = NotFound desc = could not find container \"e8a9dc6f561aa3753aecb97ee239e8f6c8b59ff68fea5dfc3ff0903a825937e9\": container with ID starting with e8a9dc6f561aa3753aecb97ee239e8f6c8b59ff68fea5dfc3ff0903a825937e9 not found: ID does not exist" Nov 29 08:24:06 crc kubenswrapper[4795]: I1129 08:24:06.317491 4795 scope.go:117] "RemoveContainer" containerID="ab4aa9315cf03016b549aafb47a36fca434fe853221efbc03c3c2b44d596003e" Nov 29 08:24:06 crc 
kubenswrapper[4795]: E1129 08:24:06.318027 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab4aa9315cf03016b549aafb47a36fca434fe853221efbc03c3c2b44d596003e\": container with ID starting with ab4aa9315cf03016b549aafb47a36fca434fe853221efbc03c3c2b44d596003e not found: ID does not exist" containerID="ab4aa9315cf03016b549aafb47a36fca434fe853221efbc03c3c2b44d596003e" Nov 29 08:24:06 crc kubenswrapper[4795]: I1129 08:24:06.318065 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab4aa9315cf03016b549aafb47a36fca434fe853221efbc03c3c2b44d596003e"} err="failed to get container status \"ab4aa9315cf03016b549aafb47a36fca434fe853221efbc03c3c2b44d596003e\": rpc error: code = NotFound desc = could not find container \"ab4aa9315cf03016b549aafb47a36fca434fe853221efbc03c3c2b44d596003e\": container with ID starting with ab4aa9315cf03016b549aafb47a36fca434fe853221efbc03c3c2b44d596003e not found: ID does not exist" Nov 29 08:24:13 crc kubenswrapper[4795]: I1129 08:24:13.282298 4795 scope.go:117] "RemoveContainer" containerID="21e10122a79c2c2a3372f86102d58fee1c9be891bd6896b1affd1c2404af16fa" Nov 29 08:24:13 crc kubenswrapper[4795]: E1129 08:24:13.283203 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:24:25 crc kubenswrapper[4795]: I1129 08:24:25.276451 4795 scope.go:117] "RemoveContainer" containerID="21e10122a79c2c2a3372f86102d58fee1c9be891bd6896b1affd1c2404af16fa" Nov 29 08:24:25 crc kubenswrapper[4795]: E1129 08:24:25.277623 4795 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:24:37 crc kubenswrapper[4795]: I1129 08:24:37.275708 4795 scope.go:117] "RemoveContainer" containerID="21e10122a79c2c2a3372f86102d58fee1c9be891bd6896b1affd1c2404af16fa" Nov 29 08:24:37 crc kubenswrapper[4795]: E1129 08:24:37.276488 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:24:41 crc kubenswrapper[4795]: I1129 08:24:41.504091 4795 generic.go:334] "Generic (PLEG): container finished" podID="1a1dfc06-678d-4418-a57f-7a9a2ba2c441" containerID="0c9b2fa227c05efd9bb30f1d45581c3db23e074e2eab80061da4161faef88a6d" exitCode=0 Nov 29 08:24:41 crc kubenswrapper[4795]: I1129 08:24:41.504139 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrqcm" event={"ID":"1a1dfc06-678d-4418-a57f-7a9a2ba2c441","Type":"ContainerDied","Data":"0c9b2fa227c05efd9bb30f1d45581c3db23e074e2eab80061da4161faef88a6d"} Nov 29 08:24:42 crc kubenswrapper[4795]: I1129 08:24:42.980621 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrqcm" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.151309 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1a1dfc06-678d-4418-a57f-7a9a2ba2c441-ssh-key\") pod \"1a1dfc06-678d-4418-a57f-7a9a2ba2c441\" (UID: \"1a1dfc06-678d-4418-a57f-7a9a2ba2c441\") " Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.151364 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a1dfc06-678d-4418-a57f-7a9a2ba2c441-inventory\") pod \"1a1dfc06-678d-4418-a57f-7a9a2ba2c441\" (UID: \"1a1dfc06-678d-4418-a57f-7a9a2ba2c441\") " Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.151447 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a1dfc06-678d-4418-a57f-7a9a2ba2c441-ovn-combined-ca-bundle\") pod \"1a1dfc06-678d-4418-a57f-7a9a2ba2c441\" (UID: \"1a1dfc06-678d-4418-a57f-7a9a2ba2c441\") " Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.151613 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1a1dfc06-678d-4418-a57f-7a9a2ba2c441-ovncontroller-config-0\") pod \"1a1dfc06-678d-4418-a57f-7a9a2ba2c441\" (UID: \"1a1dfc06-678d-4418-a57f-7a9a2ba2c441\") " Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.151706 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdl8z\" (UniqueName: \"kubernetes.io/projected/1a1dfc06-678d-4418-a57f-7a9a2ba2c441-kube-api-access-sdl8z\") pod \"1a1dfc06-678d-4418-a57f-7a9a2ba2c441\" (UID: \"1a1dfc06-678d-4418-a57f-7a9a2ba2c441\") " Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.160300 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/secret/1a1dfc06-678d-4418-a57f-7a9a2ba2c441-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "1a1dfc06-678d-4418-a57f-7a9a2ba2c441" (UID: "1a1dfc06-678d-4418-a57f-7a9a2ba2c441"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.163376 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a1dfc06-678d-4418-a57f-7a9a2ba2c441-kube-api-access-sdl8z" (OuterVolumeSpecName: "kube-api-access-sdl8z") pod "1a1dfc06-678d-4418-a57f-7a9a2ba2c441" (UID: "1a1dfc06-678d-4418-a57f-7a9a2ba2c441"). InnerVolumeSpecName "kube-api-access-sdl8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.201825 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a1dfc06-678d-4418-a57f-7a9a2ba2c441-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1a1dfc06-678d-4418-a57f-7a9a2ba2c441" (UID: "1a1dfc06-678d-4418-a57f-7a9a2ba2c441"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.211909 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a1dfc06-678d-4418-a57f-7a9a2ba2c441-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "1a1dfc06-678d-4418-a57f-7a9a2ba2c441" (UID: "1a1dfc06-678d-4418-a57f-7a9a2ba2c441"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.230006 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a1dfc06-678d-4418-a57f-7a9a2ba2c441-inventory" (OuterVolumeSpecName: "inventory") pod "1a1dfc06-678d-4418-a57f-7a9a2ba2c441" (UID: "1a1dfc06-678d-4418-a57f-7a9a2ba2c441"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.254834 4795 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1a1dfc06-678d-4418-a57f-7a9a2ba2c441-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.254865 4795 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a1dfc06-678d-4418-a57f-7a9a2ba2c441-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.254875 4795 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a1dfc06-678d-4418-a57f-7a9a2ba2c441-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.254886 4795 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1a1dfc06-678d-4418-a57f-7a9a2ba2c441-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.254895 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdl8z\" (UniqueName: \"kubernetes.io/projected/1a1dfc06-678d-4418-a57f-7a9a2ba2c441-kube-api-access-sdl8z\") on node \"crc\" DevicePath \"\"" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.546508 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrqcm" event={"ID":"1a1dfc06-678d-4418-a57f-7a9a2ba2c441","Type":"ContainerDied","Data":"ccc119041e9d009a807ce0bee7bc20ab60581fd12e1d7dcaa86d9db136437ca3"} Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.546562 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccc119041e9d009a807ce0bee7bc20ab60581fd12e1d7dcaa86d9db136437ca3" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.546675 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrqcm" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.638693 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97"] Nov 29 08:24:43 crc kubenswrapper[4795]: E1129 08:24:43.639230 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64922736-ba13-4d85-81da-62dcc27771fd" containerName="registry-server" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.639247 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="64922736-ba13-4d85-81da-62dcc27771fd" containerName="registry-server" Nov 29 08:24:43 crc kubenswrapper[4795]: E1129 08:24:43.639272 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07362978-f752-4a5c-9ad9-f7c86f8f0d5e" containerName="registry-server" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.639280 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="07362978-f752-4a5c-9ad9-f7c86f8f0d5e" containerName="registry-server" Nov 29 08:24:43 crc kubenswrapper[4795]: E1129 08:24:43.639309 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a1dfc06-678d-4418-a57f-7a9a2ba2c441" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.639316 4795 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1a1dfc06-678d-4418-a57f-7a9a2ba2c441" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 29 08:24:43 crc kubenswrapper[4795]: E1129 08:24:43.639328 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64922736-ba13-4d85-81da-62dcc27771fd" containerName="extract-utilities" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.639334 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="64922736-ba13-4d85-81da-62dcc27771fd" containerName="extract-utilities" Nov 29 08:24:43 crc kubenswrapper[4795]: E1129 08:24:43.639351 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64922736-ba13-4d85-81da-62dcc27771fd" containerName="extract-content" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.639356 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="64922736-ba13-4d85-81da-62dcc27771fd" containerName="extract-content" Nov 29 08:24:43 crc kubenswrapper[4795]: E1129 08:24:43.639364 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07362978-f752-4a5c-9ad9-f7c86f8f0d5e" containerName="extract-content" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.639370 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="07362978-f752-4a5c-9ad9-f7c86f8f0d5e" containerName="extract-content" Nov 29 08:24:43 crc kubenswrapper[4795]: E1129 08:24:43.639391 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07362978-f752-4a5c-9ad9-f7c86f8f0d5e" containerName="extract-utilities" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.639397 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="07362978-f752-4a5c-9ad9-f7c86f8f0d5e" containerName="extract-utilities" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.639615 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="64922736-ba13-4d85-81da-62dcc27771fd" containerName="registry-server" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.639634 4795 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="07362978-f752-4a5c-9ad9-f7c86f8f0d5e" containerName="registry-server" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.639643 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a1dfc06-678d-4418-a57f-7a9a2ba2c441" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.640565 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.648583 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.648770 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.648861 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.649061 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.649193 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nfhsg" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.649585 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.657365 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97"] Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.690364 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b55cbe5f-b90e-47f3-a446-82d1577ac07d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97\" (UID: \"b55cbe5f-b90e-47f3-a446-82d1577ac07d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.690476 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b55cbe5f-b90e-47f3-a446-82d1577ac07d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97\" (UID: \"b55cbe5f-b90e-47f3-a446-82d1577ac07d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.690509 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b55cbe5f-b90e-47f3-a446-82d1577ac07d-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97\" (UID: \"b55cbe5f-b90e-47f3-a446-82d1577ac07d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.690538 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbqbb\" (UniqueName: \"kubernetes.io/projected/b55cbe5f-b90e-47f3-a446-82d1577ac07d-kube-api-access-gbqbb\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97\" (UID: \"b55cbe5f-b90e-47f3-a446-82d1577ac07d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.690652 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/b55cbe5f-b90e-47f3-a446-82d1577ac07d-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97\" (UID: \"b55cbe5f-b90e-47f3-a446-82d1577ac07d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.690886 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b55cbe5f-b90e-47f3-a446-82d1577ac07d-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97\" (UID: \"b55cbe5f-b90e-47f3-a446-82d1577ac07d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.793402 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b55cbe5f-b90e-47f3-a446-82d1577ac07d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97\" (UID: \"b55cbe5f-b90e-47f3-a446-82d1577ac07d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.793761 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b55cbe5f-b90e-47f3-a446-82d1577ac07d-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97\" (UID: \"b55cbe5f-b90e-47f3-a446-82d1577ac07d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.793799 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbqbb\" (UniqueName: \"kubernetes.io/projected/b55cbe5f-b90e-47f3-a446-82d1577ac07d-kube-api-access-gbqbb\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97\" (UID: \"b55cbe5f-b90e-47f3-a446-82d1577ac07d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.793893 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b55cbe5f-b90e-47f3-a446-82d1577ac07d-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97\" (UID: \"b55cbe5f-b90e-47f3-a446-82d1577ac07d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.793966 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b55cbe5f-b90e-47f3-a446-82d1577ac07d-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97\" (UID: \"b55cbe5f-b90e-47f3-a446-82d1577ac07d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.794002 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b55cbe5f-b90e-47f3-a446-82d1577ac07d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97\" (UID: \"b55cbe5f-b90e-47f3-a446-82d1577ac07d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.799777 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b55cbe5f-b90e-47f3-a446-82d1577ac07d-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97\" (UID: \"b55cbe5f-b90e-47f3-a446-82d1577ac07d\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.799794 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b55cbe5f-b90e-47f3-a446-82d1577ac07d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97\" (UID: \"b55cbe5f-b90e-47f3-a446-82d1577ac07d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.800271 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b55cbe5f-b90e-47f3-a446-82d1577ac07d-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97\" (UID: \"b55cbe5f-b90e-47f3-a446-82d1577ac07d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.801113 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b55cbe5f-b90e-47f3-a446-82d1577ac07d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97\" (UID: \"b55cbe5f-b90e-47f3-a446-82d1577ac07d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.801423 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b55cbe5f-b90e-47f3-a446-82d1577ac07d-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97\" (UID: \"b55cbe5f-b90e-47f3-a446-82d1577ac07d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 
08:24:43.809926 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbqbb\" (UniqueName: \"kubernetes.io/projected/b55cbe5f-b90e-47f3-a446-82d1577ac07d-kube-api-access-gbqbb\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97\" (UID: \"b55cbe5f-b90e-47f3-a446-82d1577ac07d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97" Nov 29 08:24:43 crc kubenswrapper[4795]: I1129 08:24:43.972939 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97" Nov 29 08:24:44 crc kubenswrapper[4795]: I1129 08:24:44.544414 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97"] Nov 29 08:24:44 crc kubenswrapper[4795]: W1129 08:24:44.556907 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb55cbe5f_b90e_47f3_a446_82d1577ac07d.slice/crio-02b394cb322d70555dbf90b33f531bd935af3b142067449d505c0aba13efe589 WatchSource:0}: Error finding container 02b394cb322d70555dbf90b33f531bd935af3b142067449d505c0aba13efe589: Status 404 returned error can't find the container with id 02b394cb322d70555dbf90b33f531bd935af3b142067449d505c0aba13efe589 Nov 29 08:24:45 crc kubenswrapper[4795]: I1129 08:24:45.577426 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97" event={"ID":"b55cbe5f-b90e-47f3-a446-82d1577ac07d","Type":"ContainerStarted","Data":"5922b66d401fc31b581e2e39a7370eebee55e1b8eb8c43ec4ab916c8240449cb"} Nov 29 08:24:45 crc kubenswrapper[4795]: I1129 08:24:45.577661 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97" 
event={"ID":"b55cbe5f-b90e-47f3-a446-82d1577ac07d","Type":"ContainerStarted","Data":"02b394cb322d70555dbf90b33f531bd935af3b142067449d505c0aba13efe589"} Nov 29 08:24:45 crc kubenswrapper[4795]: I1129 08:24:45.607883 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97" podStartSLOduration=2.075198997 podStartE2EDuration="2.607863939s" podCreationTimestamp="2025-11-29 08:24:43 +0000 UTC" firstStartedPulling="2025-11-29 08:24:44.559780354 +0000 UTC m=+2730.535356184" lastFinishedPulling="2025-11-29 08:24:45.092445336 +0000 UTC m=+2731.068021126" observedRunningTime="2025-11-29 08:24:45.606365766 +0000 UTC m=+2731.581941576" watchObservedRunningTime="2025-11-29 08:24:45.607863939 +0000 UTC m=+2731.583439739" Nov 29 08:24:48 crc kubenswrapper[4795]: I1129 08:24:48.276541 4795 scope.go:117] "RemoveContainer" containerID="21e10122a79c2c2a3372f86102d58fee1c9be891bd6896b1affd1c2404af16fa" Nov 29 08:24:48 crc kubenswrapper[4795]: E1129 08:24:48.277148 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:25:03 crc kubenswrapper[4795]: I1129 08:25:03.276077 4795 scope.go:117] "RemoveContainer" containerID="21e10122a79c2c2a3372f86102d58fee1c9be891bd6896b1affd1c2404af16fa" Nov 29 08:25:03 crc kubenswrapper[4795]: E1129 08:25:03.276903 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:25:17 crc kubenswrapper[4795]: I1129 08:25:17.276686 4795 scope.go:117] "RemoveContainer" containerID="21e10122a79c2c2a3372f86102d58fee1c9be891bd6896b1affd1c2404af16fa" Nov 29 08:25:17 crc kubenswrapper[4795]: E1129 08:25:17.277573 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:25:31 crc kubenswrapper[4795]: I1129 08:25:31.277137 4795 scope.go:117] "RemoveContainer" containerID="21e10122a79c2c2a3372f86102d58fee1c9be891bd6896b1affd1c2404af16fa" Nov 29 08:25:31 crc kubenswrapper[4795]: E1129 08:25:31.278088 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:25:36 crc kubenswrapper[4795]: I1129 08:25:36.383199 4795 generic.go:334] "Generic (PLEG): container finished" podID="b55cbe5f-b90e-47f3-a446-82d1577ac07d" containerID="5922b66d401fc31b581e2e39a7370eebee55e1b8eb8c43ec4ab916c8240449cb" exitCode=0 Nov 29 08:25:36 crc kubenswrapper[4795]: I1129 08:25:36.383232 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97" event={"ID":"b55cbe5f-b90e-47f3-a446-82d1577ac07d","Type":"ContainerDied","Data":"5922b66d401fc31b581e2e39a7370eebee55e1b8eb8c43ec4ab916c8240449cb"} Nov 29 08:25:37 crc kubenswrapper[4795]: I1129 08:25:37.954832 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97" Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.094379 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b55cbe5f-b90e-47f3-a446-82d1577ac07d-nova-metadata-neutron-config-0\") pod \"b55cbe5f-b90e-47f3-a446-82d1577ac07d\" (UID: \"b55cbe5f-b90e-47f3-a446-82d1577ac07d\") " Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.095306 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b55cbe5f-b90e-47f3-a446-82d1577ac07d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"b55cbe5f-b90e-47f3-a446-82d1577ac07d\" (UID: \"b55cbe5f-b90e-47f3-a446-82d1577ac07d\") " Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.095521 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b55cbe5f-b90e-47f3-a446-82d1577ac07d-neutron-metadata-combined-ca-bundle\") pod \"b55cbe5f-b90e-47f3-a446-82d1577ac07d\" (UID: \"b55cbe5f-b90e-47f3-a446-82d1577ac07d\") " Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.095625 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbqbb\" (UniqueName: \"kubernetes.io/projected/b55cbe5f-b90e-47f3-a446-82d1577ac07d-kube-api-access-gbqbb\") pod \"b55cbe5f-b90e-47f3-a446-82d1577ac07d\" (UID: \"b55cbe5f-b90e-47f3-a446-82d1577ac07d\") " 
Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.095666 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b55cbe5f-b90e-47f3-a446-82d1577ac07d-ssh-key\") pod \"b55cbe5f-b90e-47f3-a446-82d1577ac07d\" (UID: \"b55cbe5f-b90e-47f3-a446-82d1577ac07d\") " Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.095830 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b55cbe5f-b90e-47f3-a446-82d1577ac07d-inventory\") pod \"b55cbe5f-b90e-47f3-a446-82d1577ac07d\" (UID: \"b55cbe5f-b90e-47f3-a446-82d1577ac07d\") " Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.108129 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b55cbe5f-b90e-47f3-a446-82d1577ac07d-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "b55cbe5f-b90e-47f3-a446-82d1577ac07d" (UID: "b55cbe5f-b90e-47f3-a446-82d1577ac07d"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.137060 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b55cbe5f-b90e-47f3-a446-82d1577ac07d-kube-api-access-gbqbb" (OuterVolumeSpecName: "kube-api-access-gbqbb") pod "b55cbe5f-b90e-47f3-a446-82d1577ac07d" (UID: "b55cbe5f-b90e-47f3-a446-82d1577ac07d"). InnerVolumeSpecName "kube-api-access-gbqbb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.169358 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b55cbe5f-b90e-47f3-a446-82d1577ac07d-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "b55cbe5f-b90e-47f3-a446-82d1577ac07d" (UID: "b55cbe5f-b90e-47f3-a446-82d1577ac07d"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.170271 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b55cbe5f-b90e-47f3-a446-82d1577ac07d-inventory" (OuterVolumeSpecName: "inventory") pod "b55cbe5f-b90e-47f3-a446-82d1577ac07d" (UID: "b55cbe5f-b90e-47f3-a446-82d1577ac07d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.179103 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b55cbe5f-b90e-47f3-a446-82d1577ac07d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b55cbe5f-b90e-47f3-a446-82d1577ac07d" (UID: "b55cbe5f-b90e-47f3-a446-82d1577ac07d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.190379 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b55cbe5f-b90e-47f3-a446-82d1577ac07d-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "b55cbe5f-b90e-47f3-a446-82d1577ac07d" (UID: "b55cbe5f-b90e-47f3-a446-82d1577ac07d"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.198236 4795 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b55cbe5f-b90e-47f3-a446-82d1577ac07d-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.198272 4795 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b55cbe5f-b90e-47f3-a446-82d1577ac07d-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.198286 4795 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b55cbe5f-b90e-47f3-a446-82d1577ac07d-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.198297 4795 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b55cbe5f-b90e-47f3-a446-82d1577ac07d-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.198311 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbqbb\" (UniqueName: \"kubernetes.io/projected/b55cbe5f-b90e-47f3-a446-82d1577ac07d-kube-api-access-gbqbb\") on node \"crc\" DevicePath \"\"" Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.198321 4795 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b55cbe5f-b90e-47f3-a446-82d1577ac07d-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.419833 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97" 
event={"ID":"b55cbe5f-b90e-47f3-a446-82d1577ac07d","Type":"ContainerDied","Data":"02b394cb322d70555dbf90b33f531bd935af3b142067449d505c0aba13efe589"} Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.419894 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02b394cb322d70555dbf90b33f531bd935af3b142067449d505c0aba13efe589" Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.419920 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97" Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.521130 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h"] Nov 29 08:25:38 crc kubenswrapper[4795]: E1129 08:25:38.521937 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b55cbe5f-b90e-47f3-a446-82d1577ac07d" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.521963 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="b55cbe5f-b90e-47f3-a446-82d1577ac07d" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.522308 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="b55cbe5f-b90e-47f3-a446-82d1577ac07d" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.523444 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h" Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.527842 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.527952 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nfhsg" Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.527957 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.528359 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.529863 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.543099 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h"] Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.711814 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8223751f-5de9-4d5c-a9b2-200cf9c164ee-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h\" (UID: \"8223751f-5de9-4d5c-a9b2-200cf9c164ee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h" Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.712522 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8223751f-5de9-4d5c-a9b2-200cf9c164ee-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h\" (UID: \"8223751f-5de9-4d5c-a9b2-200cf9c164ee\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h" Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.712731 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68wzd\" (UniqueName: \"kubernetes.io/projected/8223751f-5de9-4d5c-a9b2-200cf9c164ee-kube-api-access-68wzd\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h\" (UID: \"8223751f-5de9-4d5c-a9b2-200cf9c164ee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h" Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.713080 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8223751f-5de9-4d5c-a9b2-200cf9c164ee-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h\" (UID: \"8223751f-5de9-4d5c-a9b2-200cf9c164ee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h" Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.713310 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8223751f-5de9-4d5c-a9b2-200cf9c164ee-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h\" (UID: \"8223751f-5de9-4d5c-a9b2-200cf9c164ee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h" Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.816034 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8223751f-5de9-4d5c-a9b2-200cf9c164ee-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h\" (UID: \"8223751f-5de9-4d5c-a9b2-200cf9c164ee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h" Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.816101 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8223751f-5de9-4d5c-a9b2-200cf9c164ee-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h\" (UID: \"8223751f-5de9-4d5c-a9b2-200cf9c164ee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h" Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.816156 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68wzd\" (UniqueName: \"kubernetes.io/projected/8223751f-5de9-4d5c-a9b2-200cf9c164ee-kube-api-access-68wzd\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h\" (UID: \"8223751f-5de9-4d5c-a9b2-200cf9c164ee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h" Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.816252 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8223751f-5de9-4d5c-a9b2-200cf9c164ee-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h\" (UID: \"8223751f-5de9-4d5c-a9b2-200cf9c164ee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h" Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.816315 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8223751f-5de9-4d5c-a9b2-200cf9c164ee-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h\" (UID: \"8223751f-5de9-4d5c-a9b2-200cf9c164ee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h" Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.822826 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8223751f-5de9-4d5c-a9b2-200cf9c164ee-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h\" (UID: \"8223751f-5de9-4d5c-a9b2-200cf9c164ee\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h" Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.823138 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8223751f-5de9-4d5c-a9b2-200cf9c164ee-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h\" (UID: \"8223751f-5de9-4d5c-a9b2-200cf9c164ee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h" Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.825367 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8223751f-5de9-4d5c-a9b2-200cf9c164ee-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h\" (UID: \"8223751f-5de9-4d5c-a9b2-200cf9c164ee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h" Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.844789 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8223751f-5de9-4d5c-a9b2-200cf9c164ee-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h\" (UID: \"8223751f-5de9-4d5c-a9b2-200cf9c164ee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h" Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.846206 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68wzd\" (UniqueName: \"kubernetes.io/projected/8223751f-5de9-4d5c-a9b2-200cf9c164ee-kube-api-access-68wzd\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h\" (UID: \"8223751f-5de9-4d5c-a9b2-200cf9c164ee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h" Nov 29 08:25:38 crc kubenswrapper[4795]: I1129 08:25:38.848514 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h" Nov 29 08:25:39 crc kubenswrapper[4795]: I1129 08:25:39.516673 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h"] Nov 29 08:25:40 crc kubenswrapper[4795]: I1129 08:25:40.445478 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h" event={"ID":"8223751f-5de9-4d5c-a9b2-200cf9c164ee","Type":"ContainerStarted","Data":"8753179a95893cb1fa9196aa5d75303cdaaa31993af080f5c3e2af16ba26c809"} Nov 29 08:25:41 crc kubenswrapper[4795]: I1129 08:25:41.463382 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h" event={"ID":"8223751f-5de9-4d5c-a9b2-200cf9c164ee","Type":"ContainerStarted","Data":"56a5445b14cf80e6f5584924ee58502b0f90b3b02d7e97167816bd453030a21c"} Nov 29 08:25:41 crc kubenswrapper[4795]: I1129 08:25:41.490193 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h" podStartSLOduration=2.9093087029999998 podStartE2EDuration="3.490167121s" podCreationTimestamp="2025-11-29 08:25:38 +0000 UTC" firstStartedPulling="2025-11-29 08:25:39.534002892 +0000 UTC m=+2785.509578682" lastFinishedPulling="2025-11-29 08:25:40.11486132 +0000 UTC m=+2786.090437100" observedRunningTime="2025-11-29 08:25:41.481789673 +0000 UTC m=+2787.457365463" watchObservedRunningTime="2025-11-29 08:25:41.490167121 +0000 UTC m=+2787.465742921" Nov 29 08:25:44 crc kubenswrapper[4795]: I1129 08:25:44.291158 4795 scope.go:117] "RemoveContainer" containerID="21e10122a79c2c2a3372f86102d58fee1c9be891bd6896b1affd1c2404af16fa" Nov 29 08:25:45 crc kubenswrapper[4795]: I1129 08:25:45.515160 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" 
event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerStarted","Data":"58ab07ae66662b6ecfd323741dc69073d735bd5a011805572a265e99d0a11e8b"} Nov 29 08:26:00 crc kubenswrapper[4795]: I1129 08:26:00.990798 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fm5k7"] Nov 29 08:26:00 crc kubenswrapper[4795]: I1129 08:26:00.994871 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fm5k7" Nov 29 08:26:01 crc kubenswrapper[4795]: I1129 08:26:01.014114 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fm5k7"] Nov 29 08:26:01 crc kubenswrapper[4795]: I1129 08:26:01.184065 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f62lb\" (UniqueName: \"kubernetes.io/projected/5b020866-8336-4e5d-a1e2-8998ec9a8744-kube-api-access-f62lb\") pod \"certified-operators-fm5k7\" (UID: \"5b020866-8336-4e5d-a1e2-8998ec9a8744\") " pod="openshift-marketplace/certified-operators-fm5k7" Nov 29 08:26:01 crc kubenswrapper[4795]: I1129 08:26:01.184328 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b020866-8336-4e5d-a1e2-8998ec9a8744-utilities\") pod \"certified-operators-fm5k7\" (UID: \"5b020866-8336-4e5d-a1e2-8998ec9a8744\") " pod="openshift-marketplace/certified-operators-fm5k7" Nov 29 08:26:01 crc kubenswrapper[4795]: I1129 08:26:01.184668 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b020866-8336-4e5d-a1e2-8998ec9a8744-catalog-content\") pod \"certified-operators-fm5k7\" (UID: \"5b020866-8336-4e5d-a1e2-8998ec9a8744\") " pod="openshift-marketplace/certified-operators-fm5k7" Nov 29 08:26:01 crc kubenswrapper[4795]: I1129 08:26:01.286742 
4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b020866-8336-4e5d-a1e2-8998ec9a8744-catalog-content\") pod \"certified-operators-fm5k7\" (UID: \"5b020866-8336-4e5d-a1e2-8998ec9a8744\") " pod="openshift-marketplace/certified-operators-fm5k7" Nov 29 08:26:01 crc kubenswrapper[4795]: I1129 08:26:01.286859 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f62lb\" (UniqueName: \"kubernetes.io/projected/5b020866-8336-4e5d-a1e2-8998ec9a8744-kube-api-access-f62lb\") pod \"certified-operators-fm5k7\" (UID: \"5b020866-8336-4e5d-a1e2-8998ec9a8744\") " pod="openshift-marketplace/certified-operators-fm5k7" Nov 29 08:26:01 crc kubenswrapper[4795]: I1129 08:26:01.286959 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b020866-8336-4e5d-a1e2-8998ec9a8744-utilities\") pod \"certified-operators-fm5k7\" (UID: \"5b020866-8336-4e5d-a1e2-8998ec9a8744\") " pod="openshift-marketplace/certified-operators-fm5k7" Nov 29 08:26:01 crc kubenswrapper[4795]: I1129 08:26:01.287217 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b020866-8336-4e5d-a1e2-8998ec9a8744-catalog-content\") pod \"certified-operators-fm5k7\" (UID: \"5b020866-8336-4e5d-a1e2-8998ec9a8744\") " pod="openshift-marketplace/certified-operators-fm5k7" Nov 29 08:26:01 crc kubenswrapper[4795]: I1129 08:26:01.287442 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b020866-8336-4e5d-a1e2-8998ec9a8744-utilities\") pod \"certified-operators-fm5k7\" (UID: \"5b020866-8336-4e5d-a1e2-8998ec9a8744\") " pod="openshift-marketplace/certified-operators-fm5k7" Nov 29 08:26:01 crc kubenswrapper[4795]: I1129 08:26:01.309867 4795 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-f62lb\" (UniqueName: \"kubernetes.io/projected/5b020866-8336-4e5d-a1e2-8998ec9a8744-kube-api-access-f62lb\") pod \"certified-operators-fm5k7\" (UID: \"5b020866-8336-4e5d-a1e2-8998ec9a8744\") " pod="openshift-marketplace/certified-operators-fm5k7" Nov 29 08:26:01 crc kubenswrapper[4795]: I1129 08:26:01.372626 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fm5k7" Nov 29 08:26:01 crc kubenswrapper[4795]: I1129 08:26:01.987101 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fm5k7"] Nov 29 08:26:02 crc kubenswrapper[4795]: I1129 08:26:02.706797 4795 generic.go:334] "Generic (PLEG): container finished" podID="5b020866-8336-4e5d-a1e2-8998ec9a8744" containerID="53acb9b3fa5961225ee1a4ac6ad458ab151dcbd21c834afa57c4f6c4824609c5" exitCode=0 Nov 29 08:26:02 crc kubenswrapper[4795]: I1129 08:26:02.706856 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fm5k7" event={"ID":"5b020866-8336-4e5d-a1e2-8998ec9a8744","Type":"ContainerDied","Data":"53acb9b3fa5961225ee1a4ac6ad458ab151dcbd21c834afa57c4f6c4824609c5"} Nov 29 08:26:02 crc kubenswrapper[4795]: I1129 08:26:02.707389 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fm5k7" event={"ID":"5b020866-8336-4e5d-a1e2-8998ec9a8744","Type":"ContainerStarted","Data":"32b00681f670d46dc5fa03657078aeef24d2822e52dc9c055814bb2cad614f09"} Nov 29 08:26:04 crc kubenswrapper[4795]: I1129 08:26:04.736555 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fm5k7" event={"ID":"5b020866-8336-4e5d-a1e2-8998ec9a8744","Type":"ContainerStarted","Data":"1f13d663b44428545a193b92f2a4c267201f086f858d9c5b27b7aef9fe75a2a3"} Nov 29 08:26:05 crc kubenswrapper[4795]: I1129 08:26:05.750254 4795 generic.go:334] "Generic (PLEG): 
container finished" podID="5b020866-8336-4e5d-a1e2-8998ec9a8744" containerID="1f13d663b44428545a193b92f2a4c267201f086f858d9c5b27b7aef9fe75a2a3" exitCode=0 Nov 29 08:26:05 crc kubenswrapper[4795]: I1129 08:26:05.750357 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fm5k7" event={"ID":"5b020866-8336-4e5d-a1e2-8998ec9a8744","Type":"ContainerDied","Data":"1f13d663b44428545a193b92f2a4c267201f086f858d9c5b27b7aef9fe75a2a3"} Nov 29 08:26:06 crc kubenswrapper[4795]: I1129 08:26:06.764323 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fm5k7" event={"ID":"5b020866-8336-4e5d-a1e2-8998ec9a8744","Type":"ContainerStarted","Data":"4e4173d1abc4857d65269ef13df0d5cd69a64880e9d345c7268085477d145b06"} Nov 29 08:26:06 crc kubenswrapper[4795]: I1129 08:26:06.792477 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fm5k7" podStartSLOduration=3.354683606 podStartE2EDuration="6.792455949s" podCreationTimestamp="2025-11-29 08:26:00 +0000 UTC" firstStartedPulling="2025-11-29 08:26:02.708575539 +0000 UTC m=+2808.684151329" lastFinishedPulling="2025-11-29 08:26:06.146347842 +0000 UTC m=+2812.121923672" observedRunningTime="2025-11-29 08:26:06.791291696 +0000 UTC m=+2812.766867536" watchObservedRunningTime="2025-11-29 08:26:06.792455949 +0000 UTC m=+2812.768031739" Nov 29 08:26:11 crc kubenswrapper[4795]: I1129 08:26:11.373419 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fm5k7" Nov 29 08:26:11 crc kubenswrapper[4795]: I1129 08:26:11.373915 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fm5k7" Nov 29 08:26:11 crc kubenswrapper[4795]: I1129 08:26:11.435807 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-fm5k7" Nov 29 08:26:11 crc kubenswrapper[4795]: I1129 08:26:11.868394 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fm5k7" Nov 29 08:26:11 crc kubenswrapper[4795]: I1129 08:26:11.920904 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fm5k7"] Nov 29 08:26:13 crc kubenswrapper[4795]: I1129 08:26:13.863941 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fm5k7" podUID="5b020866-8336-4e5d-a1e2-8998ec9a8744" containerName="registry-server" containerID="cri-o://4e4173d1abc4857d65269ef13df0d5cd69a64880e9d345c7268085477d145b06" gracePeriod=2 Nov 29 08:26:14 crc kubenswrapper[4795]: I1129 08:26:14.416815 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fm5k7" Nov 29 08:26:14 crc kubenswrapper[4795]: I1129 08:26:14.532987 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f62lb\" (UniqueName: \"kubernetes.io/projected/5b020866-8336-4e5d-a1e2-8998ec9a8744-kube-api-access-f62lb\") pod \"5b020866-8336-4e5d-a1e2-8998ec9a8744\" (UID: \"5b020866-8336-4e5d-a1e2-8998ec9a8744\") " Nov 29 08:26:14 crc kubenswrapper[4795]: I1129 08:26:14.533151 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b020866-8336-4e5d-a1e2-8998ec9a8744-utilities\") pod \"5b020866-8336-4e5d-a1e2-8998ec9a8744\" (UID: \"5b020866-8336-4e5d-a1e2-8998ec9a8744\") " Nov 29 08:26:14 crc kubenswrapper[4795]: I1129 08:26:14.533316 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b020866-8336-4e5d-a1e2-8998ec9a8744-catalog-content\") pod 
\"5b020866-8336-4e5d-a1e2-8998ec9a8744\" (UID: \"5b020866-8336-4e5d-a1e2-8998ec9a8744\") " Nov 29 08:26:14 crc kubenswrapper[4795]: I1129 08:26:14.533839 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b020866-8336-4e5d-a1e2-8998ec9a8744-utilities" (OuterVolumeSpecName: "utilities") pod "5b020866-8336-4e5d-a1e2-8998ec9a8744" (UID: "5b020866-8336-4e5d-a1e2-8998ec9a8744"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:26:14 crc kubenswrapper[4795]: I1129 08:26:14.534194 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b020866-8336-4e5d-a1e2-8998ec9a8744-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:26:14 crc kubenswrapper[4795]: I1129 08:26:14.549816 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b020866-8336-4e5d-a1e2-8998ec9a8744-kube-api-access-f62lb" (OuterVolumeSpecName: "kube-api-access-f62lb") pod "5b020866-8336-4e5d-a1e2-8998ec9a8744" (UID: "5b020866-8336-4e5d-a1e2-8998ec9a8744"). InnerVolumeSpecName "kube-api-access-f62lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:26:14 crc kubenswrapper[4795]: I1129 08:26:14.585537 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b020866-8336-4e5d-a1e2-8998ec9a8744-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b020866-8336-4e5d-a1e2-8998ec9a8744" (UID: "5b020866-8336-4e5d-a1e2-8998ec9a8744"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:26:14 crc kubenswrapper[4795]: I1129 08:26:14.636382 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b020866-8336-4e5d-a1e2-8998ec9a8744-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:26:14 crc kubenswrapper[4795]: I1129 08:26:14.636420 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f62lb\" (UniqueName: \"kubernetes.io/projected/5b020866-8336-4e5d-a1e2-8998ec9a8744-kube-api-access-f62lb\") on node \"crc\" DevicePath \"\"" Nov 29 08:26:14 crc kubenswrapper[4795]: I1129 08:26:14.876945 4795 generic.go:334] "Generic (PLEG): container finished" podID="5b020866-8336-4e5d-a1e2-8998ec9a8744" containerID="4e4173d1abc4857d65269ef13df0d5cd69a64880e9d345c7268085477d145b06" exitCode=0 Nov 29 08:26:14 crc kubenswrapper[4795]: I1129 08:26:14.876986 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fm5k7" event={"ID":"5b020866-8336-4e5d-a1e2-8998ec9a8744","Type":"ContainerDied","Data":"4e4173d1abc4857d65269ef13df0d5cd69a64880e9d345c7268085477d145b06"} Nov 29 08:26:14 crc kubenswrapper[4795]: I1129 08:26:14.877021 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fm5k7" event={"ID":"5b020866-8336-4e5d-a1e2-8998ec9a8744","Type":"ContainerDied","Data":"32b00681f670d46dc5fa03657078aeef24d2822e52dc9c055814bb2cad614f09"} Nov 29 08:26:14 crc kubenswrapper[4795]: I1129 08:26:14.877038 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fm5k7" Nov 29 08:26:14 crc kubenswrapper[4795]: I1129 08:26:14.877054 4795 scope.go:117] "RemoveContainer" containerID="4e4173d1abc4857d65269ef13df0d5cd69a64880e9d345c7268085477d145b06" Nov 29 08:26:14 crc kubenswrapper[4795]: I1129 08:26:14.910001 4795 scope.go:117] "RemoveContainer" containerID="1f13d663b44428545a193b92f2a4c267201f086f858d9c5b27b7aef9fe75a2a3" Nov 29 08:26:14 crc kubenswrapper[4795]: I1129 08:26:14.930252 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fm5k7"] Nov 29 08:26:14 crc kubenswrapper[4795]: I1129 08:26:14.933435 4795 scope.go:117] "RemoveContainer" containerID="53acb9b3fa5961225ee1a4ac6ad458ab151dcbd21c834afa57c4f6c4824609c5" Nov 29 08:26:14 crc kubenswrapper[4795]: I1129 08:26:14.945256 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fm5k7"] Nov 29 08:26:14 crc kubenswrapper[4795]: I1129 08:26:14.994381 4795 scope.go:117] "RemoveContainer" containerID="4e4173d1abc4857d65269ef13df0d5cd69a64880e9d345c7268085477d145b06" Nov 29 08:26:14 crc kubenswrapper[4795]: E1129 08:26:14.994958 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e4173d1abc4857d65269ef13df0d5cd69a64880e9d345c7268085477d145b06\": container with ID starting with 4e4173d1abc4857d65269ef13df0d5cd69a64880e9d345c7268085477d145b06 not found: ID does not exist" containerID="4e4173d1abc4857d65269ef13df0d5cd69a64880e9d345c7268085477d145b06" Nov 29 08:26:14 crc kubenswrapper[4795]: I1129 08:26:14.994990 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e4173d1abc4857d65269ef13df0d5cd69a64880e9d345c7268085477d145b06"} err="failed to get container status \"4e4173d1abc4857d65269ef13df0d5cd69a64880e9d345c7268085477d145b06\": rpc error: code = NotFound desc = could not find 
container \"4e4173d1abc4857d65269ef13df0d5cd69a64880e9d345c7268085477d145b06\": container with ID starting with 4e4173d1abc4857d65269ef13df0d5cd69a64880e9d345c7268085477d145b06 not found: ID does not exist" Nov 29 08:26:14 crc kubenswrapper[4795]: I1129 08:26:14.995011 4795 scope.go:117] "RemoveContainer" containerID="1f13d663b44428545a193b92f2a4c267201f086f858d9c5b27b7aef9fe75a2a3" Nov 29 08:26:14 crc kubenswrapper[4795]: E1129 08:26:14.995460 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f13d663b44428545a193b92f2a4c267201f086f858d9c5b27b7aef9fe75a2a3\": container with ID starting with 1f13d663b44428545a193b92f2a4c267201f086f858d9c5b27b7aef9fe75a2a3 not found: ID does not exist" containerID="1f13d663b44428545a193b92f2a4c267201f086f858d9c5b27b7aef9fe75a2a3" Nov 29 08:26:14 crc kubenswrapper[4795]: I1129 08:26:14.995517 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f13d663b44428545a193b92f2a4c267201f086f858d9c5b27b7aef9fe75a2a3"} err="failed to get container status \"1f13d663b44428545a193b92f2a4c267201f086f858d9c5b27b7aef9fe75a2a3\": rpc error: code = NotFound desc = could not find container \"1f13d663b44428545a193b92f2a4c267201f086f858d9c5b27b7aef9fe75a2a3\": container with ID starting with 1f13d663b44428545a193b92f2a4c267201f086f858d9c5b27b7aef9fe75a2a3 not found: ID does not exist" Nov 29 08:26:14 crc kubenswrapper[4795]: I1129 08:26:14.995553 4795 scope.go:117] "RemoveContainer" containerID="53acb9b3fa5961225ee1a4ac6ad458ab151dcbd21c834afa57c4f6c4824609c5" Nov 29 08:26:14 crc kubenswrapper[4795]: E1129 08:26:14.995993 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53acb9b3fa5961225ee1a4ac6ad458ab151dcbd21c834afa57c4f6c4824609c5\": container with ID starting with 53acb9b3fa5961225ee1a4ac6ad458ab151dcbd21c834afa57c4f6c4824609c5 not found: ID does 
not exist" containerID="53acb9b3fa5961225ee1a4ac6ad458ab151dcbd21c834afa57c4f6c4824609c5" Nov 29 08:26:14 crc kubenswrapper[4795]: I1129 08:26:14.996020 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53acb9b3fa5961225ee1a4ac6ad458ab151dcbd21c834afa57c4f6c4824609c5"} err="failed to get container status \"53acb9b3fa5961225ee1a4ac6ad458ab151dcbd21c834afa57c4f6c4824609c5\": rpc error: code = NotFound desc = could not find container \"53acb9b3fa5961225ee1a4ac6ad458ab151dcbd21c834afa57c4f6c4824609c5\": container with ID starting with 53acb9b3fa5961225ee1a4ac6ad458ab151dcbd21c834afa57c4f6c4824609c5 not found: ID does not exist" Nov 29 08:26:16 crc kubenswrapper[4795]: I1129 08:26:16.291428 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b020866-8336-4e5d-a1e2-8998ec9a8744" path="/var/lib/kubelet/pods/5b020866-8336-4e5d-a1e2-8998ec9a8744/volumes" Nov 29 08:26:25 crc kubenswrapper[4795]: I1129 08:26:25.668242 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bzqx5"] Nov 29 08:26:25 crc kubenswrapper[4795]: E1129 08:26:25.669538 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b020866-8336-4e5d-a1e2-8998ec9a8744" containerName="extract-content" Nov 29 08:26:25 crc kubenswrapper[4795]: I1129 08:26:25.669558 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b020866-8336-4e5d-a1e2-8998ec9a8744" containerName="extract-content" Nov 29 08:26:25 crc kubenswrapper[4795]: E1129 08:26:25.669578 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b020866-8336-4e5d-a1e2-8998ec9a8744" containerName="registry-server" Nov 29 08:26:25 crc kubenswrapper[4795]: I1129 08:26:25.669586 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b020866-8336-4e5d-a1e2-8998ec9a8744" containerName="registry-server" Nov 29 08:26:25 crc kubenswrapper[4795]: E1129 08:26:25.669654 4795 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="5b020866-8336-4e5d-a1e2-8998ec9a8744" containerName="extract-utilities" Nov 29 08:26:25 crc kubenswrapper[4795]: I1129 08:26:25.669662 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b020866-8336-4e5d-a1e2-8998ec9a8744" containerName="extract-utilities" Nov 29 08:26:25 crc kubenswrapper[4795]: I1129 08:26:25.669973 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b020866-8336-4e5d-a1e2-8998ec9a8744" containerName="registry-server" Nov 29 08:26:25 crc kubenswrapper[4795]: I1129 08:26:25.671981 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bzqx5" Nov 29 08:26:25 crc kubenswrapper[4795]: I1129 08:26:25.684246 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bzqx5"] Nov 29 08:26:25 crc kubenswrapper[4795]: I1129 08:26:25.795015 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de62342d-7e20-4789-8fce-ec059c2767d1-utilities\") pod \"redhat-marketplace-bzqx5\" (UID: \"de62342d-7e20-4789-8fce-ec059c2767d1\") " pod="openshift-marketplace/redhat-marketplace-bzqx5" Nov 29 08:26:25 crc kubenswrapper[4795]: I1129 08:26:25.795402 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de62342d-7e20-4789-8fce-ec059c2767d1-catalog-content\") pod \"redhat-marketplace-bzqx5\" (UID: \"de62342d-7e20-4789-8fce-ec059c2767d1\") " pod="openshift-marketplace/redhat-marketplace-bzqx5" Nov 29 08:26:25 crc kubenswrapper[4795]: I1129 08:26:25.795579 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slkfq\" (UniqueName: \"kubernetes.io/projected/de62342d-7e20-4789-8fce-ec059c2767d1-kube-api-access-slkfq\") pod 
\"redhat-marketplace-bzqx5\" (UID: \"de62342d-7e20-4789-8fce-ec059c2767d1\") " pod="openshift-marketplace/redhat-marketplace-bzqx5" Nov 29 08:26:25 crc kubenswrapper[4795]: I1129 08:26:25.897885 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slkfq\" (UniqueName: \"kubernetes.io/projected/de62342d-7e20-4789-8fce-ec059c2767d1-kube-api-access-slkfq\") pod \"redhat-marketplace-bzqx5\" (UID: \"de62342d-7e20-4789-8fce-ec059c2767d1\") " pod="openshift-marketplace/redhat-marketplace-bzqx5" Nov 29 08:26:25 crc kubenswrapper[4795]: I1129 08:26:25.898023 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de62342d-7e20-4789-8fce-ec059c2767d1-utilities\") pod \"redhat-marketplace-bzqx5\" (UID: \"de62342d-7e20-4789-8fce-ec059c2767d1\") " pod="openshift-marketplace/redhat-marketplace-bzqx5" Nov 29 08:26:25 crc kubenswrapper[4795]: I1129 08:26:25.898094 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de62342d-7e20-4789-8fce-ec059c2767d1-catalog-content\") pod \"redhat-marketplace-bzqx5\" (UID: \"de62342d-7e20-4789-8fce-ec059c2767d1\") " pod="openshift-marketplace/redhat-marketplace-bzqx5" Nov 29 08:26:25 crc kubenswrapper[4795]: I1129 08:26:25.898547 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de62342d-7e20-4789-8fce-ec059c2767d1-catalog-content\") pod \"redhat-marketplace-bzqx5\" (UID: \"de62342d-7e20-4789-8fce-ec059c2767d1\") " pod="openshift-marketplace/redhat-marketplace-bzqx5" Nov 29 08:26:25 crc kubenswrapper[4795]: I1129 08:26:25.898831 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de62342d-7e20-4789-8fce-ec059c2767d1-utilities\") pod \"redhat-marketplace-bzqx5\" (UID: 
\"de62342d-7e20-4789-8fce-ec059c2767d1\") " pod="openshift-marketplace/redhat-marketplace-bzqx5" Nov 29 08:26:25 crc kubenswrapper[4795]: I1129 08:26:25.915623 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slkfq\" (UniqueName: \"kubernetes.io/projected/de62342d-7e20-4789-8fce-ec059c2767d1-kube-api-access-slkfq\") pod \"redhat-marketplace-bzqx5\" (UID: \"de62342d-7e20-4789-8fce-ec059c2767d1\") " pod="openshift-marketplace/redhat-marketplace-bzqx5" Nov 29 08:26:25 crc kubenswrapper[4795]: I1129 08:26:25.993179 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bzqx5" Nov 29 08:26:26 crc kubenswrapper[4795]: I1129 08:26:26.552945 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bzqx5"] Nov 29 08:26:27 crc kubenswrapper[4795]: I1129 08:26:27.036965 4795 generic.go:334] "Generic (PLEG): container finished" podID="de62342d-7e20-4789-8fce-ec059c2767d1" containerID="10162a355a16dacf28e7badccdcaff3d8580e5a829df5cda2407b8e77359e392" exitCode=0 Nov 29 08:26:27 crc kubenswrapper[4795]: I1129 08:26:27.037063 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bzqx5" event={"ID":"de62342d-7e20-4789-8fce-ec059c2767d1","Type":"ContainerDied","Data":"10162a355a16dacf28e7badccdcaff3d8580e5a829df5cda2407b8e77359e392"} Nov 29 08:26:27 crc kubenswrapper[4795]: I1129 08:26:27.037262 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bzqx5" event={"ID":"de62342d-7e20-4789-8fce-ec059c2767d1","Type":"ContainerStarted","Data":"c02b5dee0c852807f9684f7b3413a9a4440bec9982332d1936d322184c728a3f"} Nov 29 08:26:28 crc kubenswrapper[4795]: I1129 08:26:28.052936 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bzqx5" 
event={"ID":"de62342d-7e20-4789-8fce-ec059c2767d1","Type":"ContainerStarted","Data":"0f6b2e4d9db8193338ad6e8b2daa1967417ee233584951996f7af8ce48a4b730"} Nov 29 08:26:29 crc kubenswrapper[4795]: I1129 08:26:29.064235 4795 generic.go:334] "Generic (PLEG): container finished" podID="de62342d-7e20-4789-8fce-ec059c2767d1" containerID="0f6b2e4d9db8193338ad6e8b2daa1967417ee233584951996f7af8ce48a4b730" exitCode=0 Nov 29 08:26:29 crc kubenswrapper[4795]: I1129 08:26:29.064336 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bzqx5" event={"ID":"de62342d-7e20-4789-8fce-ec059c2767d1","Type":"ContainerDied","Data":"0f6b2e4d9db8193338ad6e8b2daa1967417ee233584951996f7af8ce48a4b730"} Nov 29 08:26:30 crc kubenswrapper[4795]: I1129 08:26:30.076971 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bzqx5" event={"ID":"de62342d-7e20-4789-8fce-ec059c2767d1","Type":"ContainerStarted","Data":"d2283ac83a2af32dfa3398164693ef2549c9e09e6fea567310e1336087127064"} Nov 29 08:26:30 crc kubenswrapper[4795]: I1129 08:26:30.096194 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bzqx5" podStartSLOduration=2.584940469 podStartE2EDuration="5.096168684s" podCreationTimestamp="2025-11-29 08:26:25 +0000 UTC" firstStartedPulling="2025-11-29 08:26:27.03930745 +0000 UTC m=+2833.014883240" lastFinishedPulling="2025-11-29 08:26:29.550535665 +0000 UTC m=+2835.526111455" observedRunningTime="2025-11-29 08:26:30.092338326 +0000 UTC m=+2836.067914116" watchObservedRunningTime="2025-11-29 08:26:30.096168684 +0000 UTC m=+2836.071744494" Nov 29 08:26:35 crc kubenswrapper[4795]: I1129 08:26:35.995085 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bzqx5" Nov 29 08:26:35 crc kubenswrapper[4795]: I1129 08:26:35.995508 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-marketplace-bzqx5" Nov 29 08:26:36 crc kubenswrapper[4795]: I1129 08:26:36.068360 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bzqx5" Nov 29 08:26:36 crc kubenswrapper[4795]: I1129 08:26:36.188069 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bzqx5" Nov 29 08:26:38 crc kubenswrapper[4795]: I1129 08:26:38.955011 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bzqx5"] Nov 29 08:26:38 crc kubenswrapper[4795]: I1129 08:26:38.955931 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bzqx5" podUID="de62342d-7e20-4789-8fce-ec059c2767d1" containerName="registry-server" containerID="cri-o://d2283ac83a2af32dfa3398164693ef2549c9e09e6fea567310e1336087127064" gracePeriod=2 Nov 29 08:26:39 crc kubenswrapper[4795]: I1129 08:26:39.174544 4795 generic.go:334] "Generic (PLEG): container finished" podID="de62342d-7e20-4789-8fce-ec059c2767d1" containerID="d2283ac83a2af32dfa3398164693ef2549c9e09e6fea567310e1336087127064" exitCode=0 Nov 29 08:26:39 crc kubenswrapper[4795]: I1129 08:26:39.174579 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bzqx5" event={"ID":"de62342d-7e20-4789-8fce-ec059c2767d1","Type":"ContainerDied","Data":"d2283ac83a2af32dfa3398164693ef2549c9e09e6fea567310e1336087127064"} Nov 29 08:26:39 crc kubenswrapper[4795]: I1129 08:26:39.505862 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bzqx5" Nov 29 08:26:39 crc kubenswrapper[4795]: I1129 08:26:39.652280 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de62342d-7e20-4789-8fce-ec059c2767d1-utilities\") pod \"de62342d-7e20-4789-8fce-ec059c2767d1\" (UID: \"de62342d-7e20-4789-8fce-ec059c2767d1\") " Nov 29 08:26:39 crc kubenswrapper[4795]: I1129 08:26:39.652439 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de62342d-7e20-4789-8fce-ec059c2767d1-catalog-content\") pod \"de62342d-7e20-4789-8fce-ec059c2767d1\" (UID: \"de62342d-7e20-4789-8fce-ec059c2767d1\") " Nov 29 08:26:39 crc kubenswrapper[4795]: I1129 08:26:39.652470 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slkfq\" (UniqueName: \"kubernetes.io/projected/de62342d-7e20-4789-8fce-ec059c2767d1-kube-api-access-slkfq\") pod \"de62342d-7e20-4789-8fce-ec059c2767d1\" (UID: \"de62342d-7e20-4789-8fce-ec059c2767d1\") " Nov 29 08:26:39 crc kubenswrapper[4795]: I1129 08:26:39.653522 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de62342d-7e20-4789-8fce-ec059c2767d1-utilities" (OuterVolumeSpecName: "utilities") pod "de62342d-7e20-4789-8fce-ec059c2767d1" (UID: "de62342d-7e20-4789-8fce-ec059c2767d1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:26:39 crc kubenswrapper[4795]: I1129 08:26:39.653908 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de62342d-7e20-4789-8fce-ec059c2767d1-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:26:39 crc kubenswrapper[4795]: I1129 08:26:39.676888 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de62342d-7e20-4789-8fce-ec059c2767d1-kube-api-access-slkfq" (OuterVolumeSpecName: "kube-api-access-slkfq") pod "de62342d-7e20-4789-8fce-ec059c2767d1" (UID: "de62342d-7e20-4789-8fce-ec059c2767d1"). InnerVolumeSpecName "kube-api-access-slkfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:26:39 crc kubenswrapper[4795]: I1129 08:26:39.681216 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de62342d-7e20-4789-8fce-ec059c2767d1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de62342d-7e20-4789-8fce-ec059c2767d1" (UID: "de62342d-7e20-4789-8fce-ec059c2767d1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:26:39 crc kubenswrapper[4795]: I1129 08:26:39.755848 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de62342d-7e20-4789-8fce-ec059c2767d1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:26:39 crc kubenswrapper[4795]: I1129 08:26:39.755883 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slkfq\" (UniqueName: \"kubernetes.io/projected/de62342d-7e20-4789-8fce-ec059c2767d1-kube-api-access-slkfq\") on node \"crc\" DevicePath \"\"" Nov 29 08:26:40 crc kubenswrapper[4795]: I1129 08:26:40.187015 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bzqx5" event={"ID":"de62342d-7e20-4789-8fce-ec059c2767d1","Type":"ContainerDied","Data":"c02b5dee0c852807f9684f7b3413a9a4440bec9982332d1936d322184c728a3f"} Nov 29 08:26:40 crc kubenswrapper[4795]: I1129 08:26:40.187428 4795 scope.go:117] "RemoveContainer" containerID="d2283ac83a2af32dfa3398164693ef2549c9e09e6fea567310e1336087127064" Nov 29 08:26:40 crc kubenswrapper[4795]: I1129 08:26:40.187429 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bzqx5" Nov 29 08:26:40 crc kubenswrapper[4795]: I1129 08:26:40.211558 4795 scope.go:117] "RemoveContainer" containerID="0f6b2e4d9db8193338ad6e8b2daa1967417ee233584951996f7af8ce48a4b730" Nov 29 08:26:40 crc kubenswrapper[4795]: I1129 08:26:40.239004 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bzqx5"] Nov 29 08:26:40 crc kubenswrapper[4795]: I1129 08:26:40.255506 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bzqx5"] Nov 29 08:26:40 crc kubenswrapper[4795]: I1129 08:26:40.294357 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de62342d-7e20-4789-8fce-ec059c2767d1" path="/var/lib/kubelet/pods/de62342d-7e20-4789-8fce-ec059c2767d1/volumes" Nov 29 08:26:40 crc kubenswrapper[4795]: I1129 08:26:40.302031 4795 scope.go:117] "RemoveContainer" containerID="10162a355a16dacf28e7badccdcaff3d8580e5a829df5cda2407b8e77359e392" Nov 29 08:28:11 crc kubenswrapper[4795]: I1129 08:28:11.941660 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:28:11 crc kubenswrapper[4795]: I1129 08:28:11.942308 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:28:41 crc kubenswrapper[4795]: I1129 08:28:41.941977 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:28:41 crc kubenswrapper[4795]: I1129 08:28:41.943080 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:29:11 crc kubenswrapper[4795]: I1129 08:29:11.941217 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:29:11 crc kubenswrapper[4795]: I1129 08:29:11.941959 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:29:11 crc kubenswrapper[4795]: I1129 08:29:11.942059 4795 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" Nov 29 08:29:11 crc kubenswrapper[4795]: I1129 08:29:11.943497 4795 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"58ab07ae66662b6ecfd323741dc69073d735bd5a011805572a265e99d0a11e8b"} pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 08:29:11 crc kubenswrapper[4795]: I1129 08:29:11.943630 4795 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" containerID="cri-o://58ab07ae66662b6ecfd323741dc69073d735bd5a011805572a265e99d0a11e8b" gracePeriod=600 Nov 29 08:29:12 crc kubenswrapper[4795]: I1129 08:29:12.202048 4795 generic.go:334] "Generic (PLEG): container finished" podID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerID="58ab07ae66662b6ecfd323741dc69073d735bd5a011805572a265e99d0a11e8b" exitCode=0 Nov 29 08:29:12 crc kubenswrapper[4795]: I1129 08:29:12.202277 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerDied","Data":"58ab07ae66662b6ecfd323741dc69073d735bd5a011805572a265e99d0a11e8b"} Nov 29 08:29:12 crc kubenswrapper[4795]: I1129 08:29:12.202522 4795 scope.go:117] "RemoveContainer" containerID="21e10122a79c2c2a3372f86102d58fee1c9be891bd6896b1affd1c2404af16fa" Nov 29 08:29:13 crc kubenswrapper[4795]: I1129 08:29:13.214053 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerStarted","Data":"08469485735dcf4c196646b777c5b1c73e97106d4c854403de03ac1f9b64edb4"} Nov 29 08:30:00 crc kubenswrapper[4795]: I1129 08:30:00.157701 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406750-rpcjn"] Nov 29 08:30:00 crc kubenswrapper[4795]: E1129 08:30:00.158938 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de62342d-7e20-4789-8fce-ec059c2767d1" containerName="extract-utilities" Nov 29 08:30:00 crc kubenswrapper[4795]: I1129 08:30:00.158957 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="de62342d-7e20-4789-8fce-ec059c2767d1" 
containerName="extract-utilities" Nov 29 08:30:00 crc kubenswrapper[4795]: E1129 08:30:00.159005 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de62342d-7e20-4789-8fce-ec059c2767d1" containerName="extract-content" Nov 29 08:30:00 crc kubenswrapper[4795]: I1129 08:30:00.159014 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="de62342d-7e20-4789-8fce-ec059c2767d1" containerName="extract-content" Nov 29 08:30:00 crc kubenswrapper[4795]: E1129 08:30:00.159049 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de62342d-7e20-4789-8fce-ec059c2767d1" containerName="registry-server" Nov 29 08:30:00 crc kubenswrapper[4795]: I1129 08:30:00.159057 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="de62342d-7e20-4789-8fce-ec059c2767d1" containerName="registry-server" Nov 29 08:30:00 crc kubenswrapper[4795]: I1129 08:30:00.159327 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="de62342d-7e20-4789-8fce-ec059c2767d1" containerName="registry-server" Nov 29 08:30:00 crc kubenswrapper[4795]: I1129 08:30:00.162768 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406750-rpcjn"
Nov 29 08:30:00 crc kubenswrapper[4795]: I1129 08:30:00.164655 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 29 08:30:00 crc kubenswrapper[4795]: I1129 08:30:00.165015 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 29 08:30:00 crc kubenswrapper[4795]: I1129 08:30:00.171188 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406750-rpcjn"]
Nov 29 08:30:00 crc kubenswrapper[4795]: I1129 08:30:00.257684 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc8556ef-f832-4c85-b401-b0732fd13b58-config-volume\") pod \"collect-profiles-29406750-rpcjn\" (UID: \"dc8556ef-f832-4c85-b401-b0732fd13b58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406750-rpcjn"
Nov 29 08:30:00 crc kubenswrapper[4795]: I1129 08:30:00.258321 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc8556ef-f832-4c85-b401-b0732fd13b58-secret-volume\") pod \"collect-profiles-29406750-rpcjn\" (UID: \"dc8556ef-f832-4c85-b401-b0732fd13b58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406750-rpcjn"
Nov 29 08:30:00 crc kubenswrapper[4795]: I1129 08:30:00.258443 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td4x6\" (UniqueName: \"kubernetes.io/projected/dc8556ef-f832-4c85-b401-b0732fd13b58-kube-api-access-td4x6\") pod \"collect-profiles-29406750-rpcjn\" (UID: \"dc8556ef-f832-4c85-b401-b0732fd13b58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406750-rpcjn"
Nov 29 08:30:00 crc kubenswrapper[4795]: I1129 08:30:00.361399 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc8556ef-f832-4c85-b401-b0732fd13b58-config-volume\") pod \"collect-profiles-29406750-rpcjn\" (UID: \"dc8556ef-f832-4c85-b401-b0732fd13b58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406750-rpcjn"
Nov 29 08:30:00 crc kubenswrapper[4795]: I1129 08:30:00.361563 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc8556ef-f832-4c85-b401-b0732fd13b58-secret-volume\") pod \"collect-profiles-29406750-rpcjn\" (UID: \"dc8556ef-f832-4c85-b401-b0732fd13b58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406750-rpcjn"
Nov 29 08:30:00 crc kubenswrapper[4795]: I1129 08:30:00.361646 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td4x6\" (UniqueName: \"kubernetes.io/projected/dc8556ef-f832-4c85-b401-b0732fd13b58-kube-api-access-td4x6\") pod \"collect-profiles-29406750-rpcjn\" (UID: \"dc8556ef-f832-4c85-b401-b0732fd13b58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406750-rpcjn"
Nov 29 08:30:00 crc kubenswrapper[4795]: I1129 08:30:00.362292 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc8556ef-f832-4c85-b401-b0732fd13b58-config-volume\") pod \"collect-profiles-29406750-rpcjn\" (UID: \"dc8556ef-f832-4c85-b401-b0732fd13b58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406750-rpcjn"
Nov 29 08:30:00 crc kubenswrapper[4795]: I1129 08:30:00.368056 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc8556ef-f832-4c85-b401-b0732fd13b58-secret-volume\") pod \"collect-profiles-29406750-rpcjn\" (UID: \"dc8556ef-f832-4c85-b401-b0732fd13b58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406750-rpcjn"
Nov 29 08:30:00 crc kubenswrapper[4795]: I1129 08:30:00.378761 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td4x6\" (UniqueName: \"kubernetes.io/projected/dc8556ef-f832-4c85-b401-b0732fd13b58-kube-api-access-td4x6\") pod \"collect-profiles-29406750-rpcjn\" (UID: \"dc8556ef-f832-4c85-b401-b0732fd13b58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406750-rpcjn"
Nov 29 08:30:00 crc kubenswrapper[4795]: I1129 08:30:00.488458 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406750-rpcjn"
Nov 29 08:30:01 crc kubenswrapper[4795]: I1129 08:30:01.011452 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406750-rpcjn"]
Nov 29 08:30:01 crc kubenswrapper[4795]: I1129 08:30:01.756798 4795 generic.go:334] "Generic (PLEG): container finished" podID="dc8556ef-f832-4c85-b401-b0732fd13b58" containerID="7ad30826ff744f33020528cf832ceaab0bf6578f0a0bbdc700d7f6983dac08b6" exitCode=0
Nov 29 08:30:01 crc kubenswrapper[4795]: I1129 08:30:01.756845 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406750-rpcjn" event={"ID":"dc8556ef-f832-4c85-b401-b0732fd13b58","Type":"ContainerDied","Data":"7ad30826ff744f33020528cf832ceaab0bf6578f0a0bbdc700d7f6983dac08b6"}
Nov 29 08:30:01 crc kubenswrapper[4795]: I1129 08:30:01.757110 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406750-rpcjn" event={"ID":"dc8556ef-f832-4c85-b401-b0732fd13b58","Type":"ContainerStarted","Data":"8524e2fb4d60f34bdc14eace264f23a961730fa3177ef88f8c0c48cd1e4129fd"}
Nov 29 08:30:03 crc kubenswrapper[4795]: I1129 08:30:03.243206 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406750-rpcjn"
Nov 29 08:30:03 crc kubenswrapper[4795]: I1129 08:30:03.444360 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc8556ef-f832-4c85-b401-b0732fd13b58-secret-volume\") pod \"dc8556ef-f832-4c85-b401-b0732fd13b58\" (UID: \"dc8556ef-f832-4c85-b401-b0732fd13b58\") "
Nov 29 08:30:03 crc kubenswrapper[4795]: I1129 08:30:03.444485 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc8556ef-f832-4c85-b401-b0732fd13b58-config-volume\") pod \"dc8556ef-f832-4c85-b401-b0732fd13b58\" (UID: \"dc8556ef-f832-4c85-b401-b0732fd13b58\") "
Nov 29 08:30:03 crc kubenswrapper[4795]: I1129 08:30:03.444566 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td4x6\" (UniqueName: \"kubernetes.io/projected/dc8556ef-f832-4c85-b401-b0732fd13b58-kube-api-access-td4x6\") pod \"dc8556ef-f832-4c85-b401-b0732fd13b58\" (UID: \"dc8556ef-f832-4c85-b401-b0732fd13b58\") "
Nov 29 08:30:03 crc kubenswrapper[4795]: I1129 08:30:03.445379 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc8556ef-f832-4c85-b401-b0732fd13b58-config-volume" (OuterVolumeSpecName: "config-volume") pod "dc8556ef-f832-4c85-b401-b0732fd13b58" (UID: "dc8556ef-f832-4c85-b401-b0732fd13b58"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 08:30:03 crc kubenswrapper[4795]: I1129 08:30:03.467534 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc8556ef-f832-4c85-b401-b0732fd13b58-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "dc8556ef-f832-4c85-b401-b0732fd13b58" (UID: "dc8556ef-f832-4c85-b401-b0732fd13b58"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 08:30:03 crc kubenswrapper[4795]: I1129 08:30:03.468320 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc8556ef-f832-4c85-b401-b0732fd13b58-kube-api-access-td4x6" (OuterVolumeSpecName: "kube-api-access-td4x6") pod "dc8556ef-f832-4c85-b401-b0732fd13b58" (UID: "dc8556ef-f832-4c85-b401-b0732fd13b58"). InnerVolumeSpecName "kube-api-access-td4x6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 08:30:03 crc kubenswrapper[4795]: I1129 08:30:03.550088 4795 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc8556ef-f832-4c85-b401-b0732fd13b58-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 29 08:30:03 crc kubenswrapper[4795]: I1129 08:30:03.550176 4795 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc8556ef-f832-4c85-b401-b0732fd13b58-config-volume\") on node \"crc\" DevicePath \"\""
Nov 29 08:30:03 crc kubenswrapper[4795]: I1129 08:30:03.550192 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td4x6\" (UniqueName: \"kubernetes.io/projected/dc8556ef-f832-4c85-b401-b0732fd13b58-kube-api-access-td4x6\") on node \"crc\" DevicePath \"\""
Nov 29 08:30:03 crc kubenswrapper[4795]: I1129 08:30:03.778813 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406750-rpcjn" event={"ID":"dc8556ef-f832-4c85-b401-b0732fd13b58","Type":"ContainerDied","Data":"8524e2fb4d60f34bdc14eace264f23a961730fa3177ef88f8c0c48cd1e4129fd"}
Nov 29 08:30:03 crc kubenswrapper[4795]: I1129 08:30:03.779209 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8524e2fb4d60f34bdc14eace264f23a961730fa3177ef88f8c0c48cd1e4129fd"
Nov 29 08:30:03 crc kubenswrapper[4795]: I1129 08:30:03.778993 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406750-rpcjn"
Nov 29 08:30:04 crc kubenswrapper[4795]: I1129 08:30:04.326050 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406705-89jdp"]
Nov 29 08:30:04 crc kubenswrapper[4795]: I1129 08:30:04.338086 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406705-89jdp"]
Nov 29 08:30:06 crc kubenswrapper[4795]: I1129 08:30:06.292378 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74935be3-c96a-49da-97c0-7543a4217bd2" path="/var/lib/kubelet/pods/74935be3-c96a-49da-97c0-7543a4217bd2/volumes"
Nov 29 08:30:06 crc kubenswrapper[4795]: I1129 08:30:06.812734 4795 generic.go:334] "Generic (PLEG): container finished" podID="8223751f-5de9-4d5c-a9b2-200cf9c164ee" containerID="56a5445b14cf80e6f5584924ee58502b0f90b3b02d7e97167816bd453030a21c" exitCode=0
Nov 29 08:30:06 crc kubenswrapper[4795]: I1129 08:30:06.812777 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h" event={"ID":"8223751f-5de9-4d5c-a9b2-200cf9c164ee","Type":"ContainerDied","Data":"56a5445b14cf80e6f5584924ee58502b0f90b3b02d7e97167816bd453030a21c"}
Nov 29 08:30:08 crc kubenswrapper[4795]: I1129 08:30:08.307189 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h"
Nov 29 08:30:08 crc kubenswrapper[4795]: I1129 08:30:08.491398 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8223751f-5de9-4d5c-a9b2-200cf9c164ee-ssh-key\") pod \"8223751f-5de9-4d5c-a9b2-200cf9c164ee\" (UID: \"8223751f-5de9-4d5c-a9b2-200cf9c164ee\") "
Nov 29 08:30:08 crc kubenswrapper[4795]: I1129 08:30:08.491565 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68wzd\" (UniqueName: \"kubernetes.io/projected/8223751f-5de9-4d5c-a9b2-200cf9c164ee-kube-api-access-68wzd\") pod \"8223751f-5de9-4d5c-a9b2-200cf9c164ee\" (UID: \"8223751f-5de9-4d5c-a9b2-200cf9c164ee\") "
Nov 29 08:30:08 crc kubenswrapper[4795]: I1129 08:30:08.491614 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8223751f-5de9-4d5c-a9b2-200cf9c164ee-inventory\") pod \"8223751f-5de9-4d5c-a9b2-200cf9c164ee\" (UID: \"8223751f-5de9-4d5c-a9b2-200cf9c164ee\") "
Nov 29 08:30:08 crc kubenswrapper[4795]: I1129 08:30:08.491664 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8223751f-5de9-4d5c-a9b2-200cf9c164ee-libvirt-secret-0\") pod \"8223751f-5de9-4d5c-a9b2-200cf9c164ee\" (UID: \"8223751f-5de9-4d5c-a9b2-200cf9c164ee\") "
Nov 29 08:30:08 crc kubenswrapper[4795]: I1129 08:30:08.491763 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8223751f-5de9-4d5c-a9b2-200cf9c164ee-libvirt-combined-ca-bundle\") pod \"8223751f-5de9-4d5c-a9b2-200cf9c164ee\" (UID: \"8223751f-5de9-4d5c-a9b2-200cf9c164ee\") "
Nov 29 08:30:08 crc kubenswrapper[4795]: I1129 08:30:08.500279 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8223751f-5de9-4d5c-a9b2-200cf9c164ee-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "8223751f-5de9-4d5c-a9b2-200cf9c164ee" (UID: "8223751f-5de9-4d5c-a9b2-200cf9c164ee"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 08:30:08 crc kubenswrapper[4795]: I1129 08:30:08.500330 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8223751f-5de9-4d5c-a9b2-200cf9c164ee-kube-api-access-68wzd" (OuterVolumeSpecName: "kube-api-access-68wzd") pod "8223751f-5de9-4d5c-a9b2-200cf9c164ee" (UID: "8223751f-5de9-4d5c-a9b2-200cf9c164ee"). InnerVolumeSpecName "kube-api-access-68wzd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 08:30:08 crc kubenswrapper[4795]: I1129 08:30:08.525659 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8223751f-5de9-4d5c-a9b2-200cf9c164ee-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8223751f-5de9-4d5c-a9b2-200cf9c164ee" (UID: "8223751f-5de9-4d5c-a9b2-200cf9c164ee"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 08:30:08 crc kubenswrapper[4795]: I1129 08:30:08.525692 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8223751f-5de9-4d5c-a9b2-200cf9c164ee-inventory" (OuterVolumeSpecName: "inventory") pod "8223751f-5de9-4d5c-a9b2-200cf9c164ee" (UID: "8223751f-5de9-4d5c-a9b2-200cf9c164ee"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 08:30:08 crc kubenswrapper[4795]: I1129 08:30:08.531320 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8223751f-5de9-4d5c-a9b2-200cf9c164ee-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "8223751f-5de9-4d5c-a9b2-200cf9c164ee" (UID: "8223751f-5de9-4d5c-a9b2-200cf9c164ee"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 08:30:08 crc kubenswrapper[4795]: I1129 08:30:08.594781 4795 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8223751f-5de9-4d5c-a9b2-200cf9c164ee-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 29 08:30:08 crc kubenswrapper[4795]: I1129 08:30:08.594824 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68wzd\" (UniqueName: \"kubernetes.io/projected/8223751f-5de9-4d5c-a9b2-200cf9c164ee-kube-api-access-68wzd\") on node \"crc\" DevicePath \"\""
Nov 29 08:30:08 crc kubenswrapper[4795]: I1129 08:30:08.594842 4795 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8223751f-5de9-4d5c-a9b2-200cf9c164ee-inventory\") on node \"crc\" DevicePath \"\""
Nov 29 08:30:08 crc kubenswrapper[4795]: I1129 08:30:08.594855 4795 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8223751f-5de9-4d5c-a9b2-200cf9c164ee-libvirt-secret-0\") on node \"crc\" DevicePath \"\""
Nov 29 08:30:08 crc kubenswrapper[4795]: I1129 08:30:08.594867 4795 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8223751f-5de9-4d5c-a9b2-200cf9c164ee-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 29 08:30:08 crc kubenswrapper[4795]: I1129 08:30:08.836672 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h" event={"ID":"8223751f-5de9-4d5c-a9b2-200cf9c164ee","Type":"ContainerDied","Data":"8753179a95893cb1fa9196aa5d75303cdaaa31993af080f5c3e2af16ba26c809"}
Nov 29 08:30:08 crc kubenswrapper[4795]: I1129 08:30:08.836717 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8753179a95893cb1fa9196aa5d75303cdaaa31993af080f5c3e2af16ba26c809"
Nov 29 08:30:08 crc kubenswrapper[4795]: I1129 08:30:08.836774 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h"
Nov 29 08:30:08 crc kubenswrapper[4795]: I1129 08:30:08.961716 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-ss9hv"]
Nov 29 08:30:08 crc kubenswrapper[4795]: E1129 08:30:08.962228 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc8556ef-f832-4c85-b401-b0732fd13b58" containerName="collect-profiles"
Nov 29 08:30:08 crc kubenswrapper[4795]: I1129 08:30:08.962244 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc8556ef-f832-4c85-b401-b0732fd13b58" containerName="collect-profiles"
Nov 29 08:30:08 crc kubenswrapper[4795]: E1129 08:30:08.962280 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8223751f-5de9-4d5c-a9b2-200cf9c164ee" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Nov 29 08:30:08 crc kubenswrapper[4795]: I1129 08:30:08.962287 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="8223751f-5de9-4d5c-a9b2-200cf9c164ee" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Nov 29 08:30:08 crc kubenswrapper[4795]: I1129 08:30:08.962540 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="8223751f-5de9-4d5c-a9b2-200cf9c164ee" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Nov 29 08:30:08 crc kubenswrapper[4795]: I1129 08:30:08.962559 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc8556ef-f832-4c85-b401-b0732fd13b58" containerName="collect-profiles"
Nov 29 08:30:08 crc kubenswrapper[4795]: I1129 08:30:08.963376 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ss9hv"
Nov 29 08:30:08 crc kubenswrapper[4795]: I1129 08:30:08.975925 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nfhsg"
Nov 29 08:30:08 crc kubenswrapper[4795]: I1129 08:30:08.976165 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 29 08:30:08 crc kubenswrapper[4795]: I1129 08:30:08.976316 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 29 08:30:08 crc kubenswrapper[4795]: I1129 08:30:08.976452 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 29 08:30:08 crc kubenswrapper[4795]: I1129 08:30:08.976563 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config"
Nov 29 08:30:08 crc kubenswrapper[4795]: I1129 08:30:08.976704 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Nov 29 08:30:08 crc kubenswrapper[4795]: I1129 08:30:08.976785 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Nov 29 08:30:09 crc kubenswrapper[4795]: I1129 08:30:09.056031 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-ss9hv"]
Nov 29 08:30:09 crc kubenswrapper[4795]: I1129 08:30:09.105241 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ss9hv\" (UID: \"51f82fb3-fb43-4802-9ce6-46930382229b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ss9hv"
Nov 29 08:30:09 crc kubenswrapper[4795]: I1129 08:30:09.105317 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ss9hv\" (UID: \"51f82fb3-fb43-4802-9ce6-46930382229b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ss9hv"
Nov 29 08:30:09 crc kubenswrapper[4795]: I1129 08:30:09.105348 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ss9hv\" (UID: \"51f82fb3-fb43-4802-9ce6-46930382229b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ss9hv"
Nov 29 08:30:09 crc kubenswrapper[4795]: I1129 08:30:09.105408 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/51f82fb3-fb43-4802-9ce6-46930382229b-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ss9hv\" (UID: \"51f82fb3-fb43-4802-9ce6-46930382229b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ss9hv"
Nov 29 08:30:09 crc kubenswrapper[4795]: I1129 08:30:09.105457 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ss9hv\" (UID: \"51f82fb3-fb43-4802-9ce6-46930382229b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ss9hv"
Nov 29 08:30:09 crc kubenswrapper[4795]: I1129 08:30:09.105548 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ss9hv\" (UID: \"51f82fb3-fb43-4802-9ce6-46930382229b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ss9hv"
Nov 29 08:30:09 crc kubenswrapper[4795]: I1129 08:30:09.105611 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ss9hv\" (UID: \"51f82fb3-fb43-4802-9ce6-46930382229b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ss9hv"
Nov 29 08:30:09 crc kubenswrapper[4795]: I1129 08:30:09.105679 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ss9hv\" (UID: \"51f82fb3-fb43-4802-9ce6-46930382229b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ss9hv"
Nov 29 08:30:09 crc kubenswrapper[4795]: I1129 08:30:09.105728 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzv58\" (UniqueName: \"kubernetes.io/projected/51f82fb3-fb43-4802-9ce6-46930382229b-kube-api-access-vzv58\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ss9hv\" (UID: \"51f82fb3-fb43-4802-9ce6-46930382229b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ss9hv"
Nov 29 08:30:09 crc kubenswrapper[4795]: I1129 08:30:09.207789 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/51f82fb3-fb43-4802-9ce6-46930382229b-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ss9hv\" (UID: \"51f82fb3-fb43-4802-9ce6-46930382229b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ss9hv"
Nov 29 08:30:09 crc kubenswrapper[4795]: I1129 08:30:09.207866 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ss9hv\" (UID: \"51f82fb3-fb43-4802-9ce6-46930382229b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ss9hv"
Nov 29 08:30:09 crc kubenswrapper[4795]: I1129 08:30:09.207956 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ss9hv\" (UID: \"51f82fb3-fb43-4802-9ce6-46930382229b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ss9hv"
Nov 29 08:30:09 crc kubenswrapper[4795]: I1129 08:30:09.207997 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ss9hv\" (UID: \"51f82fb3-fb43-4802-9ce6-46930382229b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ss9hv"
Nov 29 08:30:09 crc kubenswrapper[4795]: I1129 08:30:09.208050 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ss9hv\" (UID: \"51f82fb3-fb43-4802-9ce6-46930382229b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ss9hv"
Nov 29 08:30:09 crc kubenswrapper[4795]: I1129 08:30:09.208088 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzv58\" (UniqueName: \"kubernetes.io/projected/51f82fb3-fb43-4802-9ce6-46930382229b-kube-api-access-vzv58\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ss9hv\" (UID: \"51f82fb3-fb43-4802-9ce6-46930382229b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ss9hv"
Nov 29 08:30:09 crc kubenswrapper[4795]: I1129 08:30:09.208115 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ss9hv\" (UID: \"51f82fb3-fb43-4802-9ce6-46930382229b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ss9hv"
Nov 29 08:30:09 crc kubenswrapper[4795]: I1129 08:30:09.208139 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ss9hv\" (UID: \"51f82fb3-fb43-4802-9ce6-46930382229b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ss9hv"
Nov 29 08:30:09 crc kubenswrapper[4795]: I1129 08:30:09.208166 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ss9hv\" (UID: \"51f82fb3-fb43-4802-9ce6-46930382229b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ss9hv"
Nov 29 08:30:09 crc kubenswrapper[4795]: I1129 08:30:09.210759 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/51f82fb3-fb43-4802-9ce6-46930382229b-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ss9hv\" (UID: \"51f82fb3-fb43-4802-9ce6-46930382229b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ss9hv"
Nov 29 08:30:09 crc kubenswrapper[4795]: I1129 08:30:09.212991 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ss9hv\" (UID: \"51f82fb3-fb43-4802-9ce6-46930382229b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ss9hv"
Nov 29 08:30:09 crc kubenswrapper[4795]: I1129 08:30:09.213665 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ss9hv\" (UID: \"51f82fb3-fb43-4802-9ce6-46930382229b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ss9hv"
Nov 29 08:30:09 crc kubenswrapper[4795]: I1129 08:30:09.228164 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ss9hv\" (UID: \"51f82fb3-fb43-4802-9ce6-46930382229b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ss9hv"
Nov 29 08:30:09 crc kubenswrapper[4795]: I1129 08:30:09.228508 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ss9hv\" (UID: \"51f82fb3-fb43-4802-9ce6-46930382229b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ss9hv"
Nov 29 08:30:09 crc kubenswrapper[4795]: I1129 08:30:09.228503 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ss9hv\" (UID: \"51f82fb3-fb43-4802-9ce6-46930382229b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ss9hv"
Nov 29 08:30:09 crc kubenswrapper[4795]: I1129 08:30:09.228538 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ss9hv\" (UID: \"51f82fb3-fb43-4802-9ce6-46930382229b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ss9hv"
Nov 29 08:30:09 crc kubenswrapper[4795]: I1129 08:30:09.229065 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ss9hv\" (UID: \"51f82fb3-fb43-4802-9ce6-46930382229b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ss9hv"
Nov 29 08:30:09 crc kubenswrapper[4795]: I1129 08:30:09.232207 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzv58\" (UniqueName: \"kubernetes.io/projected/51f82fb3-fb43-4802-9ce6-46930382229b-kube-api-access-vzv58\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ss9hv\" (UID: \"51f82fb3-fb43-4802-9ce6-46930382229b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ss9hv"
Nov 29 08:30:09 crc kubenswrapper[4795]: I1129 08:30:09.287007 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ss9hv"
Nov 29 08:30:09 crc kubenswrapper[4795]: I1129 08:30:09.836070 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-ss9hv"]
Nov 29 08:30:09 crc kubenswrapper[4795]: I1129 08:30:09.843923 4795 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 29 08:30:10 crc kubenswrapper[4795]: I1129 08:30:10.856822 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ss9hv" event={"ID":"51f82fb3-fb43-4802-9ce6-46930382229b","Type":"ContainerStarted","Data":"c4e86150587d7c6c09500b4d25bc2032301ba37dade69dd3e18f9c24bec65487"}
Nov 29 08:30:11 crc kubenswrapper[4795]: I1129 08:30:11.868842 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ss9hv" event={"ID":"51f82fb3-fb43-4802-9ce6-46930382229b","Type":"ContainerStarted","Data":"bcd27f014d5a2e2fc8e1c0524f3680002640f56ae89af4fc5db4e0573ae4e431"}
Nov 29 08:30:11 crc kubenswrapper[4795]: I1129 08:30:11.895086 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ss9hv" podStartSLOduration=3.051152951 podStartE2EDuration="3.895066268s" podCreationTimestamp="2025-11-29 08:30:08 +0000 UTC" firstStartedPulling="2025-11-29 08:30:09.843628304 +0000 UTC m=+3055.819204114" lastFinishedPulling="2025-11-29 08:30:10.687541641 +0000 UTC m=+3056.663117431" observedRunningTime="2025-11-29 08:30:11.88668538 +0000 UTC m=+3057.862261170" watchObservedRunningTime="2025-11-29 08:30:11.895066268 +0000 UTC m=+3057.870642058"
Nov 29 08:30:32 crc kubenswrapper[4795]: I1129 08:30:32.734069 4795 scope.go:117] "RemoveContainer" containerID="d86ccfc33badf6c25e96bbd8a79ec62946be59723467fe91e294d441702dc110"
Nov 29 08:31:41 crc kubenswrapper[4795]: I1129 08:31:41.940953 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 29 08:31:41 crc kubenswrapper[4795]: I1129 08:31:41.941461 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 29 08:32:11 crc kubenswrapper[4795]: I1129 08:32:11.941887 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 29 08:32:11 crc kubenswrapper[4795]: I1129 08:32:11.942709 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 29 08:32:41 crc kubenswrapper[4795]: I1129 08:32:41.941474 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 29 08:32:41 crc kubenswrapper[4795]: I1129 08:32:41.942110 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 29 08:32:41 crc kubenswrapper[4795]: I1129 08:32:41.942157 4795 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6"
Nov 29 08:32:41 crc kubenswrapper[4795]: I1129 08:32:41.943133 4795 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"08469485735dcf4c196646b777c5b1c73e97106d4c854403de03ac1f9b64edb4"} pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 29 08:32:41 crc kubenswrapper[4795]: I1129 08:32:41.943263 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" containerID="cri-o://08469485735dcf4c196646b777c5b1c73e97106d4c854403de03ac1f9b64edb4" gracePeriod=600
Nov 29 08:32:42 crc kubenswrapper[4795]: E1129 08:32:42.075032 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1"
Nov 29 08:32:42 crc kubenswrapper[4795]: I1129 08:32:42.187459 4795 generic.go:334] "Generic (PLEG): container finished" podID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerID="08469485735dcf4c196646b777c5b1c73e97106d4c854403de03ac1f9b64edb4" exitCode=0
Nov 29 08:32:42 crc kubenswrapper[4795]: I1129 08:32:42.187522 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerDied","Data":"08469485735dcf4c196646b777c5b1c73e97106d4c854403de03ac1f9b64edb4"}
Nov 29 08:32:42 crc kubenswrapper[4795]: I1129 08:32:42.187806 4795 scope.go:117] "RemoveContainer" containerID="58ab07ae66662b6ecfd323741dc69073d735bd5a011805572a265e99d0a11e8b"
Nov 29 08:32:42 crc kubenswrapper[4795]: I1129 08:32:42.188639 4795 scope.go:117] "RemoveContainer" containerID="08469485735dcf4c196646b777c5b1c73e97106d4c854403de03ac1f9b64edb4"
Nov 29 08:32:42 crc kubenswrapper[4795]: E1129 08:32:42.189002 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1"
Nov 29 08:32:55 crc kubenswrapper[4795]: I1129 08:32:55.276321 4795 scope.go:117] "RemoveContainer" containerID="08469485735dcf4c196646b777c5b1c73e97106d4c854403de03ac1f9b64edb4"
Nov 29 08:32:55 crc kubenswrapper[4795]: E1129 08:32:55.277141 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1"
Nov 29 08:33:08 crc kubenswrapper[4795]: I1129 08:33:08.482467 4795 generic.go:334] "Generic (PLEG): container finished"
podID="51f82fb3-fb43-4802-9ce6-46930382229b" containerID="bcd27f014d5a2e2fc8e1c0524f3680002640f56ae89af4fc5db4e0573ae4e431" exitCode=0 Nov 29 08:33:08 crc kubenswrapper[4795]: I1129 08:33:08.482561 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ss9hv" event={"ID":"51f82fb3-fb43-4802-9ce6-46930382229b","Type":"ContainerDied","Data":"bcd27f014d5a2e2fc8e1c0524f3680002640f56ae89af4fc5db4e0573ae4e431"} Nov 29 08:33:09 crc kubenswrapper[4795]: I1129 08:33:09.985056 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ss9hv" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.041464 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-nova-migration-ssh-key-0\") pod \"51f82fb3-fb43-4802-9ce6-46930382229b\" (UID: \"51f82fb3-fb43-4802-9ce6-46930382229b\") " Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.041526 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzv58\" (UniqueName: \"kubernetes.io/projected/51f82fb3-fb43-4802-9ce6-46930382229b-kube-api-access-vzv58\") pod \"51f82fb3-fb43-4802-9ce6-46930382229b\" (UID: \"51f82fb3-fb43-4802-9ce6-46930382229b\") " Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.042349 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-nova-cell1-compute-config-1\") pod \"51f82fb3-fb43-4802-9ce6-46930382229b\" (UID: \"51f82fb3-fb43-4802-9ce6-46930382229b\") " Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.042435 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/51f82fb3-fb43-4802-9ce6-46930382229b-nova-extra-config-0\") pod \"51f82fb3-fb43-4802-9ce6-46930382229b\" (UID: \"51f82fb3-fb43-4802-9ce6-46930382229b\") " Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.042543 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-ssh-key\") pod \"51f82fb3-fb43-4802-9ce6-46930382229b\" (UID: \"51f82fb3-fb43-4802-9ce6-46930382229b\") " Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.042579 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-nova-migration-ssh-key-1\") pod \"51f82fb3-fb43-4802-9ce6-46930382229b\" (UID: \"51f82fb3-fb43-4802-9ce6-46930382229b\") " Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.042693 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-nova-combined-ca-bundle\") pod \"51f82fb3-fb43-4802-9ce6-46930382229b\" (UID: \"51f82fb3-fb43-4802-9ce6-46930382229b\") " Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.042816 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-nova-cell1-compute-config-0\") pod \"51f82fb3-fb43-4802-9ce6-46930382229b\" (UID: \"51f82fb3-fb43-4802-9ce6-46930382229b\") " Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.042855 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-inventory\") pod \"51f82fb3-fb43-4802-9ce6-46930382229b\" (UID: \"51f82fb3-fb43-4802-9ce6-46930382229b\") " Nov 29 08:33:10 crc 
kubenswrapper[4795]: I1129 08:33:10.055969 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51f82fb3-fb43-4802-9ce6-46930382229b-kube-api-access-vzv58" (OuterVolumeSpecName: "kube-api-access-vzv58") pod "51f82fb3-fb43-4802-9ce6-46930382229b" (UID: "51f82fb3-fb43-4802-9ce6-46930382229b"). InnerVolumeSpecName "kube-api-access-vzv58". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.057667 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "51f82fb3-fb43-4802-9ce6-46930382229b" (UID: "51f82fb3-fb43-4802-9ce6-46930382229b"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.076313 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "51f82fb3-fb43-4802-9ce6-46930382229b" (UID: "51f82fb3-fb43-4802-9ce6-46930382229b"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.078407 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "51f82fb3-fb43-4802-9ce6-46930382229b" (UID: "51f82fb3-fb43-4802-9ce6-46930382229b"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.080565 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "51f82fb3-fb43-4802-9ce6-46930382229b" (UID: "51f82fb3-fb43-4802-9ce6-46930382229b"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.082207 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "51f82fb3-fb43-4802-9ce6-46930382229b" (UID: "51f82fb3-fb43-4802-9ce6-46930382229b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.091531 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51f82fb3-fb43-4802-9ce6-46930382229b-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "51f82fb3-fb43-4802-9ce6-46930382229b" (UID: "51f82fb3-fb43-4802-9ce6-46930382229b"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.097127 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-inventory" (OuterVolumeSpecName: "inventory") pod "51f82fb3-fb43-4802-9ce6-46930382229b" (UID: "51f82fb3-fb43-4802-9ce6-46930382229b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.099396 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "51f82fb3-fb43-4802-9ce6-46930382229b" (UID: "51f82fb3-fb43-4802-9ce6-46930382229b"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.146513 4795 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.146546 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzv58\" (UniqueName: \"kubernetes.io/projected/51f82fb3-fb43-4802-9ce6-46930382229b-kube-api-access-vzv58\") on node \"crc\" DevicePath \"\"" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.146555 4795 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.146565 4795 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/51f82fb3-fb43-4802-9ce6-46930382229b-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.146574 4795 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.146583 4795 
reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.146603 4795 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.146613 4795 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.146623 4795 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51f82fb3-fb43-4802-9ce6-46930382229b-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.276093 4795 scope.go:117] "RemoveContainer" containerID="08469485735dcf4c196646b777c5b1c73e97106d4c854403de03ac1f9b64edb4" Nov 29 08:33:10 crc kubenswrapper[4795]: E1129 08:33:10.276506 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.506231 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ss9hv" 
event={"ID":"51f82fb3-fb43-4802-9ce6-46930382229b","Type":"ContainerDied","Data":"c4e86150587d7c6c09500b4d25bc2032301ba37dade69dd3e18f9c24bec65487"} Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.506275 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4e86150587d7c6c09500b4d25bc2032301ba37dade69dd3e18f9c24bec65487" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.506332 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ss9hv" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.639398 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-z67pl"] Nov 29 08:33:10 crc kubenswrapper[4795]: E1129 08:33:10.640222 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51f82fb3-fb43-4802-9ce6-46930382229b" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.640236 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="51f82fb3-fb43-4802-9ce6-46930382229b" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.640542 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="51f82fb3-fb43-4802-9ce6-46930382229b" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.641378 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-z67pl" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.646233 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.646931 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.647076 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.647247 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nfhsg" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.647404 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.651155 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-z67pl"] Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.666417 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/de36795d-fe29-4964-bcc6-c63bf2eda290-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-z67pl\" (UID: \"de36795d-fe29-4964-bcc6-c63bf2eda290\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-z67pl" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.666471 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/de36795d-fe29-4964-bcc6-c63bf2eda290-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-z67pl\" (UID: 
\"de36795d-fe29-4964-bcc6-c63bf2eda290\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-z67pl" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.666578 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/de36795d-fe29-4964-bcc6-c63bf2eda290-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-z67pl\" (UID: \"de36795d-fe29-4964-bcc6-c63bf2eda290\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-z67pl" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.666857 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c95zp\" (UniqueName: \"kubernetes.io/projected/de36795d-fe29-4964-bcc6-c63bf2eda290-kube-api-access-c95zp\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-z67pl\" (UID: \"de36795d-fe29-4964-bcc6-c63bf2eda290\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-z67pl" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.666970 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de36795d-fe29-4964-bcc6-c63bf2eda290-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-z67pl\" (UID: \"de36795d-fe29-4964-bcc6-c63bf2eda290\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-z67pl" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.667059 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de36795d-fe29-4964-bcc6-c63bf2eda290-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-z67pl\" (UID: \"de36795d-fe29-4964-bcc6-c63bf2eda290\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-z67pl" Nov 29 
08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.667100 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/de36795d-fe29-4964-bcc6-c63bf2eda290-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-z67pl\" (UID: \"de36795d-fe29-4964-bcc6-c63bf2eda290\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-z67pl" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.769638 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/de36795d-fe29-4964-bcc6-c63bf2eda290-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-z67pl\" (UID: \"de36795d-fe29-4964-bcc6-c63bf2eda290\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-z67pl" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.769696 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/de36795d-fe29-4964-bcc6-c63bf2eda290-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-z67pl\" (UID: \"de36795d-fe29-4964-bcc6-c63bf2eda290\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-z67pl" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.769714 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/de36795d-fe29-4964-bcc6-c63bf2eda290-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-z67pl\" (UID: \"de36795d-fe29-4964-bcc6-c63bf2eda290\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-z67pl" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.769762 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-c95zp\" (UniqueName: \"kubernetes.io/projected/de36795d-fe29-4964-bcc6-c63bf2eda290-kube-api-access-c95zp\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-z67pl\" (UID: \"de36795d-fe29-4964-bcc6-c63bf2eda290\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-z67pl" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.769799 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de36795d-fe29-4964-bcc6-c63bf2eda290-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-z67pl\" (UID: \"de36795d-fe29-4964-bcc6-c63bf2eda290\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-z67pl" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.769832 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de36795d-fe29-4964-bcc6-c63bf2eda290-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-z67pl\" (UID: \"de36795d-fe29-4964-bcc6-c63bf2eda290\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-z67pl" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.769872 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/de36795d-fe29-4964-bcc6-c63bf2eda290-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-z67pl\" (UID: \"de36795d-fe29-4964-bcc6-c63bf2eda290\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-z67pl" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.775640 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de36795d-fe29-4964-bcc6-c63bf2eda290-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-z67pl\" (UID: \"de36795d-fe29-4964-bcc6-c63bf2eda290\") 
" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-z67pl" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.775751 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/de36795d-fe29-4964-bcc6-c63bf2eda290-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-z67pl\" (UID: \"de36795d-fe29-4964-bcc6-c63bf2eda290\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-z67pl" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.775832 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de36795d-fe29-4964-bcc6-c63bf2eda290-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-z67pl\" (UID: \"de36795d-fe29-4964-bcc6-c63bf2eda290\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-z67pl" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.776260 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/de36795d-fe29-4964-bcc6-c63bf2eda290-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-z67pl\" (UID: \"de36795d-fe29-4964-bcc6-c63bf2eda290\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-z67pl" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.776730 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/de36795d-fe29-4964-bcc6-c63bf2eda290-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-z67pl\" (UID: \"de36795d-fe29-4964-bcc6-c63bf2eda290\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-z67pl" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.784129 4795 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/de36795d-fe29-4964-bcc6-c63bf2eda290-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-z67pl\" (UID: \"de36795d-fe29-4964-bcc6-c63bf2eda290\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-z67pl" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.791043 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c95zp\" (UniqueName: \"kubernetes.io/projected/de36795d-fe29-4964-bcc6-c63bf2eda290-kube-api-access-c95zp\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-z67pl\" (UID: \"de36795d-fe29-4964-bcc6-c63bf2eda290\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-z67pl" Nov 29 08:33:10 crc kubenswrapper[4795]: I1129 08:33:10.974010 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-z67pl" Nov 29 08:33:11 crc kubenswrapper[4795]: I1129 08:33:11.594497 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-z67pl"] Nov 29 08:33:12 crc kubenswrapper[4795]: I1129 08:33:12.531777 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-z67pl" event={"ID":"de36795d-fe29-4964-bcc6-c63bf2eda290","Type":"ContainerStarted","Data":"b50e4f2e3cb503ba7c53955f0a7a4e5b38e256013c40fcc0bfbd525611c2dedf"} Nov 29 08:33:15 crc kubenswrapper[4795]: I1129 08:33:15.561906 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-z67pl" event={"ID":"de36795d-fe29-4964-bcc6-c63bf2eda290","Type":"ContainerStarted","Data":"53843b3f5805b9e12a612199f4648a71f4502998100751ae575b4efe4c4e7dfe"} Nov 29 08:33:15 crc kubenswrapper[4795]: I1129 08:33:15.587995 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-z67pl" podStartSLOduration=2.917217831 podStartE2EDuration="5.587974343s" podCreationTimestamp="2025-11-29 08:33:10 +0000 UTC" firstStartedPulling="2025-11-29 08:33:11.59807413 +0000 UTC m=+3237.573649920" lastFinishedPulling="2025-11-29 08:33:14.268830642 +0000 UTC m=+3240.244406432" observedRunningTime="2025-11-29 08:33:15.586363248 +0000 UTC m=+3241.561939038" watchObservedRunningTime="2025-11-29 08:33:15.587974343 +0000 UTC m=+3241.563550133" Nov 29 08:33:25 crc kubenswrapper[4795]: I1129 08:33:25.276100 4795 scope.go:117] "RemoveContainer" containerID="08469485735dcf4c196646b777c5b1c73e97106d4c854403de03ac1f9b64edb4" Nov 29 08:33:25 crc kubenswrapper[4795]: E1129 08:33:25.276721 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:33:40 crc kubenswrapper[4795]: I1129 08:33:40.276853 4795 scope.go:117] "RemoveContainer" containerID="08469485735dcf4c196646b777c5b1c73e97106d4c854403de03ac1f9b64edb4" Nov 29 08:33:40 crc kubenswrapper[4795]: E1129 08:33:40.277923 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:33:53 crc kubenswrapper[4795]: I1129 08:33:53.275190 4795 scope.go:117] "RemoveContainer" 
containerID="08469485735dcf4c196646b777c5b1c73e97106d4c854403de03ac1f9b64edb4" Nov 29 08:33:53 crc kubenswrapper[4795]: E1129 08:33:53.276300 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:34:05 crc kubenswrapper[4795]: I1129 08:34:05.276937 4795 scope.go:117] "RemoveContainer" containerID="08469485735dcf4c196646b777c5b1c73e97106d4c854403de03ac1f9b64edb4" Nov 29 08:34:05 crc kubenswrapper[4795]: E1129 08:34:05.277840 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:34:16 crc kubenswrapper[4795]: I1129 08:34:16.275904 4795 scope.go:117] "RemoveContainer" containerID="08469485735dcf4c196646b777c5b1c73e97106d4c854403de03ac1f9b64edb4" Nov 29 08:34:16 crc kubenswrapper[4795]: E1129 08:34:16.276791 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:34:30 crc kubenswrapper[4795]: I1129 08:34:30.276585 4795 scope.go:117] 
"RemoveContainer" containerID="08469485735dcf4c196646b777c5b1c73e97106d4c854403de03ac1f9b64edb4" Nov 29 08:34:30 crc kubenswrapper[4795]: E1129 08:34:30.277395 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:34:38 crc kubenswrapper[4795]: I1129 08:34:38.642031 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rgt2v"] Nov 29 08:34:38 crc kubenswrapper[4795]: I1129 08:34:38.645807 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rgt2v" Nov 29 08:34:38 crc kubenswrapper[4795]: I1129 08:34:38.656138 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rgt2v"] Nov 29 08:34:38 crc kubenswrapper[4795]: I1129 08:34:38.776505 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1cd1870-e582-4296-956e-18ea9d637877-utilities\") pod \"redhat-operators-rgt2v\" (UID: \"c1cd1870-e582-4296-956e-18ea9d637877\") " pod="openshift-marketplace/redhat-operators-rgt2v" Nov 29 08:34:38 crc kubenswrapper[4795]: I1129 08:34:38.776841 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvvd2\" (UniqueName: \"kubernetes.io/projected/c1cd1870-e582-4296-956e-18ea9d637877-kube-api-access-kvvd2\") pod \"redhat-operators-rgt2v\" (UID: \"c1cd1870-e582-4296-956e-18ea9d637877\") " pod="openshift-marketplace/redhat-operators-rgt2v" Nov 29 08:34:38 crc kubenswrapper[4795]: I1129 08:34:38.776992 
4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1cd1870-e582-4296-956e-18ea9d637877-catalog-content\") pod \"redhat-operators-rgt2v\" (UID: \"c1cd1870-e582-4296-956e-18ea9d637877\") " pod="openshift-marketplace/redhat-operators-rgt2v" Nov 29 08:34:38 crc kubenswrapper[4795]: I1129 08:34:38.879289 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1cd1870-e582-4296-956e-18ea9d637877-catalog-content\") pod \"redhat-operators-rgt2v\" (UID: \"c1cd1870-e582-4296-956e-18ea9d637877\") " pod="openshift-marketplace/redhat-operators-rgt2v" Nov 29 08:34:38 crc kubenswrapper[4795]: I1129 08:34:38.879515 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1cd1870-e582-4296-956e-18ea9d637877-utilities\") pod \"redhat-operators-rgt2v\" (UID: \"c1cd1870-e582-4296-956e-18ea9d637877\") " pod="openshift-marketplace/redhat-operators-rgt2v" Nov 29 08:34:38 crc kubenswrapper[4795]: I1129 08:34:38.879632 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvvd2\" (UniqueName: \"kubernetes.io/projected/c1cd1870-e582-4296-956e-18ea9d637877-kube-api-access-kvvd2\") pod \"redhat-operators-rgt2v\" (UID: \"c1cd1870-e582-4296-956e-18ea9d637877\") " pod="openshift-marketplace/redhat-operators-rgt2v" Nov 29 08:34:38 crc kubenswrapper[4795]: I1129 08:34:38.879878 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1cd1870-e582-4296-956e-18ea9d637877-catalog-content\") pod \"redhat-operators-rgt2v\" (UID: \"c1cd1870-e582-4296-956e-18ea9d637877\") " pod="openshift-marketplace/redhat-operators-rgt2v" Nov 29 08:34:38 crc kubenswrapper[4795]: I1129 08:34:38.879993 4795 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1cd1870-e582-4296-956e-18ea9d637877-utilities\") pod \"redhat-operators-rgt2v\" (UID: \"c1cd1870-e582-4296-956e-18ea9d637877\") " pod="openshift-marketplace/redhat-operators-rgt2v" Nov 29 08:34:38 crc kubenswrapper[4795]: I1129 08:34:38.900883 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvvd2\" (UniqueName: \"kubernetes.io/projected/c1cd1870-e582-4296-956e-18ea9d637877-kube-api-access-kvvd2\") pod \"redhat-operators-rgt2v\" (UID: \"c1cd1870-e582-4296-956e-18ea9d637877\") " pod="openshift-marketplace/redhat-operators-rgt2v" Nov 29 08:34:39 crc kubenswrapper[4795]: I1129 08:34:39.010563 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rgt2v" Nov 29 08:34:39 crc kubenswrapper[4795]: I1129 08:34:39.613468 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rgt2v"] Nov 29 08:34:39 crc kubenswrapper[4795]: I1129 08:34:39.912945 4795 generic.go:334] "Generic (PLEG): container finished" podID="c1cd1870-e582-4296-956e-18ea9d637877" containerID="57a50b70c1d3a236734ac9f37b507221348e2f238997d772592125debd76012b" exitCode=0 Nov 29 08:34:39 crc kubenswrapper[4795]: I1129 08:34:39.913009 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rgt2v" event={"ID":"c1cd1870-e582-4296-956e-18ea9d637877","Type":"ContainerDied","Data":"57a50b70c1d3a236734ac9f37b507221348e2f238997d772592125debd76012b"} Nov 29 08:34:39 crc kubenswrapper[4795]: I1129 08:34:39.913254 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rgt2v" event={"ID":"c1cd1870-e582-4296-956e-18ea9d637877","Type":"ContainerStarted","Data":"f634fc2ac23e0e1d452c8f4ce68b0b400d6e1e1edab2c46b6ce5ff121eb388e5"} Nov 29 08:34:40 crc kubenswrapper[4795]: I1129 08:34:40.924085 4795 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rgt2v" event={"ID":"c1cd1870-e582-4296-956e-18ea9d637877","Type":"ContainerStarted","Data":"23d7add3dd0ec2c06638892f8365faeb120f6ef97dc3f0ba5146c16204907846"} Nov 29 08:34:43 crc kubenswrapper[4795]: I1129 08:34:43.279405 4795 scope.go:117] "RemoveContainer" containerID="08469485735dcf4c196646b777c5b1c73e97106d4c854403de03ac1f9b64edb4" Nov 29 08:34:43 crc kubenswrapper[4795]: E1129 08:34:43.280214 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:34:44 crc kubenswrapper[4795]: I1129 08:34:44.752921 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tqdv6"] Nov 29 08:34:44 crc kubenswrapper[4795]: I1129 08:34:44.756120 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tqdv6" Nov 29 08:34:44 crc kubenswrapper[4795]: I1129 08:34:44.780879 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tqdv6"] Nov 29 08:34:44 crc kubenswrapper[4795]: I1129 08:34:44.785560 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cb490cf-3d04-4f08-93c3-51834457ccbc-catalog-content\") pod \"community-operators-tqdv6\" (UID: \"3cb490cf-3d04-4f08-93c3-51834457ccbc\") " pod="openshift-marketplace/community-operators-tqdv6" Nov 29 08:34:44 crc kubenswrapper[4795]: I1129 08:34:44.785689 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjxhn\" (UniqueName: \"kubernetes.io/projected/3cb490cf-3d04-4f08-93c3-51834457ccbc-kube-api-access-tjxhn\") pod \"community-operators-tqdv6\" (UID: \"3cb490cf-3d04-4f08-93c3-51834457ccbc\") " pod="openshift-marketplace/community-operators-tqdv6" Nov 29 08:34:44 crc kubenswrapper[4795]: I1129 08:34:44.785827 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cb490cf-3d04-4f08-93c3-51834457ccbc-utilities\") pod \"community-operators-tqdv6\" (UID: \"3cb490cf-3d04-4f08-93c3-51834457ccbc\") " pod="openshift-marketplace/community-operators-tqdv6" Nov 29 08:34:44 crc kubenswrapper[4795]: I1129 08:34:44.888047 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cb490cf-3d04-4f08-93c3-51834457ccbc-catalog-content\") pod \"community-operators-tqdv6\" (UID: \"3cb490cf-3d04-4f08-93c3-51834457ccbc\") " pod="openshift-marketplace/community-operators-tqdv6" Nov 29 08:34:44 crc kubenswrapper[4795]: I1129 08:34:44.888204 4795 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-tjxhn\" (UniqueName: \"kubernetes.io/projected/3cb490cf-3d04-4f08-93c3-51834457ccbc-kube-api-access-tjxhn\") pod \"community-operators-tqdv6\" (UID: \"3cb490cf-3d04-4f08-93c3-51834457ccbc\") " pod="openshift-marketplace/community-operators-tqdv6" Nov 29 08:34:44 crc kubenswrapper[4795]: I1129 08:34:44.888483 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cb490cf-3d04-4f08-93c3-51834457ccbc-utilities\") pod \"community-operators-tqdv6\" (UID: \"3cb490cf-3d04-4f08-93c3-51834457ccbc\") " pod="openshift-marketplace/community-operators-tqdv6" Nov 29 08:34:44 crc kubenswrapper[4795]: I1129 08:34:44.888627 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cb490cf-3d04-4f08-93c3-51834457ccbc-catalog-content\") pod \"community-operators-tqdv6\" (UID: \"3cb490cf-3d04-4f08-93c3-51834457ccbc\") " pod="openshift-marketplace/community-operators-tqdv6" Nov 29 08:34:44 crc kubenswrapper[4795]: I1129 08:34:44.888976 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cb490cf-3d04-4f08-93c3-51834457ccbc-utilities\") pod \"community-operators-tqdv6\" (UID: \"3cb490cf-3d04-4f08-93c3-51834457ccbc\") " pod="openshift-marketplace/community-operators-tqdv6" Nov 29 08:34:44 crc kubenswrapper[4795]: I1129 08:34:44.916546 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjxhn\" (UniqueName: \"kubernetes.io/projected/3cb490cf-3d04-4f08-93c3-51834457ccbc-kube-api-access-tjxhn\") pod \"community-operators-tqdv6\" (UID: \"3cb490cf-3d04-4f08-93c3-51834457ccbc\") " pod="openshift-marketplace/community-operators-tqdv6" Nov 29 08:34:44 crc kubenswrapper[4795]: I1129 08:34:44.965540 4795 generic.go:334] "Generic (PLEG): container finished" 
podID="c1cd1870-e582-4296-956e-18ea9d637877" containerID="23d7add3dd0ec2c06638892f8365faeb120f6ef97dc3f0ba5146c16204907846" exitCode=0 Nov 29 08:34:44 crc kubenswrapper[4795]: I1129 08:34:44.966137 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rgt2v" event={"ID":"c1cd1870-e582-4296-956e-18ea9d637877","Type":"ContainerDied","Data":"23d7add3dd0ec2c06638892f8365faeb120f6ef97dc3f0ba5146c16204907846"} Nov 29 08:34:45 crc kubenswrapper[4795]: I1129 08:34:45.085440 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tqdv6" Nov 29 08:34:45 crc kubenswrapper[4795]: W1129 08:34:45.928531 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3cb490cf_3d04_4f08_93c3_51834457ccbc.slice/crio-0317ae109c56d455bdb9c69dfd720dd8a3e4883d483d0cbec959bb6db1215eba WatchSource:0}: Error finding container 0317ae109c56d455bdb9c69dfd720dd8a3e4883d483d0cbec959bb6db1215eba: Status 404 returned error can't find the container with id 0317ae109c56d455bdb9c69dfd720dd8a3e4883d483d0cbec959bb6db1215eba Nov 29 08:34:45 crc kubenswrapper[4795]: I1129 08:34:45.929928 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tqdv6"] Nov 29 08:34:45 crc kubenswrapper[4795]: I1129 08:34:45.997467 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tqdv6" event={"ID":"3cb490cf-3d04-4f08-93c3-51834457ccbc","Type":"ContainerStarted","Data":"0317ae109c56d455bdb9c69dfd720dd8a3e4883d483d0cbec959bb6db1215eba"} Nov 29 08:34:47 crc kubenswrapper[4795]: I1129 08:34:47.017305 4795 generic.go:334] "Generic (PLEG): container finished" podID="3cb490cf-3d04-4f08-93c3-51834457ccbc" containerID="12861ae6731d1ec84b86abf34e2abdd1ea196abf1ad285e035cc8e75dbc42c73" exitCode=0 Nov 29 08:34:47 crc kubenswrapper[4795]: I1129 
08:34:47.017398 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tqdv6" event={"ID":"3cb490cf-3d04-4f08-93c3-51834457ccbc","Type":"ContainerDied","Data":"12861ae6731d1ec84b86abf34e2abdd1ea196abf1ad285e035cc8e75dbc42c73"} Nov 29 08:34:47 crc kubenswrapper[4795]: I1129 08:34:47.024397 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rgt2v" event={"ID":"c1cd1870-e582-4296-956e-18ea9d637877","Type":"ContainerStarted","Data":"e6a5d6958216749e93f97a17484279d23ac6638c89841de9086dc2d6ef34fedd"} Nov 29 08:34:47 crc kubenswrapper[4795]: I1129 08:34:47.064384 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rgt2v" podStartSLOduration=3.270628773 podStartE2EDuration="9.06436804s" podCreationTimestamp="2025-11-29 08:34:38 +0000 UTC" firstStartedPulling="2025-11-29 08:34:39.915152841 +0000 UTC m=+3325.890728631" lastFinishedPulling="2025-11-29 08:34:45.708892108 +0000 UTC m=+3331.684467898" observedRunningTime="2025-11-29 08:34:47.059134981 +0000 UTC m=+3333.034710791" watchObservedRunningTime="2025-11-29 08:34:47.06436804 +0000 UTC m=+3333.039943830" Nov 29 08:34:49 crc kubenswrapper[4795]: I1129 08:34:49.010797 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rgt2v" Nov 29 08:34:49 crc kubenswrapper[4795]: I1129 08:34:49.011373 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rgt2v" Nov 29 08:34:49 crc kubenswrapper[4795]: I1129 08:34:49.064029 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tqdv6" event={"ID":"3cb490cf-3d04-4f08-93c3-51834457ccbc","Type":"ContainerStarted","Data":"372385d62449702ea495842f8942dc6feb6fe170ad08b26c63fb2d94bbb631ee"} Nov 29 08:34:50 crc kubenswrapper[4795]: I1129 08:34:50.075221 4795 
generic.go:334] "Generic (PLEG): container finished" podID="3cb490cf-3d04-4f08-93c3-51834457ccbc" containerID="372385d62449702ea495842f8942dc6feb6fe170ad08b26c63fb2d94bbb631ee" exitCode=0 Nov 29 08:34:50 crc kubenswrapper[4795]: I1129 08:34:50.075331 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tqdv6" event={"ID":"3cb490cf-3d04-4f08-93c3-51834457ccbc","Type":"ContainerDied","Data":"372385d62449702ea495842f8942dc6feb6fe170ad08b26c63fb2d94bbb631ee"} Nov 29 08:34:50 crc kubenswrapper[4795]: I1129 08:34:50.102196 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rgt2v" podUID="c1cd1870-e582-4296-956e-18ea9d637877" containerName="registry-server" probeResult="failure" output=< Nov 29 08:34:50 crc kubenswrapper[4795]: timeout: failed to connect service ":50051" within 1s Nov 29 08:34:50 crc kubenswrapper[4795]: > Nov 29 08:34:51 crc kubenswrapper[4795]: I1129 08:34:51.088410 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tqdv6" event={"ID":"3cb490cf-3d04-4f08-93c3-51834457ccbc","Type":"ContainerStarted","Data":"2af868f86485e6c6fe2aef2e65448bf132fc8c5164fc4ccd5431c4d4b70210fd"} Nov 29 08:34:51 crc kubenswrapper[4795]: I1129 08:34:51.115362 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tqdv6" podStartSLOduration=3.563256464 podStartE2EDuration="7.115342765s" podCreationTimestamp="2025-11-29 08:34:44 +0000 UTC" firstStartedPulling="2025-11-29 08:34:47.019273691 +0000 UTC m=+3332.994849481" lastFinishedPulling="2025-11-29 08:34:50.571359992 +0000 UTC m=+3336.546935782" observedRunningTime="2025-11-29 08:34:51.112265788 +0000 UTC m=+3337.087841598" watchObservedRunningTime="2025-11-29 08:34:51.115342765 +0000 UTC m=+3337.090918545" Nov 29 08:34:54 crc kubenswrapper[4795]: I1129 08:34:54.318080 4795 scope.go:117] "RemoveContainer" 
containerID="08469485735dcf4c196646b777c5b1c73e97106d4c854403de03ac1f9b64edb4" Nov 29 08:34:54 crc kubenswrapper[4795]: E1129 08:34:54.319069 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:34:55 crc kubenswrapper[4795]: I1129 08:34:55.085666 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tqdv6" Nov 29 08:34:55 crc kubenswrapper[4795]: I1129 08:34:55.085816 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tqdv6" Nov 29 08:34:55 crc kubenswrapper[4795]: I1129 08:34:55.165218 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tqdv6" Nov 29 08:34:56 crc kubenswrapper[4795]: I1129 08:34:56.184923 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tqdv6" Nov 29 08:34:56 crc kubenswrapper[4795]: I1129 08:34:56.246256 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tqdv6"] Nov 29 08:34:58 crc kubenswrapper[4795]: I1129 08:34:58.156783 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tqdv6" podUID="3cb490cf-3d04-4f08-93c3-51834457ccbc" containerName="registry-server" containerID="cri-o://2af868f86485e6c6fe2aef2e65448bf132fc8c5164fc4ccd5431c4d4b70210fd" gracePeriod=2 Nov 29 08:34:58 crc kubenswrapper[4795]: I1129 08:34:58.725096 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tqdv6" Nov 29 08:34:58 crc kubenswrapper[4795]: I1129 08:34:58.830654 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cb490cf-3d04-4f08-93c3-51834457ccbc-catalog-content\") pod \"3cb490cf-3d04-4f08-93c3-51834457ccbc\" (UID: \"3cb490cf-3d04-4f08-93c3-51834457ccbc\") " Nov 29 08:34:58 crc kubenswrapper[4795]: I1129 08:34:58.830755 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cb490cf-3d04-4f08-93c3-51834457ccbc-utilities\") pod \"3cb490cf-3d04-4f08-93c3-51834457ccbc\" (UID: \"3cb490cf-3d04-4f08-93c3-51834457ccbc\") " Nov 29 08:34:58 crc kubenswrapper[4795]: I1129 08:34:58.831038 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjxhn\" (UniqueName: \"kubernetes.io/projected/3cb490cf-3d04-4f08-93c3-51834457ccbc-kube-api-access-tjxhn\") pod \"3cb490cf-3d04-4f08-93c3-51834457ccbc\" (UID: \"3cb490cf-3d04-4f08-93c3-51834457ccbc\") " Nov 29 08:34:58 crc kubenswrapper[4795]: I1129 08:34:58.831710 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cb490cf-3d04-4f08-93c3-51834457ccbc-utilities" (OuterVolumeSpecName: "utilities") pod "3cb490cf-3d04-4f08-93c3-51834457ccbc" (UID: "3cb490cf-3d04-4f08-93c3-51834457ccbc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:34:58 crc kubenswrapper[4795]: I1129 08:34:58.838732 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb490cf-3d04-4f08-93c3-51834457ccbc-kube-api-access-tjxhn" (OuterVolumeSpecName: "kube-api-access-tjxhn") pod "3cb490cf-3d04-4f08-93c3-51834457ccbc" (UID: "3cb490cf-3d04-4f08-93c3-51834457ccbc"). InnerVolumeSpecName "kube-api-access-tjxhn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:34:58 crc kubenswrapper[4795]: I1129 08:34:58.894070 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cb490cf-3d04-4f08-93c3-51834457ccbc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3cb490cf-3d04-4f08-93c3-51834457ccbc" (UID: "3cb490cf-3d04-4f08-93c3-51834457ccbc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:34:58 crc kubenswrapper[4795]: I1129 08:34:58.935238 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjxhn\" (UniqueName: \"kubernetes.io/projected/3cb490cf-3d04-4f08-93c3-51834457ccbc-kube-api-access-tjxhn\") on node \"crc\" DevicePath \"\"" Nov 29 08:34:58 crc kubenswrapper[4795]: I1129 08:34:58.935278 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cb490cf-3d04-4f08-93c3-51834457ccbc-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:34:58 crc kubenswrapper[4795]: I1129 08:34:58.935290 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cb490cf-3d04-4f08-93c3-51834457ccbc-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:34:59 crc kubenswrapper[4795]: I1129 08:34:59.062302 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rgt2v" Nov 29 08:34:59 crc kubenswrapper[4795]: I1129 08:34:59.120271 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rgt2v" Nov 29 08:34:59 crc kubenswrapper[4795]: I1129 08:34:59.198035 4795 generic.go:334] "Generic (PLEG): container finished" podID="3cb490cf-3d04-4f08-93c3-51834457ccbc" containerID="2af868f86485e6c6fe2aef2e65448bf132fc8c5164fc4ccd5431c4d4b70210fd" exitCode=0 Nov 29 08:34:59 crc kubenswrapper[4795]: I1129 
08:34:59.198149 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tqdv6" Nov 29 08:34:59 crc kubenswrapper[4795]: I1129 08:34:59.198225 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tqdv6" event={"ID":"3cb490cf-3d04-4f08-93c3-51834457ccbc","Type":"ContainerDied","Data":"2af868f86485e6c6fe2aef2e65448bf132fc8c5164fc4ccd5431c4d4b70210fd"} Nov 29 08:34:59 crc kubenswrapper[4795]: I1129 08:34:59.198302 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tqdv6" event={"ID":"3cb490cf-3d04-4f08-93c3-51834457ccbc","Type":"ContainerDied","Data":"0317ae109c56d455bdb9c69dfd720dd8a3e4883d483d0cbec959bb6db1215eba"} Nov 29 08:34:59 crc kubenswrapper[4795]: I1129 08:34:59.198319 4795 scope.go:117] "RemoveContainer" containerID="2af868f86485e6c6fe2aef2e65448bf132fc8c5164fc4ccd5431c4d4b70210fd" Nov 29 08:34:59 crc kubenswrapper[4795]: I1129 08:34:59.240247 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tqdv6"] Nov 29 08:34:59 crc kubenswrapper[4795]: I1129 08:34:59.240657 4795 scope.go:117] "RemoveContainer" containerID="372385d62449702ea495842f8942dc6feb6fe170ad08b26c63fb2d94bbb631ee" Nov 29 08:34:59 crc kubenswrapper[4795]: I1129 08:34:59.253582 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tqdv6"] Nov 29 08:34:59 crc kubenswrapper[4795]: I1129 08:34:59.272267 4795 scope.go:117] "RemoveContainer" containerID="12861ae6731d1ec84b86abf34e2abdd1ea196abf1ad285e035cc8e75dbc42c73" Nov 29 08:34:59 crc kubenswrapper[4795]: I1129 08:34:59.330871 4795 scope.go:117] "RemoveContainer" containerID="2af868f86485e6c6fe2aef2e65448bf132fc8c5164fc4ccd5431c4d4b70210fd" Nov 29 08:34:59 crc kubenswrapper[4795]: E1129 08:34:59.331385 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"2af868f86485e6c6fe2aef2e65448bf132fc8c5164fc4ccd5431c4d4b70210fd\": container with ID starting with 2af868f86485e6c6fe2aef2e65448bf132fc8c5164fc4ccd5431c4d4b70210fd not found: ID does not exist" containerID="2af868f86485e6c6fe2aef2e65448bf132fc8c5164fc4ccd5431c4d4b70210fd" Nov 29 08:34:59 crc kubenswrapper[4795]: I1129 08:34:59.331442 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2af868f86485e6c6fe2aef2e65448bf132fc8c5164fc4ccd5431c4d4b70210fd"} err="failed to get container status \"2af868f86485e6c6fe2aef2e65448bf132fc8c5164fc4ccd5431c4d4b70210fd\": rpc error: code = NotFound desc = could not find container \"2af868f86485e6c6fe2aef2e65448bf132fc8c5164fc4ccd5431c4d4b70210fd\": container with ID starting with 2af868f86485e6c6fe2aef2e65448bf132fc8c5164fc4ccd5431c4d4b70210fd not found: ID does not exist" Nov 29 08:34:59 crc kubenswrapper[4795]: I1129 08:34:59.331471 4795 scope.go:117] "RemoveContainer" containerID="372385d62449702ea495842f8942dc6feb6fe170ad08b26c63fb2d94bbb631ee" Nov 29 08:34:59 crc kubenswrapper[4795]: E1129 08:34:59.331929 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"372385d62449702ea495842f8942dc6feb6fe170ad08b26c63fb2d94bbb631ee\": container with ID starting with 372385d62449702ea495842f8942dc6feb6fe170ad08b26c63fb2d94bbb631ee not found: ID does not exist" containerID="372385d62449702ea495842f8942dc6feb6fe170ad08b26c63fb2d94bbb631ee" Nov 29 08:34:59 crc kubenswrapper[4795]: I1129 08:34:59.331972 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"372385d62449702ea495842f8942dc6feb6fe170ad08b26c63fb2d94bbb631ee"} err="failed to get container status \"372385d62449702ea495842f8942dc6feb6fe170ad08b26c63fb2d94bbb631ee\": rpc error: code = NotFound desc = could not find container 
\"372385d62449702ea495842f8942dc6feb6fe170ad08b26c63fb2d94bbb631ee\": container with ID starting with 372385d62449702ea495842f8942dc6feb6fe170ad08b26c63fb2d94bbb631ee not found: ID does not exist" Nov 29 08:34:59 crc kubenswrapper[4795]: I1129 08:34:59.331999 4795 scope.go:117] "RemoveContainer" containerID="12861ae6731d1ec84b86abf34e2abdd1ea196abf1ad285e035cc8e75dbc42c73" Nov 29 08:34:59 crc kubenswrapper[4795]: E1129 08:34:59.332296 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12861ae6731d1ec84b86abf34e2abdd1ea196abf1ad285e035cc8e75dbc42c73\": container with ID starting with 12861ae6731d1ec84b86abf34e2abdd1ea196abf1ad285e035cc8e75dbc42c73 not found: ID does not exist" containerID="12861ae6731d1ec84b86abf34e2abdd1ea196abf1ad285e035cc8e75dbc42c73" Nov 29 08:34:59 crc kubenswrapper[4795]: I1129 08:34:59.332340 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12861ae6731d1ec84b86abf34e2abdd1ea196abf1ad285e035cc8e75dbc42c73"} err="failed to get container status \"12861ae6731d1ec84b86abf34e2abdd1ea196abf1ad285e035cc8e75dbc42c73\": rpc error: code = NotFound desc = could not find container \"12861ae6731d1ec84b86abf34e2abdd1ea196abf1ad285e035cc8e75dbc42c73\": container with ID starting with 12861ae6731d1ec84b86abf34e2abdd1ea196abf1ad285e035cc8e75dbc42c73 not found: ID does not exist" Nov 29 08:35:00 crc kubenswrapper[4795]: I1129 08:35:00.287481 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb490cf-3d04-4f08-93c3-51834457ccbc" path="/var/lib/kubelet/pods/3cb490cf-3d04-4f08-93c3-51834457ccbc/volumes" Nov 29 08:35:00 crc kubenswrapper[4795]: I1129 08:35:00.565645 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rgt2v"] Nov 29 08:35:00 crc kubenswrapper[4795]: I1129 08:35:00.566783 4795 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-rgt2v" podUID="c1cd1870-e582-4296-956e-18ea9d637877" containerName="registry-server" containerID="cri-o://e6a5d6958216749e93f97a17484279d23ac6638c89841de9086dc2d6ef34fedd" gracePeriod=2 Nov 29 08:35:01 crc kubenswrapper[4795]: I1129 08:35:01.076488 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rgt2v" Nov 29 08:35:01 crc kubenswrapper[4795]: I1129 08:35:01.204125 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1cd1870-e582-4296-956e-18ea9d637877-utilities\") pod \"c1cd1870-e582-4296-956e-18ea9d637877\" (UID: \"c1cd1870-e582-4296-956e-18ea9d637877\") " Nov 29 08:35:01 crc kubenswrapper[4795]: I1129 08:35:01.204233 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1cd1870-e582-4296-956e-18ea9d637877-catalog-content\") pod \"c1cd1870-e582-4296-956e-18ea9d637877\" (UID: \"c1cd1870-e582-4296-956e-18ea9d637877\") " Nov 29 08:35:01 crc kubenswrapper[4795]: I1129 08:35:01.204461 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvvd2\" (UniqueName: \"kubernetes.io/projected/c1cd1870-e582-4296-956e-18ea9d637877-kube-api-access-kvvd2\") pod \"c1cd1870-e582-4296-956e-18ea9d637877\" (UID: \"c1cd1870-e582-4296-956e-18ea9d637877\") " Nov 29 08:35:01 crc kubenswrapper[4795]: I1129 08:35:01.204873 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1cd1870-e582-4296-956e-18ea9d637877-utilities" (OuterVolumeSpecName: "utilities") pod "c1cd1870-e582-4296-956e-18ea9d637877" (UID: "c1cd1870-e582-4296-956e-18ea9d637877"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:35:01 crc kubenswrapper[4795]: I1129 08:35:01.205471 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1cd1870-e582-4296-956e-18ea9d637877-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:35:01 crc kubenswrapper[4795]: I1129 08:35:01.210542 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1cd1870-e582-4296-956e-18ea9d637877-kube-api-access-kvvd2" (OuterVolumeSpecName: "kube-api-access-kvvd2") pod "c1cd1870-e582-4296-956e-18ea9d637877" (UID: "c1cd1870-e582-4296-956e-18ea9d637877"). InnerVolumeSpecName "kube-api-access-kvvd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:35:01 crc kubenswrapper[4795]: I1129 08:35:01.223797 4795 generic.go:334] "Generic (PLEG): container finished" podID="c1cd1870-e582-4296-956e-18ea9d637877" containerID="e6a5d6958216749e93f97a17484279d23ac6638c89841de9086dc2d6ef34fedd" exitCode=0 Nov 29 08:35:01 crc kubenswrapper[4795]: I1129 08:35:01.223845 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rgt2v" event={"ID":"c1cd1870-e582-4296-956e-18ea9d637877","Type":"ContainerDied","Data":"e6a5d6958216749e93f97a17484279d23ac6638c89841de9086dc2d6ef34fedd"} Nov 29 08:35:01 crc kubenswrapper[4795]: I1129 08:35:01.223874 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rgt2v" event={"ID":"c1cd1870-e582-4296-956e-18ea9d637877","Type":"ContainerDied","Data":"f634fc2ac23e0e1d452c8f4ce68b0b400d6e1e1edab2c46b6ce5ff121eb388e5"} Nov 29 08:35:01 crc kubenswrapper[4795]: I1129 08:35:01.223895 4795 scope.go:117] "RemoveContainer" containerID="e6a5d6958216749e93f97a17484279d23ac6638c89841de9086dc2d6ef34fedd" Nov 29 08:35:01 crc kubenswrapper[4795]: I1129 08:35:01.224026 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rgt2v" Nov 29 08:35:01 crc kubenswrapper[4795]: I1129 08:35:01.291289 4795 scope.go:117] "RemoveContainer" containerID="23d7add3dd0ec2c06638892f8365faeb120f6ef97dc3f0ba5146c16204907846" Nov 29 08:35:01 crc kubenswrapper[4795]: I1129 08:35:01.307762 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvvd2\" (UniqueName: \"kubernetes.io/projected/c1cd1870-e582-4296-956e-18ea9d637877-kube-api-access-kvvd2\") on node \"crc\" DevicePath \"\"" Nov 29 08:35:01 crc kubenswrapper[4795]: I1129 08:35:01.314889 4795 scope.go:117] "RemoveContainer" containerID="57a50b70c1d3a236734ac9f37b507221348e2f238997d772592125debd76012b" Nov 29 08:35:01 crc kubenswrapper[4795]: I1129 08:35:01.321375 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1cd1870-e582-4296-956e-18ea9d637877-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c1cd1870-e582-4296-956e-18ea9d637877" (UID: "c1cd1870-e582-4296-956e-18ea9d637877"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:35:01 crc kubenswrapper[4795]: I1129 08:35:01.378539 4795 scope.go:117] "RemoveContainer" containerID="e6a5d6958216749e93f97a17484279d23ac6638c89841de9086dc2d6ef34fedd" Nov 29 08:35:01 crc kubenswrapper[4795]: E1129 08:35:01.379014 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6a5d6958216749e93f97a17484279d23ac6638c89841de9086dc2d6ef34fedd\": container with ID starting with e6a5d6958216749e93f97a17484279d23ac6638c89841de9086dc2d6ef34fedd not found: ID does not exist" containerID="e6a5d6958216749e93f97a17484279d23ac6638c89841de9086dc2d6ef34fedd" Nov 29 08:35:01 crc kubenswrapper[4795]: I1129 08:35:01.379072 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6a5d6958216749e93f97a17484279d23ac6638c89841de9086dc2d6ef34fedd"} err="failed to get container status \"e6a5d6958216749e93f97a17484279d23ac6638c89841de9086dc2d6ef34fedd\": rpc error: code = NotFound desc = could not find container \"e6a5d6958216749e93f97a17484279d23ac6638c89841de9086dc2d6ef34fedd\": container with ID starting with e6a5d6958216749e93f97a17484279d23ac6638c89841de9086dc2d6ef34fedd not found: ID does not exist" Nov 29 08:35:01 crc kubenswrapper[4795]: I1129 08:35:01.379106 4795 scope.go:117] "RemoveContainer" containerID="23d7add3dd0ec2c06638892f8365faeb120f6ef97dc3f0ba5146c16204907846" Nov 29 08:35:01 crc kubenswrapper[4795]: E1129 08:35:01.379434 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23d7add3dd0ec2c06638892f8365faeb120f6ef97dc3f0ba5146c16204907846\": container with ID starting with 23d7add3dd0ec2c06638892f8365faeb120f6ef97dc3f0ba5146c16204907846 not found: ID does not exist" containerID="23d7add3dd0ec2c06638892f8365faeb120f6ef97dc3f0ba5146c16204907846" Nov 29 08:35:01 crc kubenswrapper[4795]: I1129 08:35:01.379465 
4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23d7add3dd0ec2c06638892f8365faeb120f6ef97dc3f0ba5146c16204907846"} err="failed to get container status \"23d7add3dd0ec2c06638892f8365faeb120f6ef97dc3f0ba5146c16204907846\": rpc error: code = NotFound desc = could not find container \"23d7add3dd0ec2c06638892f8365faeb120f6ef97dc3f0ba5146c16204907846\": container with ID starting with 23d7add3dd0ec2c06638892f8365faeb120f6ef97dc3f0ba5146c16204907846 not found: ID does not exist" Nov 29 08:35:01 crc kubenswrapper[4795]: I1129 08:35:01.379478 4795 scope.go:117] "RemoveContainer" containerID="57a50b70c1d3a236734ac9f37b507221348e2f238997d772592125debd76012b" Nov 29 08:35:01 crc kubenswrapper[4795]: E1129 08:35:01.379872 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57a50b70c1d3a236734ac9f37b507221348e2f238997d772592125debd76012b\": container with ID starting with 57a50b70c1d3a236734ac9f37b507221348e2f238997d772592125debd76012b not found: ID does not exist" containerID="57a50b70c1d3a236734ac9f37b507221348e2f238997d772592125debd76012b" Nov 29 08:35:01 crc kubenswrapper[4795]: I1129 08:35:01.379912 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57a50b70c1d3a236734ac9f37b507221348e2f238997d772592125debd76012b"} err="failed to get container status \"57a50b70c1d3a236734ac9f37b507221348e2f238997d772592125debd76012b\": rpc error: code = NotFound desc = could not find container \"57a50b70c1d3a236734ac9f37b507221348e2f238997d772592125debd76012b\": container with ID starting with 57a50b70c1d3a236734ac9f37b507221348e2f238997d772592125debd76012b not found: ID does not exist" Nov 29 08:35:01 crc kubenswrapper[4795]: I1129 08:35:01.411373 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c1cd1870-e582-4296-956e-18ea9d637877-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:35:01 crc kubenswrapper[4795]: I1129 08:35:01.570651 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rgt2v"] Nov 29 08:35:01 crc kubenswrapper[4795]: I1129 08:35:01.583567 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rgt2v"] Nov 29 08:35:02 crc kubenswrapper[4795]: I1129 08:35:02.290053 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1cd1870-e582-4296-956e-18ea9d637877" path="/var/lib/kubelet/pods/c1cd1870-e582-4296-956e-18ea9d637877/volumes" Nov 29 08:35:06 crc kubenswrapper[4795]: I1129 08:35:06.276021 4795 scope.go:117] "RemoveContainer" containerID="08469485735dcf4c196646b777c5b1c73e97106d4c854403de03ac1f9b64edb4" Nov 29 08:35:06 crc kubenswrapper[4795]: E1129 08:35:06.276740 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:35:20 crc kubenswrapper[4795]: I1129 08:35:20.277124 4795 scope.go:117] "RemoveContainer" containerID="08469485735dcf4c196646b777c5b1c73e97106d4c854403de03ac1f9b64edb4" Nov 29 08:35:20 crc kubenswrapper[4795]: E1129 08:35:20.278844 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:35:31 crc kubenswrapper[4795]: I1129 08:35:31.275694 4795 scope.go:117] "RemoveContainer" containerID="08469485735dcf4c196646b777c5b1c73e97106d4c854403de03ac1f9b64edb4" Nov 29 08:35:31 crc kubenswrapper[4795]: E1129 08:35:31.276478 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:35:46 crc kubenswrapper[4795]: I1129 08:35:46.275762 4795 scope.go:117] "RemoveContainer" containerID="08469485735dcf4c196646b777c5b1c73e97106d4c854403de03ac1f9b64edb4" Nov 29 08:35:46 crc kubenswrapper[4795]: E1129 08:35:46.276926 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:35:50 crc kubenswrapper[4795]: I1129 08:35:50.740753 4795 generic.go:334] "Generic (PLEG): container finished" podID="de36795d-fe29-4964-bcc6-c63bf2eda290" containerID="53843b3f5805b9e12a612199f4648a71f4502998100751ae575b4efe4c4e7dfe" exitCode=0 Nov 29 08:35:50 crc kubenswrapper[4795]: I1129 08:35:50.740836 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-z67pl" 
event={"ID":"de36795d-fe29-4964-bcc6-c63bf2eda290","Type":"ContainerDied","Data":"53843b3f5805b9e12a612199f4648a71f4502998100751ae575b4efe4c4e7dfe"} Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.267319 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-z67pl" Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.347978 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/de36795d-fe29-4964-bcc6-c63bf2eda290-ceilometer-compute-config-data-0\") pod \"de36795d-fe29-4964-bcc6-c63bf2eda290\" (UID: \"de36795d-fe29-4964-bcc6-c63bf2eda290\") " Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.348630 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/de36795d-fe29-4964-bcc6-c63bf2eda290-ssh-key\") pod \"de36795d-fe29-4964-bcc6-c63bf2eda290\" (UID: \"de36795d-fe29-4964-bcc6-c63bf2eda290\") " Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.348748 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de36795d-fe29-4964-bcc6-c63bf2eda290-inventory\") pod \"de36795d-fe29-4964-bcc6-c63bf2eda290\" (UID: \"de36795d-fe29-4964-bcc6-c63bf2eda290\") " Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.348924 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/de36795d-fe29-4964-bcc6-c63bf2eda290-ceilometer-compute-config-data-2\") pod \"de36795d-fe29-4964-bcc6-c63bf2eda290\" (UID: \"de36795d-fe29-4964-bcc6-c63bf2eda290\") " Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.349051 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" 
(UniqueName: \"kubernetes.io/secret/de36795d-fe29-4964-bcc6-c63bf2eda290-ceilometer-compute-config-data-1\") pod \"de36795d-fe29-4964-bcc6-c63bf2eda290\" (UID: \"de36795d-fe29-4964-bcc6-c63bf2eda290\") " Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.349165 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c95zp\" (UniqueName: \"kubernetes.io/projected/de36795d-fe29-4964-bcc6-c63bf2eda290-kube-api-access-c95zp\") pod \"de36795d-fe29-4964-bcc6-c63bf2eda290\" (UID: \"de36795d-fe29-4964-bcc6-c63bf2eda290\") " Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.349314 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de36795d-fe29-4964-bcc6-c63bf2eda290-telemetry-combined-ca-bundle\") pod \"de36795d-fe29-4964-bcc6-c63bf2eda290\" (UID: \"de36795d-fe29-4964-bcc6-c63bf2eda290\") " Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.379641 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de36795d-fe29-4964-bcc6-c63bf2eda290-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "de36795d-fe29-4964-bcc6-c63bf2eda290" (UID: "de36795d-fe29-4964-bcc6-c63bf2eda290"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.379671 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de36795d-fe29-4964-bcc6-c63bf2eda290-kube-api-access-c95zp" (OuterVolumeSpecName: "kube-api-access-c95zp") pod "de36795d-fe29-4964-bcc6-c63bf2eda290" (UID: "de36795d-fe29-4964-bcc6-c63bf2eda290"). InnerVolumeSpecName "kube-api-access-c95zp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.391997 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de36795d-fe29-4964-bcc6-c63bf2eda290-inventory" (OuterVolumeSpecName: "inventory") pod "de36795d-fe29-4964-bcc6-c63bf2eda290" (UID: "de36795d-fe29-4964-bcc6-c63bf2eda290"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.392448 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de36795d-fe29-4964-bcc6-c63bf2eda290-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "de36795d-fe29-4964-bcc6-c63bf2eda290" (UID: "de36795d-fe29-4964-bcc6-c63bf2eda290"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.394056 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de36795d-fe29-4964-bcc6-c63bf2eda290-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "de36795d-fe29-4964-bcc6-c63bf2eda290" (UID: "de36795d-fe29-4964-bcc6-c63bf2eda290"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.403186 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de36795d-fe29-4964-bcc6-c63bf2eda290-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "de36795d-fe29-4964-bcc6-c63bf2eda290" (UID: "de36795d-fe29-4964-bcc6-c63bf2eda290"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.423356 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de36795d-fe29-4964-bcc6-c63bf2eda290-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "de36795d-fe29-4964-bcc6-c63bf2eda290" (UID: "de36795d-fe29-4964-bcc6-c63bf2eda290"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.465910 4795 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/de36795d-fe29-4964-bcc6-c63bf2eda290-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.465940 4795 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/de36795d-fe29-4964-bcc6-c63bf2eda290-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.465951 4795 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de36795d-fe29-4964-bcc6-c63bf2eda290-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.465960 4795 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/de36795d-fe29-4964-bcc6-c63bf2eda290-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.465972 4795 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/de36795d-fe29-4964-bcc6-c63bf2eda290-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.465982 4795 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-c95zp\" (UniqueName: \"kubernetes.io/projected/de36795d-fe29-4964-bcc6-c63bf2eda290-kube-api-access-c95zp\") on node \"crc\" DevicePath \"\"" Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.465993 4795 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de36795d-fe29-4964-bcc6-c63bf2eda290-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.763423 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-z67pl" event={"ID":"de36795d-fe29-4964-bcc6-c63bf2eda290","Type":"ContainerDied","Data":"b50e4f2e3cb503ba7c53955f0a7a4e5b38e256013c40fcc0bfbd525611c2dedf"} Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.763462 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-z67pl" Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.763488 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b50e4f2e3cb503ba7c53955f0a7a4e5b38e256013c40fcc0bfbd525611c2dedf" Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.880431 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk"] Nov 29 08:35:52 crc kubenswrapper[4795]: E1129 08:35:52.881010 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1cd1870-e582-4296-956e-18ea9d637877" containerName="registry-server" Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.881035 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1cd1870-e582-4296-956e-18ea9d637877" containerName="registry-server" Nov 29 08:35:52 crc kubenswrapper[4795]: E1129 08:35:52.881057 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cb490cf-3d04-4f08-93c3-51834457ccbc" 
containerName="extract-utilities" Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.881066 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cb490cf-3d04-4f08-93c3-51834457ccbc" containerName="extract-utilities" Nov 29 08:35:52 crc kubenswrapper[4795]: E1129 08:35:52.881104 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cb490cf-3d04-4f08-93c3-51834457ccbc" containerName="extract-content" Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.881112 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cb490cf-3d04-4f08-93c3-51834457ccbc" containerName="extract-content" Nov 29 08:35:52 crc kubenswrapper[4795]: E1129 08:35:52.881129 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de36795d-fe29-4964-bcc6-c63bf2eda290" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.881139 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="de36795d-fe29-4964-bcc6-c63bf2eda290" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 29 08:35:52 crc kubenswrapper[4795]: E1129 08:35:52.881156 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cb490cf-3d04-4f08-93c3-51834457ccbc" containerName="registry-server" Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.881165 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cb490cf-3d04-4f08-93c3-51834457ccbc" containerName="registry-server" Nov 29 08:35:52 crc kubenswrapper[4795]: E1129 08:35:52.881182 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1cd1870-e582-4296-956e-18ea9d637877" containerName="extract-utilities" Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.881190 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1cd1870-e582-4296-956e-18ea9d637877" containerName="extract-utilities" Nov 29 08:35:52 crc kubenswrapper[4795]: E1129 08:35:52.881215 4795 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="c1cd1870-e582-4296-956e-18ea9d637877" containerName="extract-content" Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.881222 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1cd1870-e582-4296-956e-18ea9d637877" containerName="extract-content" Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.881487 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="de36795d-fe29-4964-bcc6-c63bf2eda290" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.881517 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cb490cf-3d04-4f08-93c3-51834457ccbc" containerName="registry-server" Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.881556 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1cd1870-e582-4296-956e-18ea9d637877" containerName="registry-server" Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.882331 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk" Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.884470 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.884748 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.884988 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.885362 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nfhsg" Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.885742 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.901506 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk"] Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.979271 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/46efc96a-a270-4709-a9a1-cf8d60484215-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk\" (UID: \"46efc96a-a270-4709-a9a1-cf8d60484215\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk" Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.979328 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfgsf\" (UniqueName: \"kubernetes.io/projected/46efc96a-a270-4709-a9a1-cf8d60484215-kube-api-access-vfgsf\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk\" (UID: \"46efc96a-a270-4709-a9a1-cf8d60484215\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk" Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.979425 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/46efc96a-a270-4709-a9a1-cf8d60484215-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk\" (UID: \"46efc96a-a270-4709-a9a1-cf8d60484215\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk" Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.979463 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46efc96a-a270-4709-a9a1-cf8d60484215-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk\" (UID: \"46efc96a-a270-4709-a9a1-cf8d60484215\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk" Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.979491 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46efc96a-a270-4709-a9a1-cf8d60484215-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk\" (UID: \"46efc96a-a270-4709-a9a1-cf8d60484215\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk" Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.979650 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/46efc96a-a270-4709-a9a1-cf8d60484215-ssh-key\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk\" (UID: \"46efc96a-a270-4709-a9a1-cf8d60484215\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk" Nov 29 08:35:52 crc kubenswrapper[4795]: I1129 08:35:52.979686 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/46efc96a-a270-4709-a9a1-cf8d60484215-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk\" (UID: \"46efc96a-a270-4709-a9a1-cf8d60484215\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk" Nov 29 08:35:53 crc kubenswrapper[4795]: I1129 08:35:53.083119 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/46efc96a-a270-4709-a9a1-cf8d60484215-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk\" (UID: \"46efc96a-a270-4709-a9a1-cf8d60484215\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk" Nov 29 08:35:53 crc kubenswrapper[4795]: I1129 08:35:53.083215 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfgsf\" (UniqueName: \"kubernetes.io/projected/46efc96a-a270-4709-a9a1-cf8d60484215-kube-api-access-vfgsf\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk\" (UID: \"46efc96a-a270-4709-a9a1-cf8d60484215\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk" Nov 29 08:35:53 crc kubenswrapper[4795]: I1129 08:35:53.083333 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/46efc96a-a270-4709-a9a1-cf8d60484215-ceilometer-ipmi-config-data-0\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk\" (UID: \"46efc96a-a270-4709-a9a1-cf8d60484215\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk" Nov 29 08:35:53 crc kubenswrapper[4795]: I1129 08:35:53.083391 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46efc96a-a270-4709-a9a1-cf8d60484215-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk\" (UID: \"46efc96a-a270-4709-a9a1-cf8d60484215\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk" Nov 29 08:35:53 crc kubenswrapper[4795]: I1129 08:35:53.083438 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46efc96a-a270-4709-a9a1-cf8d60484215-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk\" (UID: \"46efc96a-a270-4709-a9a1-cf8d60484215\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk" Nov 29 08:35:53 crc kubenswrapper[4795]: I1129 08:35:53.083538 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/46efc96a-a270-4709-a9a1-cf8d60484215-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk\" (UID: \"46efc96a-a270-4709-a9a1-cf8d60484215\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk" Nov 29 08:35:53 crc kubenswrapper[4795]: I1129 08:35:53.083616 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/46efc96a-a270-4709-a9a1-cf8d60484215-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk\" (UID: 
\"46efc96a-a270-4709-a9a1-cf8d60484215\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk" Nov 29 08:35:53 crc kubenswrapper[4795]: I1129 08:35:53.087674 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46efc96a-a270-4709-a9a1-cf8d60484215-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk\" (UID: \"46efc96a-a270-4709-a9a1-cf8d60484215\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk" Nov 29 08:35:53 crc kubenswrapper[4795]: I1129 08:35:53.087894 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/46efc96a-a270-4709-a9a1-cf8d60484215-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk\" (UID: \"46efc96a-a270-4709-a9a1-cf8d60484215\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk" Nov 29 08:35:53 crc kubenswrapper[4795]: I1129 08:35:53.087907 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/46efc96a-a270-4709-a9a1-cf8d60484215-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk\" (UID: \"46efc96a-a270-4709-a9a1-cf8d60484215\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk" Nov 29 08:35:53 crc kubenswrapper[4795]: I1129 08:35:53.088275 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46efc96a-a270-4709-a9a1-cf8d60484215-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk\" (UID: \"46efc96a-a270-4709-a9a1-cf8d60484215\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk" Nov 29 08:35:53 
crc kubenswrapper[4795]: I1129 08:35:53.088496 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/46efc96a-a270-4709-a9a1-cf8d60484215-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk\" (UID: \"46efc96a-a270-4709-a9a1-cf8d60484215\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk" Nov 29 08:35:53 crc kubenswrapper[4795]: I1129 08:35:53.090395 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/46efc96a-a270-4709-a9a1-cf8d60484215-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk\" (UID: \"46efc96a-a270-4709-a9a1-cf8d60484215\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk" Nov 29 08:35:53 crc kubenswrapper[4795]: I1129 08:35:53.109171 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfgsf\" (UniqueName: \"kubernetes.io/projected/46efc96a-a270-4709-a9a1-cf8d60484215-kube-api-access-vfgsf\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk\" (UID: \"46efc96a-a270-4709-a9a1-cf8d60484215\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk" Nov 29 08:35:53 crc kubenswrapper[4795]: I1129 08:35:53.201881 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk" Nov 29 08:35:53 crc kubenswrapper[4795]: I1129 08:35:53.735311 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk"] Nov 29 08:35:53 crc kubenswrapper[4795]: I1129 08:35:53.740009 4795 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 08:35:53 crc kubenswrapper[4795]: I1129 08:35:53.777196 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk" event={"ID":"46efc96a-a270-4709-a9a1-cf8d60484215","Type":"ContainerStarted","Data":"997376ca80cca7de4bee1d67ca3fd50dc8f3478237b9f359f9f29e4b1842cafc"} Nov 29 08:35:54 crc kubenswrapper[4795]: I1129 08:35:54.806408 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk" event={"ID":"46efc96a-a270-4709-a9a1-cf8d60484215","Type":"ContainerStarted","Data":"30c468b08ccc525e6cd0e4cfdeb99d62e646f51cda97d1185e2c60bb516adae2"} Nov 29 08:35:54 crc kubenswrapper[4795]: I1129 08:35:54.841922 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk" podStartSLOduration=2.290295583 podStartE2EDuration="2.841901623s" podCreationTimestamp="2025-11-29 08:35:52 +0000 UTC" firstStartedPulling="2025-11-29 08:35:53.739715573 +0000 UTC m=+3399.715291363" lastFinishedPulling="2025-11-29 08:35:54.291321613 +0000 UTC m=+3400.266897403" observedRunningTime="2025-11-29 08:35:54.834792521 +0000 UTC m=+3400.810368311" watchObservedRunningTime="2025-11-29 08:35:54.841901623 +0000 UTC m=+3400.817477413" Nov 29 08:35:57 crc kubenswrapper[4795]: I1129 08:35:57.276149 4795 scope.go:117] "RemoveContainer" 
containerID="08469485735dcf4c196646b777c5b1c73e97106d4c854403de03ac1f9b64edb4" Nov 29 08:35:57 crc kubenswrapper[4795]: E1129 08:35:57.277057 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:36:10 crc kubenswrapper[4795]: I1129 08:36:10.275960 4795 scope.go:117] "RemoveContainer" containerID="08469485735dcf4c196646b777c5b1c73e97106d4c854403de03ac1f9b64edb4" Nov 29 08:36:10 crc kubenswrapper[4795]: E1129 08:36:10.276891 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:36:18 crc kubenswrapper[4795]: I1129 08:36:18.920316 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4j86c"] Nov 29 08:36:18 crc kubenswrapper[4795]: I1129 08:36:18.924990 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4j86c" Nov 29 08:36:18 crc kubenswrapper[4795]: I1129 08:36:18.934822 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4j86c"] Nov 29 08:36:19 crc kubenswrapper[4795]: I1129 08:36:19.004433 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab16018b-23c7-49f7-8c5e-a34cfb207773-utilities\") pod \"certified-operators-4j86c\" (UID: \"ab16018b-23c7-49f7-8c5e-a34cfb207773\") " pod="openshift-marketplace/certified-operators-4j86c" Nov 29 08:36:19 crc kubenswrapper[4795]: I1129 08:36:19.004576 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab16018b-23c7-49f7-8c5e-a34cfb207773-catalog-content\") pod \"certified-operators-4j86c\" (UID: \"ab16018b-23c7-49f7-8c5e-a34cfb207773\") " pod="openshift-marketplace/certified-operators-4j86c" Nov 29 08:36:19 crc kubenswrapper[4795]: I1129 08:36:19.004814 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwvlj\" (UniqueName: \"kubernetes.io/projected/ab16018b-23c7-49f7-8c5e-a34cfb207773-kube-api-access-lwvlj\") pod \"certified-operators-4j86c\" (UID: \"ab16018b-23c7-49f7-8c5e-a34cfb207773\") " pod="openshift-marketplace/certified-operators-4j86c" Nov 29 08:36:19 crc kubenswrapper[4795]: I1129 08:36:19.107409 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab16018b-23c7-49f7-8c5e-a34cfb207773-utilities\") pod \"certified-operators-4j86c\" (UID: \"ab16018b-23c7-49f7-8c5e-a34cfb207773\") " pod="openshift-marketplace/certified-operators-4j86c" Nov 29 08:36:19 crc kubenswrapper[4795]: I1129 08:36:19.107550 4795 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab16018b-23c7-49f7-8c5e-a34cfb207773-catalog-content\") pod \"certified-operators-4j86c\" (UID: \"ab16018b-23c7-49f7-8c5e-a34cfb207773\") " pod="openshift-marketplace/certified-operators-4j86c" Nov 29 08:36:19 crc kubenswrapper[4795]: I1129 08:36:19.107713 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwvlj\" (UniqueName: \"kubernetes.io/projected/ab16018b-23c7-49f7-8c5e-a34cfb207773-kube-api-access-lwvlj\") pod \"certified-operators-4j86c\" (UID: \"ab16018b-23c7-49f7-8c5e-a34cfb207773\") " pod="openshift-marketplace/certified-operators-4j86c" Nov 29 08:36:19 crc kubenswrapper[4795]: I1129 08:36:19.107935 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab16018b-23c7-49f7-8c5e-a34cfb207773-utilities\") pod \"certified-operators-4j86c\" (UID: \"ab16018b-23c7-49f7-8c5e-a34cfb207773\") " pod="openshift-marketplace/certified-operators-4j86c" Nov 29 08:36:19 crc kubenswrapper[4795]: I1129 08:36:19.108130 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab16018b-23c7-49f7-8c5e-a34cfb207773-catalog-content\") pod \"certified-operators-4j86c\" (UID: \"ab16018b-23c7-49f7-8c5e-a34cfb207773\") " pod="openshift-marketplace/certified-operators-4j86c" Nov 29 08:36:19 crc kubenswrapper[4795]: I1129 08:36:19.139640 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwvlj\" (UniqueName: \"kubernetes.io/projected/ab16018b-23c7-49f7-8c5e-a34cfb207773-kube-api-access-lwvlj\") pod \"certified-operators-4j86c\" (UID: \"ab16018b-23c7-49f7-8c5e-a34cfb207773\") " pod="openshift-marketplace/certified-operators-4j86c" Nov 29 08:36:19 crc kubenswrapper[4795]: I1129 08:36:19.257904 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4j86c" Nov 29 08:36:19 crc kubenswrapper[4795]: I1129 08:36:19.831497 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4j86c"] Nov 29 08:36:20 crc kubenswrapper[4795]: I1129 08:36:20.085873 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4j86c" event={"ID":"ab16018b-23c7-49f7-8c5e-a34cfb207773","Type":"ContainerStarted","Data":"ba4391b890aa43a5d4f3a988e7144a3fa8de4911d5e1cd05fdef572420feeb01"} Nov 29 08:36:21 crc kubenswrapper[4795]: I1129 08:36:21.097821 4795 generic.go:334] "Generic (PLEG): container finished" podID="ab16018b-23c7-49f7-8c5e-a34cfb207773" containerID="7527d1215900e50364a0e9e79fb89f96772702ca7baf921130a94a219c85983f" exitCode=0 Nov 29 08:36:21 crc kubenswrapper[4795]: I1129 08:36:21.097861 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4j86c" event={"ID":"ab16018b-23c7-49f7-8c5e-a34cfb207773","Type":"ContainerDied","Data":"7527d1215900e50364a0e9e79fb89f96772702ca7baf921130a94a219c85983f"} Nov 29 08:36:21 crc kubenswrapper[4795]: I1129 08:36:21.276234 4795 scope.go:117] "RemoveContainer" containerID="08469485735dcf4c196646b777c5b1c73e97106d4c854403de03ac1f9b64edb4" Nov 29 08:36:21 crc kubenswrapper[4795]: E1129 08:36:21.276614 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:36:23 crc kubenswrapper[4795]: I1129 08:36:23.138347 4795 generic.go:334] "Generic (PLEG): container finished" podID="ab16018b-23c7-49f7-8c5e-a34cfb207773" 
containerID="40126fcaf931945be0cb7b8ebba368434cc4adaab612f34f0778416b77f0ba5a" exitCode=0 Nov 29 08:36:23 crc kubenswrapper[4795]: I1129 08:36:23.138455 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4j86c" event={"ID":"ab16018b-23c7-49f7-8c5e-a34cfb207773","Type":"ContainerDied","Data":"40126fcaf931945be0cb7b8ebba368434cc4adaab612f34f0778416b77f0ba5a"} Nov 29 08:36:25 crc kubenswrapper[4795]: I1129 08:36:25.171482 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4j86c" event={"ID":"ab16018b-23c7-49f7-8c5e-a34cfb207773","Type":"ContainerStarted","Data":"51063733f3dc02272449b6cd9708400d3b8326164fac6bea28b8cb25bb82dc8a"} Nov 29 08:36:25 crc kubenswrapper[4795]: I1129 08:36:25.197405 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4j86c" podStartSLOduration=4.208286242 podStartE2EDuration="7.197381902s" podCreationTimestamp="2025-11-29 08:36:18 +0000 UTC" firstStartedPulling="2025-11-29 08:36:21.100123939 +0000 UTC m=+3427.075699729" lastFinishedPulling="2025-11-29 08:36:24.089219599 +0000 UTC m=+3430.064795389" observedRunningTime="2025-11-29 08:36:25.185822555 +0000 UTC m=+3431.161398355" watchObservedRunningTime="2025-11-29 08:36:25.197381902 +0000 UTC m=+3431.172957692" Nov 29 08:36:29 crc kubenswrapper[4795]: I1129 08:36:29.258515 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4j86c" Nov 29 08:36:29 crc kubenswrapper[4795]: I1129 08:36:29.259068 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4j86c" Nov 29 08:36:29 crc kubenswrapper[4795]: I1129 08:36:29.316481 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4j86c" Nov 29 08:36:30 crc kubenswrapper[4795]: I1129 
08:36:30.304765 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4j86c" Nov 29 08:36:30 crc kubenswrapper[4795]: I1129 08:36:30.376996 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4j86c"] Nov 29 08:36:32 crc kubenswrapper[4795]: I1129 08:36:32.246314 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4j86c" podUID="ab16018b-23c7-49f7-8c5e-a34cfb207773" containerName="registry-server" containerID="cri-o://51063733f3dc02272449b6cd9708400d3b8326164fac6bea28b8cb25bb82dc8a" gracePeriod=2 Nov 29 08:36:33 crc kubenswrapper[4795]: I1129 08:36:33.258026 4795 generic.go:334] "Generic (PLEG): container finished" podID="ab16018b-23c7-49f7-8c5e-a34cfb207773" containerID="51063733f3dc02272449b6cd9708400d3b8326164fac6bea28b8cb25bb82dc8a" exitCode=0 Nov 29 08:36:33 crc kubenswrapper[4795]: I1129 08:36:33.258103 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4j86c" event={"ID":"ab16018b-23c7-49f7-8c5e-a34cfb207773","Type":"ContainerDied","Data":"51063733f3dc02272449b6cd9708400d3b8326164fac6bea28b8cb25bb82dc8a"} Nov 29 08:36:33 crc kubenswrapper[4795]: I1129 08:36:33.258358 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4j86c" event={"ID":"ab16018b-23c7-49f7-8c5e-a34cfb207773","Type":"ContainerDied","Data":"ba4391b890aa43a5d4f3a988e7144a3fa8de4911d5e1cd05fdef572420feeb01"} Nov 29 08:36:33 crc kubenswrapper[4795]: I1129 08:36:33.258375 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba4391b890aa43a5d4f3a988e7144a3fa8de4911d5e1cd05fdef572420feeb01" Nov 29 08:36:33 crc kubenswrapper[4795]: I1129 08:36:33.292299 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4j86c" Nov 29 08:36:33 crc kubenswrapper[4795]: I1129 08:36:33.413346 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab16018b-23c7-49f7-8c5e-a34cfb207773-catalog-content\") pod \"ab16018b-23c7-49f7-8c5e-a34cfb207773\" (UID: \"ab16018b-23c7-49f7-8c5e-a34cfb207773\") " Nov 29 08:36:33 crc kubenswrapper[4795]: I1129 08:36:33.413533 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab16018b-23c7-49f7-8c5e-a34cfb207773-utilities\") pod \"ab16018b-23c7-49f7-8c5e-a34cfb207773\" (UID: \"ab16018b-23c7-49f7-8c5e-a34cfb207773\") " Nov 29 08:36:33 crc kubenswrapper[4795]: I1129 08:36:33.413617 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwvlj\" (UniqueName: \"kubernetes.io/projected/ab16018b-23c7-49f7-8c5e-a34cfb207773-kube-api-access-lwvlj\") pod \"ab16018b-23c7-49f7-8c5e-a34cfb207773\" (UID: \"ab16018b-23c7-49f7-8c5e-a34cfb207773\") " Nov 29 08:36:33 crc kubenswrapper[4795]: I1129 08:36:33.414459 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab16018b-23c7-49f7-8c5e-a34cfb207773-utilities" (OuterVolumeSpecName: "utilities") pod "ab16018b-23c7-49f7-8c5e-a34cfb207773" (UID: "ab16018b-23c7-49f7-8c5e-a34cfb207773"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:36:33 crc kubenswrapper[4795]: I1129 08:36:33.415425 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab16018b-23c7-49f7-8c5e-a34cfb207773-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:36:33 crc kubenswrapper[4795]: I1129 08:36:33.419828 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab16018b-23c7-49f7-8c5e-a34cfb207773-kube-api-access-lwvlj" (OuterVolumeSpecName: "kube-api-access-lwvlj") pod "ab16018b-23c7-49f7-8c5e-a34cfb207773" (UID: "ab16018b-23c7-49f7-8c5e-a34cfb207773"). InnerVolumeSpecName "kube-api-access-lwvlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:36:33 crc kubenswrapper[4795]: I1129 08:36:33.469487 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab16018b-23c7-49f7-8c5e-a34cfb207773-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab16018b-23c7-49f7-8c5e-a34cfb207773" (UID: "ab16018b-23c7-49f7-8c5e-a34cfb207773"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:36:33 crc kubenswrapper[4795]: I1129 08:36:33.517256 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab16018b-23c7-49f7-8c5e-a34cfb207773-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:36:33 crc kubenswrapper[4795]: I1129 08:36:33.517288 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwvlj\" (UniqueName: \"kubernetes.io/projected/ab16018b-23c7-49f7-8c5e-a34cfb207773-kube-api-access-lwvlj\") on node \"crc\" DevicePath \"\"" Nov 29 08:36:34 crc kubenswrapper[4795]: I1129 08:36:34.318709 4795 scope.go:117] "RemoveContainer" containerID="08469485735dcf4c196646b777c5b1c73e97106d4c854403de03ac1f9b64edb4" Nov 29 08:36:34 crc kubenswrapper[4795]: E1129 08:36:34.319014 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:36:34 crc kubenswrapper[4795]: I1129 08:36:34.327543 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4j86c" Nov 29 08:36:34 crc kubenswrapper[4795]: I1129 08:36:34.368034 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4j86c"] Nov 29 08:36:34 crc kubenswrapper[4795]: I1129 08:36:34.381227 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4j86c"] Nov 29 08:36:36 crc kubenswrapper[4795]: I1129 08:36:36.300100 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab16018b-23c7-49f7-8c5e-a34cfb207773" path="/var/lib/kubelet/pods/ab16018b-23c7-49f7-8c5e-a34cfb207773/volumes" Nov 29 08:36:50 crc kubenswrapper[4795]: I1129 08:36:50.276497 4795 scope.go:117] "RemoveContainer" containerID="08469485735dcf4c196646b777c5b1c73e97106d4c854403de03ac1f9b64edb4" Nov 29 08:36:50 crc kubenswrapper[4795]: E1129 08:36:50.277348 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:37:02 crc kubenswrapper[4795]: I1129 08:37:02.277568 4795 scope.go:117] "RemoveContainer" containerID="08469485735dcf4c196646b777c5b1c73e97106d4c854403de03ac1f9b64edb4" Nov 29 08:37:02 crc kubenswrapper[4795]: E1129 08:37:02.278428 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" 
podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:37:05 crc kubenswrapper[4795]: I1129 08:37:05.518383 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xrrqc"] Nov 29 08:37:05 crc kubenswrapper[4795]: E1129 08:37:05.519400 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab16018b-23c7-49f7-8c5e-a34cfb207773" containerName="extract-utilities" Nov 29 08:37:05 crc kubenswrapper[4795]: I1129 08:37:05.519418 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab16018b-23c7-49f7-8c5e-a34cfb207773" containerName="extract-utilities" Nov 29 08:37:05 crc kubenswrapper[4795]: E1129 08:37:05.519434 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab16018b-23c7-49f7-8c5e-a34cfb207773" containerName="extract-content" Nov 29 08:37:05 crc kubenswrapper[4795]: I1129 08:37:05.519441 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab16018b-23c7-49f7-8c5e-a34cfb207773" containerName="extract-content" Nov 29 08:37:05 crc kubenswrapper[4795]: E1129 08:37:05.519454 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab16018b-23c7-49f7-8c5e-a34cfb207773" containerName="registry-server" Nov 29 08:37:05 crc kubenswrapper[4795]: I1129 08:37:05.519462 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab16018b-23c7-49f7-8c5e-a34cfb207773" containerName="registry-server" Nov 29 08:37:05 crc kubenswrapper[4795]: I1129 08:37:05.519775 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab16018b-23c7-49f7-8c5e-a34cfb207773" containerName="registry-server" Nov 29 08:37:05 crc kubenswrapper[4795]: I1129 08:37:05.522106 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xrrqc" Nov 29 08:37:05 crc kubenswrapper[4795]: I1129 08:37:05.531789 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xrrqc"] Nov 29 08:37:05 crc kubenswrapper[4795]: I1129 08:37:05.602034 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5181c6ac-076b-413c-8b46-93601a7e0ca3-utilities\") pod \"redhat-marketplace-xrrqc\" (UID: \"5181c6ac-076b-413c-8b46-93601a7e0ca3\") " pod="openshift-marketplace/redhat-marketplace-xrrqc" Nov 29 08:37:05 crc kubenswrapper[4795]: I1129 08:37:05.602126 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df5xn\" (UniqueName: \"kubernetes.io/projected/5181c6ac-076b-413c-8b46-93601a7e0ca3-kube-api-access-df5xn\") pod \"redhat-marketplace-xrrqc\" (UID: \"5181c6ac-076b-413c-8b46-93601a7e0ca3\") " pod="openshift-marketplace/redhat-marketplace-xrrqc" Nov 29 08:37:05 crc kubenswrapper[4795]: I1129 08:37:05.602290 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5181c6ac-076b-413c-8b46-93601a7e0ca3-catalog-content\") pod \"redhat-marketplace-xrrqc\" (UID: \"5181c6ac-076b-413c-8b46-93601a7e0ca3\") " pod="openshift-marketplace/redhat-marketplace-xrrqc" Nov 29 08:37:05 crc kubenswrapper[4795]: I1129 08:37:05.704922 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5181c6ac-076b-413c-8b46-93601a7e0ca3-catalog-content\") pod \"redhat-marketplace-xrrqc\" (UID: \"5181c6ac-076b-413c-8b46-93601a7e0ca3\") " pod="openshift-marketplace/redhat-marketplace-xrrqc" Nov 29 08:37:05 crc kubenswrapper[4795]: I1129 08:37:05.705040 4795 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5181c6ac-076b-413c-8b46-93601a7e0ca3-utilities\") pod \"redhat-marketplace-xrrqc\" (UID: \"5181c6ac-076b-413c-8b46-93601a7e0ca3\") " pod="openshift-marketplace/redhat-marketplace-xrrqc" Nov 29 08:37:05 crc kubenswrapper[4795]: I1129 08:37:05.705076 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df5xn\" (UniqueName: \"kubernetes.io/projected/5181c6ac-076b-413c-8b46-93601a7e0ca3-kube-api-access-df5xn\") pod \"redhat-marketplace-xrrqc\" (UID: \"5181c6ac-076b-413c-8b46-93601a7e0ca3\") " pod="openshift-marketplace/redhat-marketplace-xrrqc" Nov 29 08:37:05 crc kubenswrapper[4795]: I1129 08:37:05.705542 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5181c6ac-076b-413c-8b46-93601a7e0ca3-catalog-content\") pod \"redhat-marketplace-xrrqc\" (UID: \"5181c6ac-076b-413c-8b46-93601a7e0ca3\") " pod="openshift-marketplace/redhat-marketplace-xrrqc" Nov 29 08:37:05 crc kubenswrapper[4795]: I1129 08:37:05.705687 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5181c6ac-076b-413c-8b46-93601a7e0ca3-utilities\") pod \"redhat-marketplace-xrrqc\" (UID: \"5181c6ac-076b-413c-8b46-93601a7e0ca3\") " pod="openshift-marketplace/redhat-marketplace-xrrqc" Nov 29 08:37:05 crc kubenswrapper[4795]: I1129 08:37:05.725156 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df5xn\" (UniqueName: \"kubernetes.io/projected/5181c6ac-076b-413c-8b46-93601a7e0ca3-kube-api-access-df5xn\") pod \"redhat-marketplace-xrrqc\" (UID: \"5181c6ac-076b-413c-8b46-93601a7e0ca3\") " pod="openshift-marketplace/redhat-marketplace-xrrqc" Nov 29 08:37:05 crc kubenswrapper[4795]: I1129 08:37:05.847529 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xrrqc" Nov 29 08:37:06 crc kubenswrapper[4795]: I1129 08:37:06.354701 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xrrqc"] Nov 29 08:37:06 crc kubenswrapper[4795]: I1129 08:37:06.810470 4795 generic.go:334] "Generic (PLEG): container finished" podID="5181c6ac-076b-413c-8b46-93601a7e0ca3" containerID="db4d9eda01994b03c71cb4538eb8d092d3e63d28595a616aefb938eac1fa3b96" exitCode=0 Nov 29 08:37:06 crc kubenswrapper[4795]: I1129 08:37:06.810558 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrrqc" event={"ID":"5181c6ac-076b-413c-8b46-93601a7e0ca3","Type":"ContainerDied","Data":"db4d9eda01994b03c71cb4538eb8d092d3e63d28595a616aefb938eac1fa3b96"} Nov 29 08:37:06 crc kubenswrapper[4795]: I1129 08:37:06.810730 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrrqc" event={"ID":"5181c6ac-076b-413c-8b46-93601a7e0ca3","Type":"ContainerStarted","Data":"13c359bdce7c964396b7e63623d500fe80dabb101feac8f4557a1dfc84b223b2"} Nov 29 08:37:08 crc kubenswrapper[4795]: I1129 08:37:08.844632 4795 generic.go:334] "Generic (PLEG): container finished" podID="5181c6ac-076b-413c-8b46-93601a7e0ca3" containerID="3b7a753a106c654c4c4ac3e3cd23c4c9d65a1727bed4b2edff1178df1039d651" exitCode=0 Nov 29 08:37:08 crc kubenswrapper[4795]: I1129 08:37:08.845168 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrrqc" event={"ID":"5181c6ac-076b-413c-8b46-93601a7e0ca3","Type":"ContainerDied","Data":"3b7a753a106c654c4c4ac3e3cd23c4c9d65a1727bed4b2edff1178df1039d651"} Nov 29 08:37:09 crc kubenswrapper[4795]: I1129 08:37:09.863485 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrrqc" 
event={"ID":"5181c6ac-076b-413c-8b46-93601a7e0ca3","Type":"ContainerStarted","Data":"14277791a84e303f357829ebcd8ccc12cf9c20529a7d1722ce6775379da21a6a"} Nov 29 08:37:09 crc kubenswrapper[4795]: I1129 08:37:09.887130 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xrrqc" podStartSLOduration=2.161572463 podStartE2EDuration="4.887107539s" podCreationTimestamp="2025-11-29 08:37:05 +0000 UTC" firstStartedPulling="2025-11-29 08:37:06.812872898 +0000 UTC m=+3472.788448688" lastFinishedPulling="2025-11-29 08:37:09.538407954 +0000 UTC m=+3475.513983764" observedRunningTime="2025-11-29 08:37:09.881699086 +0000 UTC m=+3475.857274876" watchObservedRunningTime="2025-11-29 08:37:09.887107539 +0000 UTC m=+3475.862683329" Nov 29 08:37:14 crc kubenswrapper[4795]: I1129 08:37:14.285938 4795 scope.go:117] "RemoveContainer" containerID="08469485735dcf4c196646b777c5b1c73e97106d4c854403de03ac1f9b64edb4" Nov 29 08:37:14 crc kubenswrapper[4795]: E1129 08:37:14.287395 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:37:15 crc kubenswrapper[4795]: I1129 08:37:15.847648 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xrrqc" Nov 29 08:37:15 crc kubenswrapper[4795]: I1129 08:37:15.847974 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xrrqc" Nov 29 08:37:15 crc kubenswrapper[4795]: I1129 08:37:15.908304 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-xrrqc" Nov 29 08:37:15 crc kubenswrapper[4795]: I1129 08:37:15.991088 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xrrqc" Nov 29 08:37:16 crc kubenswrapper[4795]: I1129 08:37:16.150859 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xrrqc"] Nov 29 08:37:17 crc kubenswrapper[4795]: I1129 08:37:17.953213 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xrrqc" podUID="5181c6ac-076b-413c-8b46-93601a7e0ca3" containerName="registry-server" containerID="cri-o://14277791a84e303f357829ebcd8ccc12cf9c20529a7d1722ce6775379da21a6a" gracePeriod=2 Nov 29 08:37:18 crc kubenswrapper[4795]: I1129 08:37:18.496381 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xrrqc" Nov 29 08:37:18 crc kubenswrapper[4795]: I1129 08:37:18.646018 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df5xn\" (UniqueName: \"kubernetes.io/projected/5181c6ac-076b-413c-8b46-93601a7e0ca3-kube-api-access-df5xn\") pod \"5181c6ac-076b-413c-8b46-93601a7e0ca3\" (UID: \"5181c6ac-076b-413c-8b46-93601a7e0ca3\") " Nov 29 08:37:18 crc kubenswrapper[4795]: I1129 08:37:18.646101 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5181c6ac-076b-413c-8b46-93601a7e0ca3-catalog-content\") pod \"5181c6ac-076b-413c-8b46-93601a7e0ca3\" (UID: \"5181c6ac-076b-413c-8b46-93601a7e0ca3\") " Nov 29 08:37:18 crc kubenswrapper[4795]: I1129 08:37:18.646174 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5181c6ac-076b-413c-8b46-93601a7e0ca3-utilities\") pod 
\"5181c6ac-076b-413c-8b46-93601a7e0ca3\" (UID: \"5181c6ac-076b-413c-8b46-93601a7e0ca3\") " Nov 29 08:37:18 crc kubenswrapper[4795]: I1129 08:37:18.646906 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5181c6ac-076b-413c-8b46-93601a7e0ca3-utilities" (OuterVolumeSpecName: "utilities") pod "5181c6ac-076b-413c-8b46-93601a7e0ca3" (UID: "5181c6ac-076b-413c-8b46-93601a7e0ca3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:37:18 crc kubenswrapper[4795]: I1129 08:37:18.653023 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5181c6ac-076b-413c-8b46-93601a7e0ca3-kube-api-access-df5xn" (OuterVolumeSpecName: "kube-api-access-df5xn") pod "5181c6ac-076b-413c-8b46-93601a7e0ca3" (UID: "5181c6ac-076b-413c-8b46-93601a7e0ca3"). InnerVolumeSpecName "kube-api-access-df5xn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:37:18 crc kubenswrapper[4795]: I1129 08:37:18.665242 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5181c6ac-076b-413c-8b46-93601a7e0ca3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5181c6ac-076b-413c-8b46-93601a7e0ca3" (UID: "5181c6ac-076b-413c-8b46-93601a7e0ca3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:37:18 crc kubenswrapper[4795]: I1129 08:37:18.748379 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-df5xn\" (UniqueName: \"kubernetes.io/projected/5181c6ac-076b-413c-8b46-93601a7e0ca3-kube-api-access-df5xn\") on node \"crc\" DevicePath \"\"" Nov 29 08:37:18 crc kubenswrapper[4795]: I1129 08:37:18.748716 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5181c6ac-076b-413c-8b46-93601a7e0ca3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:37:18 crc kubenswrapper[4795]: I1129 08:37:18.748729 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5181c6ac-076b-413c-8b46-93601a7e0ca3-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:37:18 crc kubenswrapper[4795]: I1129 08:37:18.966462 4795 generic.go:334] "Generic (PLEG): container finished" podID="5181c6ac-076b-413c-8b46-93601a7e0ca3" containerID="14277791a84e303f357829ebcd8ccc12cf9c20529a7d1722ce6775379da21a6a" exitCode=0 Nov 29 08:37:18 crc kubenswrapper[4795]: I1129 08:37:18.966514 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrrqc" event={"ID":"5181c6ac-076b-413c-8b46-93601a7e0ca3","Type":"ContainerDied","Data":"14277791a84e303f357829ebcd8ccc12cf9c20529a7d1722ce6775379da21a6a"} Nov 29 08:37:18 crc kubenswrapper[4795]: I1129 08:37:18.966541 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrrqc" event={"ID":"5181c6ac-076b-413c-8b46-93601a7e0ca3","Type":"ContainerDied","Data":"13c359bdce7c964396b7e63623d500fe80dabb101feac8f4557a1dfc84b223b2"} Nov 29 08:37:18 crc kubenswrapper[4795]: I1129 08:37:18.966557 4795 scope.go:117] "RemoveContainer" containerID="14277791a84e303f357829ebcd8ccc12cf9c20529a7d1722ce6775379da21a6a" Nov 29 08:37:18 crc kubenswrapper[4795]: I1129 
08:37:18.966556 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xrrqc" Nov 29 08:37:19 crc kubenswrapper[4795]: I1129 08:37:19.003997 4795 scope.go:117] "RemoveContainer" containerID="3b7a753a106c654c4c4ac3e3cd23c4c9d65a1727bed4b2edff1178df1039d651" Nov 29 08:37:19 crc kubenswrapper[4795]: I1129 08:37:19.015420 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xrrqc"] Nov 29 08:37:19 crc kubenswrapper[4795]: I1129 08:37:19.023486 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xrrqc"] Nov 29 08:37:19 crc kubenswrapper[4795]: I1129 08:37:19.031797 4795 scope.go:117] "RemoveContainer" containerID="db4d9eda01994b03c71cb4538eb8d092d3e63d28595a616aefb938eac1fa3b96" Nov 29 08:37:19 crc kubenswrapper[4795]: I1129 08:37:19.084800 4795 scope.go:117] "RemoveContainer" containerID="14277791a84e303f357829ebcd8ccc12cf9c20529a7d1722ce6775379da21a6a" Nov 29 08:37:19 crc kubenswrapper[4795]: E1129 08:37:19.085187 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14277791a84e303f357829ebcd8ccc12cf9c20529a7d1722ce6775379da21a6a\": container with ID starting with 14277791a84e303f357829ebcd8ccc12cf9c20529a7d1722ce6775379da21a6a not found: ID does not exist" containerID="14277791a84e303f357829ebcd8ccc12cf9c20529a7d1722ce6775379da21a6a" Nov 29 08:37:19 crc kubenswrapper[4795]: I1129 08:37:19.085227 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14277791a84e303f357829ebcd8ccc12cf9c20529a7d1722ce6775379da21a6a"} err="failed to get container status \"14277791a84e303f357829ebcd8ccc12cf9c20529a7d1722ce6775379da21a6a\": rpc error: code = NotFound desc = could not find container \"14277791a84e303f357829ebcd8ccc12cf9c20529a7d1722ce6775379da21a6a\": container with ID starting with 
14277791a84e303f357829ebcd8ccc12cf9c20529a7d1722ce6775379da21a6a not found: ID does not exist" Nov 29 08:37:19 crc kubenswrapper[4795]: I1129 08:37:19.085257 4795 scope.go:117] "RemoveContainer" containerID="3b7a753a106c654c4c4ac3e3cd23c4c9d65a1727bed4b2edff1178df1039d651" Nov 29 08:37:19 crc kubenswrapper[4795]: E1129 08:37:19.085492 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b7a753a106c654c4c4ac3e3cd23c4c9d65a1727bed4b2edff1178df1039d651\": container with ID starting with 3b7a753a106c654c4c4ac3e3cd23c4c9d65a1727bed4b2edff1178df1039d651 not found: ID does not exist" containerID="3b7a753a106c654c4c4ac3e3cd23c4c9d65a1727bed4b2edff1178df1039d651" Nov 29 08:37:19 crc kubenswrapper[4795]: I1129 08:37:19.085525 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b7a753a106c654c4c4ac3e3cd23c4c9d65a1727bed4b2edff1178df1039d651"} err="failed to get container status \"3b7a753a106c654c4c4ac3e3cd23c4c9d65a1727bed4b2edff1178df1039d651\": rpc error: code = NotFound desc = could not find container \"3b7a753a106c654c4c4ac3e3cd23c4c9d65a1727bed4b2edff1178df1039d651\": container with ID starting with 3b7a753a106c654c4c4ac3e3cd23c4c9d65a1727bed4b2edff1178df1039d651 not found: ID does not exist" Nov 29 08:37:19 crc kubenswrapper[4795]: I1129 08:37:19.085543 4795 scope.go:117] "RemoveContainer" containerID="db4d9eda01994b03c71cb4538eb8d092d3e63d28595a616aefb938eac1fa3b96" Nov 29 08:37:19 crc kubenswrapper[4795]: E1129 08:37:19.085823 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db4d9eda01994b03c71cb4538eb8d092d3e63d28595a616aefb938eac1fa3b96\": container with ID starting with db4d9eda01994b03c71cb4538eb8d092d3e63d28595a616aefb938eac1fa3b96 not found: ID does not exist" containerID="db4d9eda01994b03c71cb4538eb8d092d3e63d28595a616aefb938eac1fa3b96" Nov 29 08:37:19 crc 
kubenswrapper[4795]: I1129 08:37:19.085849 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db4d9eda01994b03c71cb4538eb8d092d3e63d28595a616aefb938eac1fa3b96"} err="failed to get container status \"db4d9eda01994b03c71cb4538eb8d092d3e63d28595a616aefb938eac1fa3b96\": rpc error: code = NotFound desc = could not find container \"db4d9eda01994b03c71cb4538eb8d092d3e63d28595a616aefb938eac1fa3b96\": container with ID starting with db4d9eda01994b03c71cb4538eb8d092d3e63d28595a616aefb938eac1fa3b96 not found: ID does not exist" Nov 29 08:37:20 crc kubenswrapper[4795]: I1129 08:37:20.291105 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5181c6ac-076b-413c-8b46-93601a7e0ca3" path="/var/lib/kubelet/pods/5181c6ac-076b-413c-8b46-93601a7e0ca3/volumes" Nov 29 08:37:29 crc kubenswrapper[4795]: I1129 08:37:29.276925 4795 scope.go:117] "RemoveContainer" containerID="08469485735dcf4c196646b777c5b1c73e97106d4c854403de03ac1f9b64edb4" Nov 29 08:37:29 crc kubenswrapper[4795]: E1129 08:37:29.279649 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:37:40 crc kubenswrapper[4795]: I1129 08:37:40.276136 4795 scope.go:117] "RemoveContainer" containerID="08469485735dcf4c196646b777c5b1c73e97106d4c854403de03ac1f9b64edb4" Nov 29 08:37:40 crc kubenswrapper[4795]: E1129 08:37:40.276995 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:37:52 crc kubenswrapper[4795]: I1129 08:37:52.277212 4795 scope.go:117] "RemoveContainer" containerID="08469485735dcf4c196646b777c5b1c73e97106d4c854403de03ac1f9b64edb4" Nov 29 08:37:53 crc kubenswrapper[4795]: I1129 08:37:53.464847 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerStarted","Data":"08f37ef2e40936678a51357c25f40952e4cf3a652d9673063f9ab32cd6c9ad44"} Nov 29 08:38:06 crc kubenswrapper[4795]: I1129 08:38:06.625119 4795 generic.go:334] "Generic (PLEG): container finished" podID="46efc96a-a270-4709-a9a1-cf8d60484215" containerID="30c468b08ccc525e6cd0e4cfdeb99d62e646f51cda97d1185e2c60bb516adae2" exitCode=0 Nov 29 08:38:06 crc kubenswrapper[4795]: I1129 08:38:06.625497 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk" event={"ID":"46efc96a-a270-4709-a9a1-cf8d60484215","Type":"ContainerDied","Data":"30c468b08ccc525e6cd0e4cfdeb99d62e646f51cda97d1185e2c60bb516adae2"} Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.121180 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.228388 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/46efc96a-a270-4709-a9a1-cf8d60484215-ceilometer-ipmi-config-data-0\") pod \"46efc96a-a270-4709-a9a1-cf8d60484215\" (UID: \"46efc96a-a270-4709-a9a1-cf8d60484215\") " Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.228526 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46efc96a-a270-4709-a9a1-cf8d60484215-inventory\") pod \"46efc96a-a270-4709-a9a1-cf8d60484215\" (UID: \"46efc96a-a270-4709-a9a1-cf8d60484215\") " Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.228694 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/46efc96a-a270-4709-a9a1-cf8d60484215-ssh-key\") pod \"46efc96a-a270-4709-a9a1-cf8d60484215\" (UID: \"46efc96a-a270-4709-a9a1-cf8d60484215\") " Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.228747 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfgsf\" (UniqueName: \"kubernetes.io/projected/46efc96a-a270-4709-a9a1-cf8d60484215-kube-api-access-vfgsf\") pod \"46efc96a-a270-4709-a9a1-cf8d60484215\" (UID: \"46efc96a-a270-4709-a9a1-cf8d60484215\") " Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.228817 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/46efc96a-a270-4709-a9a1-cf8d60484215-ceilometer-ipmi-config-data-2\") pod \"46efc96a-a270-4709-a9a1-cf8d60484215\" (UID: \"46efc96a-a270-4709-a9a1-cf8d60484215\") " Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.228872 4795 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46efc96a-a270-4709-a9a1-cf8d60484215-telemetry-power-monitoring-combined-ca-bundle\") pod \"46efc96a-a270-4709-a9a1-cf8d60484215\" (UID: \"46efc96a-a270-4709-a9a1-cf8d60484215\") " Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.228945 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/46efc96a-a270-4709-a9a1-cf8d60484215-ceilometer-ipmi-config-data-1\") pod \"46efc96a-a270-4709-a9a1-cf8d60484215\" (UID: \"46efc96a-a270-4709-a9a1-cf8d60484215\") " Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.256135 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46efc96a-a270-4709-a9a1-cf8d60484215-kube-api-access-vfgsf" (OuterVolumeSpecName: "kube-api-access-vfgsf") pod "46efc96a-a270-4709-a9a1-cf8d60484215" (UID: "46efc96a-a270-4709-a9a1-cf8d60484215"). InnerVolumeSpecName "kube-api-access-vfgsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.264859 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46efc96a-a270-4709-a9a1-cf8d60484215-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "46efc96a-a270-4709-a9a1-cf8d60484215" (UID: "46efc96a-a270-4709-a9a1-cf8d60484215"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.271952 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46efc96a-a270-4709-a9a1-cf8d60484215-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "46efc96a-a270-4709-a9a1-cf8d60484215" (UID: "46efc96a-a270-4709-a9a1-cf8d60484215"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.280226 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46efc96a-a270-4709-a9a1-cf8d60484215-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "46efc96a-a270-4709-a9a1-cf8d60484215" (UID: "46efc96a-a270-4709-a9a1-cf8d60484215"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.282803 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46efc96a-a270-4709-a9a1-cf8d60484215-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "46efc96a-a270-4709-a9a1-cf8d60484215" (UID: "46efc96a-a270-4709-a9a1-cf8d60484215"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.298189 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46efc96a-a270-4709-a9a1-cf8d60484215-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "46efc96a-a270-4709-a9a1-cf8d60484215" (UID: "46efc96a-a270-4709-a9a1-cf8d60484215"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.311255 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46efc96a-a270-4709-a9a1-cf8d60484215-inventory" (OuterVolumeSpecName: "inventory") pod "46efc96a-a270-4709-a9a1-cf8d60484215" (UID: "46efc96a-a270-4709-a9a1-cf8d60484215"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.332080 4795 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46efc96a-a270-4709-a9a1-cf8d60484215-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.332114 4795 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/46efc96a-a270-4709-a9a1-cf8d60484215-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.332125 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfgsf\" (UniqueName: \"kubernetes.io/projected/46efc96a-a270-4709-a9a1-cf8d60484215-kube-api-access-vfgsf\") on node \"crc\" DevicePath \"\"" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.332135 4795 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/46efc96a-a270-4709-a9a1-cf8d60484215-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.332147 4795 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46efc96a-a270-4709-a9a1-cf8d60484215-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.332161 4795 reconciler_common.go:293] "Volume detached for 
volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/46efc96a-a270-4709-a9a1-cf8d60484215-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.332171 4795 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/46efc96a-a270-4709-a9a1-cf8d60484215-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.652630 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk" event={"ID":"46efc96a-a270-4709-a9a1-cf8d60484215","Type":"ContainerDied","Data":"997376ca80cca7de4bee1d67ca3fd50dc8f3478237b9f359f9f29e4b1842cafc"} Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.653147 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="997376ca80cca7de4bee1d67ca3fd50dc8f3478237b9f359f9f29e4b1842cafc" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.653281 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.761816 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-g2krp"] Nov 29 08:38:08 crc kubenswrapper[4795]: E1129 08:38:08.762443 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5181c6ac-076b-413c-8b46-93601a7e0ca3" containerName="registry-server" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.762461 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="5181c6ac-076b-413c-8b46-93601a7e0ca3" containerName="registry-server" Nov 29 08:38:08 crc kubenswrapper[4795]: E1129 08:38:08.762481 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5181c6ac-076b-413c-8b46-93601a7e0ca3" containerName="extract-content" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.762490 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="5181c6ac-076b-413c-8b46-93601a7e0ca3" containerName="extract-content" Nov 29 08:38:08 crc kubenswrapper[4795]: E1129 08:38:08.762514 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5181c6ac-076b-413c-8b46-93601a7e0ca3" containerName="extract-utilities" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.762525 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="5181c6ac-076b-413c-8b46-93601a7e0ca3" containerName="extract-utilities" Nov 29 08:38:08 crc kubenswrapper[4795]: E1129 08:38:08.762571 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46efc96a-a270-4709-a9a1-cf8d60484215" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.762580 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="46efc96a-a270-4709-a9a1-cf8d60484215" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Nov 29 08:38:08 crc 
kubenswrapper[4795]: I1129 08:38:08.762949 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="5181c6ac-076b-413c-8b46-93601a7e0ca3" containerName="registry-server" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.762981 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="46efc96a-a270-4709-a9a1-cf8d60484215" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.763803 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g2krp" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.769689 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.769952 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.770101 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nfhsg" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.770638 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.770876 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.780124 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-g2krp"] Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.847218 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4700d212-5bd7-4b67-a36a-ae486608b8a8-inventory\") pod 
\"logging-edpm-deployment-openstack-edpm-ipam-g2krp\" (UID: \"4700d212-5bd7-4b67-a36a-ae486608b8a8\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g2krp" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.847288 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4700d212-5bd7-4b67-a36a-ae486608b8a8-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-g2krp\" (UID: \"4700d212-5bd7-4b67-a36a-ae486608b8a8\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g2krp" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.847344 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4700d212-5bd7-4b67-a36a-ae486608b8a8-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-g2krp\" (UID: \"4700d212-5bd7-4b67-a36a-ae486608b8a8\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g2krp" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.847435 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klz4r\" (UniqueName: \"kubernetes.io/projected/4700d212-5bd7-4b67-a36a-ae486608b8a8-kube-api-access-klz4r\") pod \"logging-edpm-deployment-openstack-edpm-ipam-g2krp\" (UID: \"4700d212-5bd7-4b67-a36a-ae486608b8a8\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g2krp" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.847477 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4700d212-5bd7-4b67-a36a-ae486608b8a8-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-g2krp\" (UID: \"4700d212-5bd7-4b67-a36a-ae486608b8a8\") " 
pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g2krp" Nov 29 08:38:08 crc kubenswrapper[4795]: E1129 08:38:08.946831 4795 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46efc96a_a270_4709_a9a1_cf8d60484215.slice\": RecentStats: unable to find data in memory cache]" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.949124 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4700d212-5bd7-4b67-a36a-ae486608b8a8-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-g2krp\" (UID: \"4700d212-5bd7-4b67-a36a-ae486608b8a8\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g2krp" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.949222 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4700d212-5bd7-4b67-a36a-ae486608b8a8-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-g2krp\" (UID: \"4700d212-5bd7-4b67-a36a-ae486608b8a8\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g2krp" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.949258 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4700d212-5bd7-4b67-a36a-ae486608b8a8-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-g2krp\" (UID: \"4700d212-5bd7-4b67-a36a-ae486608b8a8\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g2krp" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.949309 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4700d212-5bd7-4b67-a36a-ae486608b8a8-ssh-key\") pod 
\"logging-edpm-deployment-openstack-edpm-ipam-g2krp\" (UID: \"4700d212-5bd7-4b67-a36a-ae486608b8a8\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g2krp" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.949392 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klz4r\" (UniqueName: \"kubernetes.io/projected/4700d212-5bd7-4b67-a36a-ae486608b8a8-kube-api-access-klz4r\") pod \"logging-edpm-deployment-openstack-edpm-ipam-g2krp\" (UID: \"4700d212-5bd7-4b67-a36a-ae486608b8a8\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g2krp" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.953885 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4700d212-5bd7-4b67-a36a-ae486608b8a8-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-g2krp\" (UID: \"4700d212-5bd7-4b67-a36a-ae486608b8a8\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g2krp" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.954142 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4700d212-5bd7-4b67-a36a-ae486608b8a8-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-g2krp\" (UID: \"4700d212-5bd7-4b67-a36a-ae486608b8a8\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g2krp" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.956384 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4700d212-5bd7-4b67-a36a-ae486608b8a8-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-g2krp\" (UID: \"4700d212-5bd7-4b67-a36a-ae486608b8a8\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g2krp" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.962157 4795 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4700d212-5bd7-4b67-a36a-ae486608b8a8-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-g2krp\" (UID: \"4700d212-5bd7-4b67-a36a-ae486608b8a8\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g2krp" Nov 29 08:38:08 crc kubenswrapper[4795]: I1129 08:38:08.964923 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klz4r\" (UniqueName: \"kubernetes.io/projected/4700d212-5bd7-4b67-a36a-ae486608b8a8-kube-api-access-klz4r\") pod \"logging-edpm-deployment-openstack-edpm-ipam-g2krp\" (UID: \"4700d212-5bd7-4b67-a36a-ae486608b8a8\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g2krp" Nov 29 08:38:09 crc kubenswrapper[4795]: I1129 08:38:09.096864 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g2krp" Nov 29 08:38:09 crc kubenswrapper[4795]: I1129 08:38:09.704283 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-g2krp"] Nov 29 08:38:09 crc kubenswrapper[4795]: W1129 08:38:09.705810 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4700d212_5bd7_4b67_a36a_ae486608b8a8.slice/crio-3cb7507989a4311705cd8c683d5442f12202612f3df1555d10736b585542becb WatchSource:0}: Error finding container 3cb7507989a4311705cd8c683d5442f12202612f3df1555d10736b585542becb: Status 404 returned error can't find the container with id 3cb7507989a4311705cd8c683d5442f12202612f3df1555d10736b585542becb Nov 29 08:38:10 crc kubenswrapper[4795]: I1129 08:38:10.680230 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g2krp" 
event={"ID":"4700d212-5bd7-4b67-a36a-ae486608b8a8","Type":"ContainerStarted","Data":"afded806a9bada1ea7afb7b625588b03017d4d65fab2a05b6cc857e14b7f94e0"} Nov 29 08:38:10 crc kubenswrapper[4795]: I1129 08:38:10.680765 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g2krp" event={"ID":"4700d212-5bd7-4b67-a36a-ae486608b8a8","Type":"ContainerStarted","Data":"3cb7507989a4311705cd8c683d5442f12202612f3df1555d10736b585542becb"} Nov 29 08:38:10 crc kubenswrapper[4795]: I1129 08:38:10.705552 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g2krp" podStartSLOduration=2.190622825 podStartE2EDuration="2.705533947s" podCreationTimestamp="2025-11-29 08:38:08 +0000 UTC" firstStartedPulling="2025-11-29 08:38:09.708510642 +0000 UTC m=+3535.684086442" lastFinishedPulling="2025-11-29 08:38:10.223421774 +0000 UTC m=+3536.198997564" observedRunningTime="2025-11-29 08:38:10.694896886 +0000 UTC m=+3536.670472686" watchObservedRunningTime="2025-11-29 08:38:10.705533947 +0000 UTC m=+3536.681109737" Nov 29 08:38:26 crc kubenswrapper[4795]: I1129 08:38:26.851795 4795 generic.go:334] "Generic (PLEG): container finished" podID="4700d212-5bd7-4b67-a36a-ae486608b8a8" containerID="afded806a9bada1ea7afb7b625588b03017d4d65fab2a05b6cc857e14b7f94e0" exitCode=0 Nov 29 08:38:26 crc kubenswrapper[4795]: I1129 08:38:26.851880 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g2krp" event={"ID":"4700d212-5bd7-4b67-a36a-ae486608b8a8","Type":"ContainerDied","Data":"afded806a9bada1ea7afb7b625588b03017d4d65fab2a05b6cc857e14b7f94e0"} Nov 29 08:38:28 crc kubenswrapper[4795]: I1129 08:38:28.422242 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g2krp" Nov 29 08:38:28 crc kubenswrapper[4795]: I1129 08:38:28.592742 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4700d212-5bd7-4b67-a36a-ae486608b8a8-logging-compute-config-data-0\") pod \"4700d212-5bd7-4b67-a36a-ae486608b8a8\" (UID: \"4700d212-5bd7-4b67-a36a-ae486608b8a8\") " Nov 29 08:38:28 crc kubenswrapper[4795]: I1129 08:38:28.592850 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4700d212-5bd7-4b67-a36a-ae486608b8a8-logging-compute-config-data-1\") pod \"4700d212-5bd7-4b67-a36a-ae486608b8a8\" (UID: \"4700d212-5bd7-4b67-a36a-ae486608b8a8\") " Nov 29 08:38:28 crc kubenswrapper[4795]: I1129 08:38:28.592900 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4700d212-5bd7-4b67-a36a-ae486608b8a8-inventory\") pod \"4700d212-5bd7-4b67-a36a-ae486608b8a8\" (UID: \"4700d212-5bd7-4b67-a36a-ae486608b8a8\") " Nov 29 08:38:28 crc kubenswrapper[4795]: I1129 08:38:28.592923 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klz4r\" (UniqueName: \"kubernetes.io/projected/4700d212-5bd7-4b67-a36a-ae486608b8a8-kube-api-access-klz4r\") pod \"4700d212-5bd7-4b67-a36a-ae486608b8a8\" (UID: \"4700d212-5bd7-4b67-a36a-ae486608b8a8\") " Nov 29 08:38:28 crc kubenswrapper[4795]: I1129 08:38:28.593711 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4700d212-5bd7-4b67-a36a-ae486608b8a8-ssh-key\") pod \"4700d212-5bd7-4b67-a36a-ae486608b8a8\" (UID: \"4700d212-5bd7-4b67-a36a-ae486608b8a8\") " Nov 29 08:38:28 crc kubenswrapper[4795]: I1129 08:38:28.601898 4795 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4700d212-5bd7-4b67-a36a-ae486608b8a8-kube-api-access-klz4r" (OuterVolumeSpecName: "kube-api-access-klz4r") pod "4700d212-5bd7-4b67-a36a-ae486608b8a8" (UID: "4700d212-5bd7-4b67-a36a-ae486608b8a8"). InnerVolumeSpecName "kube-api-access-klz4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:38:28 crc kubenswrapper[4795]: I1129 08:38:28.628585 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4700d212-5bd7-4b67-a36a-ae486608b8a8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4700d212-5bd7-4b67-a36a-ae486608b8a8" (UID: "4700d212-5bd7-4b67-a36a-ae486608b8a8"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:38:28 crc kubenswrapper[4795]: I1129 08:38:28.630143 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4700d212-5bd7-4b67-a36a-ae486608b8a8-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "4700d212-5bd7-4b67-a36a-ae486608b8a8" (UID: "4700d212-5bd7-4b67-a36a-ae486608b8a8"). InnerVolumeSpecName "logging-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:38:28 crc kubenswrapper[4795]: I1129 08:38:28.632027 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4700d212-5bd7-4b67-a36a-ae486608b8a8-inventory" (OuterVolumeSpecName: "inventory") pod "4700d212-5bd7-4b67-a36a-ae486608b8a8" (UID: "4700d212-5bd7-4b67-a36a-ae486608b8a8"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:38:28 crc kubenswrapper[4795]: I1129 08:38:28.635770 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4700d212-5bd7-4b67-a36a-ae486608b8a8-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "4700d212-5bd7-4b67-a36a-ae486608b8a8" (UID: "4700d212-5bd7-4b67-a36a-ae486608b8a8"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:38:28 crc kubenswrapper[4795]: I1129 08:38:28.696578 4795 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4700d212-5bd7-4b67-a36a-ae486608b8a8-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 29 08:38:28 crc kubenswrapper[4795]: I1129 08:38:28.696654 4795 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4700d212-5bd7-4b67-a36a-ae486608b8a8-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 29 08:38:28 crc kubenswrapper[4795]: I1129 08:38:28.696670 4795 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4700d212-5bd7-4b67-a36a-ae486608b8a8-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 08:38:28 crc kubenswrapper[4795]: I1129 08:38:28.696681 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klz4r\" (UniqueName: \"kubernetes.io/projected/4700d212-5bd7-4b67-a36a-ae486608b8a8-kube-api-access-klz4r\") on node \"crc\" DevicePath \"\"" Nov 29 08:38:28 crc kubenswrapper[4795]: I1129 08:38:28.696694 4795 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4700d212-5bd7-4b67-a36a-ae486608b8a8-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 08:38:28 crc kubenswrapper[4795]: I1129 08:38:28.874077 4795 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g2krp" event={"ID":"4700d212-5bd7-4b67-a36a-ae486608b8a8","Type":"ContainerDied","Data":"3cb7507989a4311705cd8c683d5442f12202612f3df1555d10736b585542becb"} Nov 29 08:38:28 crc kubenswrapper[4795]: I1129 08:38:28.874417 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cb7507989a4311705cd8c683d5442f12202612f3df1555d10736b585542becb" Nov 29 08:38:28 crc kubenswrapper[4795]: I1129 08:38:28.874129 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g2krp" Nov 29 08:39:38 crc kubenswrapper[4795]: E1129 08:39:38.124621 4795 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.107:60848->38.102.83.107:42443: write tcp 38.102.83.107:60848->38.102.83.107:42443: write: broken pipe Nov 29 08:40:11 crc kubenswrapper[4795]: I1129 08:40:11.941015 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:40:11 crc kubenswrapper[4795]: I1129 08:40:11.941503 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:40:41 crc kubenswrapper[4795]: I1129 08:40:41.941070 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Nov 29 08:40:41 crc kubenswrapper[4795]: I1129 08:40:41.941715 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:41:11 crc kubenswrapper[4795]: I1129 08:41:11.941101 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:41:11 crc kubenswrapper[4795]: I1129 08:41:11.941879 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:41:11 crc kubenswrapper[4795]: I1129 08:41:11.941948 4795 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" Nov 29 08:41:11 crc kubenswrapper[4795]: I1129 08:41:11.943115 4795 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"08f37ef2e40936678a51357c25f40952e4cf3a652d9673063f9ab32cd6c9ad44"} pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 08:41:11 crc kubenswrapper[4795]: I1129 08:41:11.943196 4795 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" containerID="cri-o://08f37ef2e40936678a51357c25f40952e4cf3a652d9673063f9ab32cd6c9ad44" gracePeriod=600 Nov 29 08:41:12 crc kubenswrapper[4795]: I1129 08:41:12.959381 4795 generic.go:334] "Generic (PLEG): container finished" podID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerID="08f37ef2e40936678a51357c25f40952e4cf3a652d9673063f9ab32cd6c9ad44" exitCode=0 Nov 29 08:41:12 crc kubenswrapper[4795]: I1129 08:41:12.959425 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerDied","Data":"08f37ef2e40936678a51357c25f40952e4cf3a652d9673063f9ab32cd6c9ad44"} Nov 29 08:41:12 crc kubenswrapper[4795]: I1129 08:41:12.959724 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerStarted","Data":"5030ff85ae40c73022065ee3f64b116b77c5909a9198a4179ecaaeacf1816833"} Nov 29 08:41:12 crc kubenswrapper[4795]: I1129 08:41:12.959763 4795 scope.go:117] "RemoveContainer" containerID="08469485735dcf4c196646b777c5b1c73e97106d4c854403de03ac1f9b64edb4" Nov 29 08:42:33 crc kubenswrapper[4795]: I1129 08:42:33.340084 4795 scope.go:117] "RemoveContainer" containerID="7527d1215900e50364a0e9e79fb89f96772702ca7baf921130a94a219c85983f" Nov 29 08:42:33 crc kubenswrapper[4795]: I1129 08:42:33.366706 4795 scope.go:117] "RemoveContainer" containerID="40126fcaf931945be0cb7b8ebba368434cc4adaab612f34f0778416b77f0ba5a" Nov 29 08:42:33 crc kubenswrapper[4795]: I1129 08:42:33.419206 4795 scope.go:117] "RemoveContainer" containerID="51063733f3dc02272449b6cd9708400d3b8326164fac6bea28b8cb25bb82dc8a" Nov 29 08:43:41 crc kubenswrapper[4795]: I1129 08:43:41.940907 4795 patch_prober.go:28] interesting 
pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:43:41 crc kubenswrapper[4795]: I1129 08:43:41.941436 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:44:11 crc kubenswrapper[4795]: I1129 08:44:11.940870 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:44:11 crc kubenswrapper[4795]: I1129 08:44:11.941421 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:44:41 crc kubenswrapper[4795]: I1129 08:44:41.941150 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:44:41 crc kubenswrapper[4795]: I1129 08:44:41.941987 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:44:41 crc kubenswrapper[4795]: I1129 08:44:41.942058 4795 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" Nov 29 08:44:41 crc kubenswrapper[4795]: I1129 08:44:41.943297 4795 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5030ff85ae40c73022065ee3f64b116b77c5909a9198a4179ecaaeacf1816833"} pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 08:44:41 crc kubenswrapper[4795]: I1129 08:44:41.943394 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" containerID="cri-o://5030ff85ae40c73022065ee3f64b116b77c5909a9198a4179ecaaeacf1816833" gracePeriod=600 Nov 29 08:44:42 crc kubenswrapper[4795]: E1129 08:44:42.073425 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:44:42 crc kubenswrapper[4795]: I1129 08:44:42.711350 4795 generic.go:334] "Generic (PLEG): container finished" podID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerID="5030ff85ae40c73022065ee3f64b116b77c5909a9198a4179ecaaeacf1816833" exitCode=0 Nov 29 08:44:42 crc kubenswrapper[4795]: I1129 
08:44:42.711416 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerDied","Data":"5030ff85ae40c73022065ee3f64b116b77c5909a9198a4179ecaaeacf1816833"} Nov 29 08:44:42 crc kubenswrapper[4795]: I1129 08:44:42.711786 4795 scope.go:117] "RemoveContainer" containerID="08f37ef2e40936678a51357c25f40952e4cf3a652d9673063f9ab32cd6c9ad44" Nov 29 08:44:42 crc kubenswrapper[4795]: I1129 08:44:42.712574 4795 scope.go:117] "RemoveContainer" containerID="5030ff85ae40c73022065ee3f64b116b77c5909a9198a4179ecaaeacf1816833" Nov 29 08:44:42 crc kubenswrapper[4795]: E1129 08:44:42.712847 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:44:56 crc kubenswrapper[4795]: I1129 08:44:56.276268 4795 scope.go:117] "RemoveContainer" containerID="5030ff85ae40c73022065ee3f64b116b77c5909a9198a4179ecaaeacf1816833" Nov 29 08:44:56 crc kubenswrapper[4795]: E1129 08:44:56.278400 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:45:00 crc kubenswrapper[4795]: I1129 08:45:00.162288 4795 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29406765-mpfc5"] Nov 29 08:45:00 crc kubenswrapper[4795]: E1129 08:45:00.163583 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4700d212-5bd7-4b67-a36a-ae486608b8a8" containerName="logging-edpm-deployment-openstack-edpm-ipam" Nov 29 08:45:00 crc kubenswrapper[4795]: I1129 08:45:00.163619 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="4700d212-5bd7-4b67-a36a-ae486608b8a8" containerName="logging-edpm-deployment-openstack-edpm-ipam" Nov 29 08:45:00 crc kubenswrapper[4795]: I1129 08:45:00.163955 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="4700d212-5bd7-4b67-a36a-ae486608b8a8" containerName="logging-edpm-deployment-openstack-edpm-ipam" Nov 29 08:45:00 crc kubenswrapper[4795]: I1129 08:45:00.164980 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406765-mpfc5" Nov 29 08:45:00 crc kubenswrapper[4795]: I1129 08:45:00.167661 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 29 08:45:00 crc kubenswrapper[4795]: I1129 08:45:00.167668 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 29 08:45:00 crc kubenswrapper[4795]: I1129 08:45:00.183523 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406765-mpfc5"] Nov 29 08:45:00 crc kubenswrapper[4795]: I1129 08:45:00.237312 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00916148-0906-47ee-b3f1-b243256135ed-config-volume\") pod \"collect-profiles-29406765-mpfc5\" (UID: \"00916148-0906-47ee-b3f1-b243256135ed\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29406765-mpfc5" Nov 29 08:45:00 crc kubenswrapper[4795]: I1129 08:45:00.237665 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24g5j\" (UniqueName: \"kubernetes.io/projected/00916148-0906-47ee-b3f1-b243256135ed-kube-api-access-24g5j\") pod \"collect-profiles-29406765-mpfc5\" (UID: \"00916148-0906-47ee-b3f1-b243256135ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406765-mpfc5" Nov 29 08:45:00 crc kubenswrapper[4795]: I1129 08:45:00.237770 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/00916148-0906-47ee-b3f1-b243256135ed-secret-volume\") pod \"collect-profiles-29406765-mpfc5\" (UID: \"00916148-0906-47ee-b3f1-b243256135ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406765-mpfc5" Nov 29 08:45:00 crc kubenswrapper[4795]: I1129 08:45:00.341891 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24g5j\" (UniqueName: \"kubernetes.io/projected/00916148-0906-47ee-b3f1-b243256135ed-kube-api-access-24g5j\") pod \"collect-profiles-29406765-mpfc5\" (UID: \"00916148-0906-47ee-b3f1-b243256135ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406765-mpfc5" Nov 29 08:45:00 crc kubenswrapper[4795]: I1129 08:45:00.342489 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/00916148-0906-47ee-b3f1-b243256135ed-secret-volume\") pod \"collect-profiles-29406765-mpfc5\" (UID: \"00916148-0906-47ee-b3f1-b243256135ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406765-mpfc5" Nov 29 08:45:00 crc kubenswrapper[4795]: I1129 08:45:00.342540 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/00916148-0906-47ee-b3f1-b243256135ed-config-volume\") pod \"collect-profiles-29406765-mpfc5\" (UID: \"00916148-0906-47ee-b3f1-b243256135ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406765-mpfc5" Nov 29 08:45:00 crc kubenswrapper[4795]: I1129 08:45:00.343742 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00916148-0906-47ee-b3f1-b243256135ed-config-volume\") pod \"collect-profiles-29406765-mpfc5\" (UID: \"00916148-0906-47ee-b3f1-b243256135ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406765-mpfc5" Nov 29 08:45:00 crc kubenswrapper[4795]: I1129 08:45:00.354928 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/00916148-0906-47ee-b3f1-b243256135ed-secret-volume\") pod \"collect-profiles-29406765-mpfc5\" (UID: \"00916148-0906-47ee-b3f1-b243256135ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406765-mpfc5" Nov 29 08:45:00 crc kubenswrapper[4795]: I1129 08:45:00.363474 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24g5j\" (UniqueName: \"kubernetes.io/projected/00916148-0906-47ee-b3f1-b243256135ed-kube-api-access-24g5j\") pod \"collect-profiles-29406765-mpfc5\" (UID: \"00916148-0906-47ee-b3f1-b243256135ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406765-mpfc5" Nov 29 08:45:00 crc kubenswrapper[4795]: I1129 08:45:00.390105 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d8bwf"] Nov 29 08:45:00 crc kubenswrapper[4795]: I1129 08:45:00.397105 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d8bwf" Nov 29 08:45:00 crc kubenswrapper[4795]: I1129 08:45:00.428667 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d8bwf"] Nov 29 08:45:00 crc kubenswrapper[4795]: I1129 08:45:00.495708 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406765-mpfc5" Nov 29 08:45:00 crc kubenswrapper[4795]: I1129 08:45:00.549233 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdpmv\" (UniqueName: \"kubernetes.io/projected/ef520b02-4e02-4b02-a213-51cbaa7d1cb0-kube-api-access-vdpmv\") pod \"community-operators-d8bwf\" (UID: \"ef520b02-4e02-4b02-a213-51cbaa7d1cb0\") " pod="openshift-marketplace/community-operators-d8bwf" Nov 29 08:45:00 crc kubenswrapper[4795]: I1129 08:45:00.549322 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef520b02-4e02-4b02-a213-51cbaa7d1cb0-utilities\") pod \"community-operators-d8bwf\" (UID: \"ef520b02-4e02-4b02-a213-51cbaa7d1cb0\") " pod="openshift-marketplace/community-operators-d8bwf" Nov 29 08:45:00 crc kubenswrapper[4795]: I1129 08:45:00.549631 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef520b02-4e02-4b02-a213-51cbaa7d1cb0-catalog-content\") pod \"community-operators-d8bwf\" (UID: \"ef520b02-4e02-4b02-a213-51cbaa7d1cb0\") " pod="openshift-marketplace/community-operators-d8bwf" Nov 29 08:45:00 crc kubenswrapper[4795]: I1129 08:45:00.652276 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef520b02-4e02-4b02-a213-51cbaa7d1cb0-catalog-content\") pod 
\"community-operators-d8bwf\" (UID: \"ef520b02-4e02-4b02-a213-51cbaa7d1cb0\") " pod="openshift-marketplace/community-operators-d8bwf" Nov 29 08:45:00 crc kubenswrapper[4795]: I1129 08:45:00.652790 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdpmv\" (UniqueName: \"kubernetes.io/projected/ef520b02-4e02-4b02-a213-51cbaa7d1cb0-kube-api-access-vdpmv\") pod \"community-operators-d8bwf\" (UID: \"ef520b02-4e02-4b02-a213-51cbaa7d1cb0\") " pod="openshift-marketplace/community-operators-d8bwf" Nov 29 08:45:00 crc kubenswrapper[4795]: I1129 08:45:00.652830 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef520b02-4e02-4b02-a213-51cbaa7d1cb0-utilities\") pod \"community-operators-d8bwf\" (UID: \"ef520b02-4e02-4b02-a213-51cbaa7d1cb0\") " pod="openshift-marketplace/community-operators-d8bwf" Nov 29 08:45:00 crc kubenswrapper[4795]: I1129 08:45:00.652990 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef520b02-4e02-4b02-a213-51cbaa7d1cb0-catalog-content\") pod \"community-operators-d8bwf\" (UID: \"ef520b02-4e02-4b02-a213-51cbaa7d1cb0\") " pod="openshift-marketplace/community-operators-d8bwf" Nov 29 08:45:00 crc kubenswrapper[4795]: I1129 08:45:00.653308 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef520b02-4e02-4b02-a213-51cbaa7d1cb0-utilities\") pod \"community-operators-d8bwf\" (UID: \"ef520b02-4e02-4b02-a213-51cbaa7d1cb0\") " pod="openshift-marketplace/community-operators-d8bwf" Nov 29 08:45:00 crc kubenswrapper[4795]: I1129 08:45:00.674493 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdpmv\" (UniqueName: \"kubernetes.io/projected/ef520b02-4e02-4b02-a213-51cbaa7d1cb0-kube-api-access-vdpmv\") pod \"community-operators-d8bwf\" (UID: 
\"ef520b02-4e02-4b02-a213-51cbaa7d1cb0\") " pod="openshift-marketplace/community-operators-d8bwf" Nov 29 08:45:00 crc kubenswrapper[4795]: I1129 08:45:00.813208 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d8bwf" Nov 29 08:45:01 crc kubenswrapper[4795]: I1129 08:45:01.055195 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406765-mpfc5"] Nov 29 08:45:01 crc kubenswrapper[4795]: W1129 08:45:01.062682 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00916148_0906_47ee_b3f1_b243256135ed.slice/crio-dddcc1ca7db61a4e897e1977858b51d3d2716c25a0c4831b199acb76386592a2 WatchSource:0}: Error finding container dddcc1ca7db61a4e897e1977858b51d3d2716c25a0c4831b199acb76386592a2: Status 404 returned error can't find the container with id dddcc1ca7db61a4e897e1977858b51d3d2716c25a0c4831b199acb76386592a2 Nov 29 08:45:01 crc kubenswrapper[4795]: W1129 08:45:01.443623 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef520b02_4e02_4b02_a213_51cbaa7d1cb0.slice/crio-d1e92712fcedea847c0f7af71dac136beaa38e32bb054d1d21374288913d64ec WatchSource:0}: Error finding container d1e92712fcedea847c0f7af71dac136beaa38e32bb054d1d21374288913d64ec: Status 404 returned error can't find the container with id d1e92712fcedea847c0f7af71dac136beaa38e32bb054d1d21374288913d64ec Nov 29 08:45:01 crc kubenswrapper[4795]: I1129 08:45:01.445647 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d8bwf"] Nov 29 08:45:01 crc kubenswrapper[4795]: I1129 08:45:01.993005 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406765-mpfc5" 
event={"ID":"00916148-0906-47ee-b3f1-b243256135ed","Type":"ContainerStarted","Data":"bcf34fe16286a7fa1d756b0c7332935520b5c06403ad64bf2ec70ea250b0afc7"} Nov 29 08:45:01 crc kubenswrapper[4795]: I1129 08:45:01.993064 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406765-mpfc5" event={"ID":"00916148-0906-47ee-b3f1-b243256135ed","Type":"ContainerStarted","Data":"dddcc1ca7db61a4e897e1977858b51d3d2716c25a0c4831b199acb76386592a2"} Nov 29 08:45:01 crc kubenswrapper[4795]: I1129 08:45:01.994485 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d8bwf" event={"ID":"ef520b02-4e02-4b02-a213-51cbaa7d1cb0","Type":"ContainerStarted","Data":"d1e92712fcedea847c0f7af71dac136beaa38e32bb054d1d21374288913d64ec"} Nov 29 08:45:02 crc kubenswrapper[4795]: I1129 08:45:02.017632 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29406765-mpfc5" podStartSLOduration=2.017603832 podStartE2EDuration="2.017603832s" podCreationTimestamp="2025-11-29 08:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:45:02.006456466 +0000 UTC m=+3947.982032256" watchObservedRunningTime="2025-11-29 08:45:02.017603832 +0000 UTC m=+3947.993179632" Nov 29 08:45:03 crc kubenswrapper[4795]: I1129 08:45:03.007173 4795 generic.go:334] "Generic (PLEG): container finished" podID="ef520b02-4e02-4b02-a213-51cbaa7d1cb0" containerID="a2ab68a8c2696020b017714aa1833ed60814fdb2f69f946d2f292d423f9c344e" exitCode=0 Nov 29 08:45:03 crc kubenswrapper[4795]: I1129 08:45:03.007489 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d8bwf" event={"ID":"ef520b02-4e02-4b02-a213-51cbaa7d1cb0","Type":"ContainerDied","Data":"a2ab68a8c2696020b017714aa1833ed60814fdb2f69f946d2f292d423f9c344e"} 
Nov 29 08:45:03 crc kubenswrapper[4795]: I1129 08:45:03.009505 4795 generic.go:334] "Generic (PLEG): container finished" podID="00916148-0906-47ee-b3f1-b243256135ed" containerID="bcf34fe16286a7fa1d756b0c7332935520b5c06403ad64bf2ec70ea250b0afc7" exitCode=0
Nov 29 08:45:03 crc kubenswrapper[4795]: I1129 08:45:03.009538 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406765-mpfc5" event={"ID":"00916148-0906-47ee-b3f1-b243256135ed","Type":"ContainerDied","Data":"bcf34fe16286a7fa1d756b0c7332935520b5c06403ad64bf2ec70ea250b0afc7"}
Nov 29 08:45:03 crc kubenswrapper[4795]: I1129 08:45:03.016406 4795 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 29 08:45:04 crc kubenswrapper[4795]: I1129 08:45:04.713141 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406765-mpfc5"
Nov 29 08:45:04 crc kubenswrapper[4795]: I1129 08:45:04.853282 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00916148-0906-47ee-b3f1-b243256135ed-config-volume\") pod \"00916148-0906-47ee-b3f1-b243256135ed\" (UID: \"00916148-0906-47ee-b3f1-b243256135ed\") "
Nov 29 08:45:04 crc kubenswrapper[4795]: I1129 08:45:04.853367 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24g5j\" (UniqueName: \"kubernetes.io/projected/00916148-0906-47ee-b3f1-b243256135ed-kube-api-access-24g5j\") pod \"00916148-0906-47ee-b3f1-b243256135ed\" (UID: \"00916148-0906-47ee-b3f1-b243256135ed\") "
Nov 29 08:45:04 crc kubenswrapper[4795]: I1129 08:45:04.853463 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/00916148-0906-47ee-b3f1-b243256135ed-secret-volume\") pod \"00916148-0906-47ee-b3f1-b243256135ed\" (UID: \"00916148-0906-47ee-b3f1-b243256135ed\") "
Nov 29 08:45:04 crc kubenswrapper[4795]: I1129 08:45:04.855431 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00916148-0906-47ee-b3f1-b243256135ed-config-volume" (OuterVolumeSpecName: "config-volume") pod "00916148-0906-47ee-b3f1-b243256135ed" (UID: "00916148-0906-47ee-b3f1-b243256135ed"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 08:45:04 crc kubenswrapper[4795]: I1129 08:45:04.869193 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00916148-0906-47ee-b3f1-b243256135ed-kube-api-access-24g5j" (OuterVolumeSpecName: "kube-api-access-24g5j") pod "00916148-0906-47ee-b3f1-b243256135ed" (UID: "00916148-0906-47ee-b3f1-b243256135ed"). InnerVolumeSpecName "kube-api-access-24g5j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 08:45:04 crc kubenswrapper[4795]: I1129 08:45:04.870762 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00916148-0906-47ee-b3f1-b243256135ed-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "00916148-0906-47ee-b3f1-b243256135ed" (UID: "00916148-0906-47ee-b3f1-b243256135ed"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 08:45:04 crc kubenswrapper[4795]: I1129 08:45:04.955885 4795 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00916148-0906-47ee-b3f1-b243256135ed-config-volume\") on node \"crc\" DevicePath \"\""
Nov 29 08:45:04 crc kubenswrapper[4795]: I1129 08:45:04.955914 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24g5j\" (UniqueName: \"kubernetes.io/projected/00916148-0906-47ee-b3f1-b243256135ed-kube-api-access-24g5j\") on node \"crc\" DevicePath \"\""
Nov 29 08:45:04 crc kubenswrapper[4795]: I1129 08:45:04.955926 4795 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/00916148-0906-47ee-b3f1-b243256135ed-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 29 08:45:05 crc kubenswrapper[4795]: I1129 08:45:05.035217 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d8bwf" event={"ID":"ef520b02-4e02-4b02-a213-51cbaa7d1cb0","Type":"ContainerStarted","Data":"098fe8811e8d717f73bea31b899db942bbfe8f7b61a2ba7c10df52ec916a60d6"}
Nov 29 08:45:05 crc kubenswrapper[4795]: I1129 08:45:05.040668 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406765-mpfc5" event={"ID":"00916148-0906-47ee-b3f1-b243256135ed","Type":"ContainerDied","Data":"dddcc1ca7db61a4e897e1977858b51d3d2716c25a0c4831b199acb76386592a2"}
Nov 29 08:45:05 crc kubenswrapper[4795]: I1129 08:45:05.040725 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dddcc1ca7db61a4e897e1977858b51d3d2716c25a0c4831b199acb76386592a2"
Nov 29 08:45:05 crc kubenswrapper[4795]: I1129 08:45:05.040806 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406765-mpfc5"
Nov 29 08:45:05 crc kubenswrapper[4795]: I1129 08:45:05.093009 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406720-55czb"]
Nov 29 08:45:05 crc kubenswrapper[4795]: I1129 08:45:05.103011 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406720-55czb"]
Nov 29 08:45:06 crc kubenswrapper[4795]: I1129 08:45:06.866904 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87f01948-4fcf-443c-86a5-52cd4ea2497d" path="/var/lib/kubelet/pods/87f01948-4fcf-443c-86a5-52cd4ea2497d/volumes"
Nov 29 08:45:07 crc kubenswrapper[4795]: I1129 08:45:07.075158 4795 generic.go:334] "Generic (PLEG): container finished" podID="ef520b02-4e02-4b02-a213-51cbaa7d1cb0" containerID="098fe8811e8d717f73bea31b899db942bbfe8f7b61a2ba7c10df52ec916a60d6" exitCode=0
Nov 29 08:45:07 crc kubenswrapper[4795]: I1129 08:45:07.075214 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d8bwf" event={"ID":"ef520b02-4e02-4b02-a213-51cbaa7d1cb0","Type":"ContainerDied","Data":"098fe8811e8d717f73bea31b899db942bbfe8f7b61a2ba7c10df52ec916a60d6"}
Nov 29 08:45:08 crc kubenswrapper[4795]: I1129 08:45:08.087530 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d8bwf" event={"ID":"ef520b02-4e02-4b02-a213-51cbaa7d1cb0","Type":"ContainerStarted","Data":"1dc7288b13a8baaaba894618faa3633cf868d283f6c321ba2a5f4235df6c764b"}
Nov 29 08:45:08 crc kubenswrapper[4795]: I1129 08:45:08.109352 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d8bwf" podStartSLOduration=3.430492829 podStartE2EDuration="8.109334078s" podCreationTimestamp="2025-11-29 08:45:00 +0000 UTC" firstStartedPulling="2025-11-29 08:45:03.016215652 +0000 UTC m=+3948.991791442" lastFinishedPulling="2025-11-29 08:45:07.695056901 +0000 UTC m=+3953.670632691" observedRunningTime="2025-11-29 08:45:08.108018401 +0000 UTC m=+3954.083594191" watchObservedRunningTime="2025-11-29 08:45:08.109334078 +0000 UTC m=+3954.084909868"
Nov 29 08:45:10 crc kubenswrapper[4795]: I1129 08:45:10.275495 4795 scope.go:117] "RemoveContainer" containerID="5030ff85ae40c73022065ee3f64b116b77c5909a9198a4179ecaaeacf1816833"
Nov 29 08:45:10 crc kubenswrapper[4795]: E1129 08:45:10.276317 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1"
Nov 29 08:45:10 crc kubenswrapper[4795]: I1129 08:45:10.813693 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d8bwf"
Nov 29 08:45:10 crc kubenswrapper[4795]: I1129 08:45:10.813785 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d8bwf"
Nov 29 08:45:11 crc kubenswrapper[4795]: I1129 08:45:11.866300 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-d8bwf" podUID="ef520b02-4e02-4b02-a213-51cbaa7d1cb0" containerName="registry-server" probeResult="failure" output=<
Nov 29 08:45:11 crc kubenswrapper[4795]: timeout: failed to connect service ":50051" within 1s
Nov 29 08:45:11 crc kubenswrapper[4795]: >
Nov 29 08:45:20 crc kubenswrapper[4795]: I1129 08:45:20.861201 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d8bwf"
Nov 29 08:45:20 crc kubenswrapper[4795]: I1129 08:45:20.934024 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d8bwf"
Nov 29 08:45:21 crc kubenswrapper[4795]: I1129 08:45:21.105646 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d8bwf"]
Nov 29 08:45:21 crc kubenswrapper[4795]: I1129 08:45:21.276488 4795 scope.go:117] "RemoveContainer" containerID="5030ff85ae40c73022065ee3f64b116b77c5909a9198a4179ecaaeacf1816833"
Nov 29 08:45:21 crc kubenswrapper[4795]: E1129 08:45:21.276864 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1"
Nov 29 08:45:22 crc kubenswrapper[4795]: I1129 08:45:22.251810 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-d8bwf" podUID="ef520b02-4e02-4b02-a213-51cbaa7d1cb0" containerName="registry-server" containerID="cri-o://1dc7288b13a8baaaba894618faa3633cf868d283f6c321ba2a5f4235df6c764b" gracePeriod=2
Nov 29 08:45:22 crc kubenswrapper[4795]: I1129 08:45:22.961494 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d8bwf"
Nov 29 08:45:23 crc kubenswrapper[4795]: I1129 08:45:23.055943 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef520b02-4e02-4b02-a213-51cbaa7d1cb0-catalog-content\") pod \"ef520b02-4e02-4b02-a213-51cbaa7d1cb0\" (UID: \"ef520b02-4e02-4b02-a213-51cbaa7d1cb0\") "
Nov 29 08:45:23 crc kubenswrapper[4795]: I1129 08:45:23.056059 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef520b02-4e02-4b02-a213-51cbaa7d1cb0-utilities\") pod \"ef520b02-4e02-4b02-a213-51cbaa7d1cb0\" (UID: \"ef520b02-4e02-4b02-a213-51cbaa7d1cb0\") "
Nov 29 08:45:23 crc kubenswrapper[4795]: I1129 08:45:23.056110 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdpmv\" (UniqueName: \"kubernetes.io/projected/ef520b02-4e02-4b02-a213-51cbaa7d1cb0-kube-api-access-vdpmv\") pod \"ef520b02-4e02-4b02-a213-51cbaa7d1cb0\" (UID: \"ef520b02-4e02-4b02-a213-51cbaa7d1cb0\") "
Nov 29 08:45:23 crc kubenswrapper[4795]: I1129 08:45:23.057362 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef520b02-4e02-4b02-a213-51cbaa7d1cb0-utilities" (OuterVolumeSpecName: "utilities") pod "ef520b02-4e02-4b02-a213-51cbaa7d1cb0" (UID: "ef520b02-4e02-4b02-a213-51cbaa7d1cb0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 08:45:23 crc kubenswrapper[4795]: I1129 08:45:23.066321 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef520b02-4e02-4b02-a213-51cbaa7d1cb0-kube-api-access-vdpmv" (OuterVolumeSpecName: "kube-api-access-vdpmv") pod "ef520b02-4e02-4b02-a213-51cbaa7d1cb0" (UID: "ef520b02-4e02-4b02-a213-51cbaa7d1cb0"). InnerVolumeSpecName "kube-api-access-vdpmv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 08:45:23 crc kubenswrapper[4795]: I1129 08:45:23.104871 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef520b02-4e02-4b02-a213-51cbaa7d1cb0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef520b02-4e02-4b02-a213-51cbaa7d1cb0" (UID: "ef520b02-4e02-4b02-a213-51cbaa7d1cb0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 08:45:23 crc kubenswrapper[4795]: I1129 08:45:23.158575 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef520b02-4e02-4b02-a213-51cbaa7d1cb0-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 29 08:45:23 crc kubenswrapper[4795]: I1129 08:45:23.158662 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef520b02-4e02-4b02-a213-51cbaa7d1cb0-utilities\") on node \"crc\" DevicePath \"\""
Nov 29 08:45:23 crc kubenswrapper[4795]: I1129 08:45:23.158673 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdpmv\" (UniqueName: \"kubernetes.io/projected/ef520b02-4e02-4b02-a213-51cbaa7d1cb0-kube-api-access-vdpmv\") on node \"crc\" DevicePath \"\""
Nov 29 08:45:23 crc kubenswrapper[4795]: I1129 08:45:23.281137 4795 generic.go:334] "Generic (PLEG): container finished" podID="ef520b02-4e02-4b02-a213-51cbaa7d1cb0" containerID="1dc7288b13a8baaaba894618faa3633cf868d283f6c321ba2a5f4235df6c764b" exitCode=0
Nov 29 08:45:23 crc kubenswrapper[4795]: I1129 08:45:23.281175 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d8bwf" event={"ID":"ef520b02-4e02-4b02-a213-51cbaa7d1cb0","Type":"ContainerDied","Data":"1dc7288b13a8baaaba894618faa3633cf868d283f6c321ba2a5f4235df6c764b"}
Nov 29 08:45:23 crc kubenswrapper[4795]: I1129 08:45:23.281196 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d8bwf" event={"ID":"ef520b02-4e02-4b02-a213-51cbaa7d1cb0","Type":"ContainerDied","Data":"d1e92712fcedea847c0f7af71dac136beaa38e32bb054d1d21374288913d64ec"}
Nov 29 08:45:23 crc kubenswrapper[4795]: I1129 08:45:23.281210 4795 scope.go:117] "RemoveContainer" containerID="1dc7288b13a8baaaba894618faa3633cf868d283f6c321ba2a5f4235df6c764b"
Nov 29 08:45:23 crc kubenswrapper[4795]: I1129 08:45:23.281311 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d8bwf"
Nov 29 08:45:23 crc kubenswrapper[4795]: I1129 08:45:23.323128 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d8bwf"]
Nov 29 08:45:23 crc kubenswrapper[4795]: I1129 08:45:23.334845 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-d8bwf"]
Nov 29 08:45:23 crc kubenswrapper[4795]: I1129 08:45:23.338627 4795 scope.go:117] "RemoveContainer" containerID="098fe8811e8d717f73bea31b899db942bbfe8f7b61a2ba7c10df52ec916a60d6"
Nov 29 08:45:23 crc kubenswrapper[4795]: I1129 08:45:23.370492 4795 scope.go:117] "RemoveContainer" containerID="a2ab68a8c2696020b017714aa1833ed60814fdb2f69f946d2f292d423f9c344e"
Nov 29 08:45:23 crc kubenswrapper[4795]: I1129 08:45:23.422447 4795 scope.go:117] "RemoveContainer" containerID="1dc7288b13a8baaaba894618faa3633cf868d283f6c321ba2a5f4235df6c764b"
Nov 29 08:45:23 crc kubenswrapper[4795]: E1129 08:45:23.423113 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dc7288b13a8baaaba894618faa3633cf868d283f6c321ba2a5f4235df6c764b\": container with ID starting with 1dc7288b13a8baaaba894618faa3633cf868d283f6c321ba2a5f4235df6c764b not found: ID does not exist" containerID="1dc7288b13a8baaaba894618faa3633cf868d283f6c321ba2a5f4235df6c764b"
Nov 29 08:45:23 crc kubenswrapper[4795]: I1129 08:45:23.423169 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dc7288b13a8baaaba894618faa3633cf868d283f6c321ba2a5f4235df6c764b"} err="failed to get container status \"1dc7288b13a8baaaba894618faa3633cf868d283f6c321ba2a5f4235df6c764b\": rpc error: code = NotFound desc = could not find container \"1dc7288b13a8baaaba894618faa3633cf868d283f6c321ba2a5f4235df6c764b\": container with ID starting with 1dc7288b13a8baaaba894618faa3633cf868d283f6c321ba2a5f4235df6c764b not found: ID does not exist"
Nov 29 08:45:23 crc kubenswrapper[4795]: I1129 08:45:23.423201 4795 scope.go:117] "RemoveContainer" containerID="098fe8811e8d717f73bea31b899db942bbfe8f7b61a2ba7c10df52ec916a60d6"
Nov 29 08:45:23 crc kubenswrapper[4795]: E1129 08:45:23.423786 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"098fe8811e8d717f73bea31b899db942bbfe8f7b61a2ba7c10df52ec916a60d6\": container with ID starting with 098fe8811e8d717f73bea31b899db942bbfe8f7b61a2ba7c10df52ec916a60d6 not found: ID does not exist" containerID="098fe8811e8d717f73bea31b899db942bbfe8f7b61a2ba7c10df52ec916a60d6"
Nov 29 08:45:23 crc kubenswrapper[4795]: I1129 08:45:23.423811 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"098fe8811e8d717f73bea31b899db942bbfe8f7b61a2ba7c10df52ec916a60d6"} err="failed to get container status \"098fe8811e8d717f73bea31b899db942bbfe8f7b61a2ba7c10df52ec916a60d6\": rpc error: code = NotFound desc = could not find container \"098fe8811e8d717f73bea31b899db942bbfe8f7b61a2ba7c10df52ec916a60d6\": container with ID starting with 098fe8811e8d717f73bea31b899db942bbfe8f7b61a2ba7c10df52ec916a60d6 not found: ID does not exist"
Nov 29 08:45:23 crc kubenswrapper[4795]: I1129 08:45:23.423826 4795 scope.go:117] "RemoveContainer" containerID="a2ab68a8c2696020b017714aa1833ed60814fdb2f69f946d2f292d423f9c344e"
Nov 29 08:45:23 crc kubenswrapper[4795]: E1129 08:45:23.424262 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2ab68a8c2696020b017714aa1833ed60814fdb2f69f946d2f292d423f9c344e\": container with ID starting with a2ab68a8c2696020b017714aa1833ed60814fdb2f69f946d2f292d423f9c344e not found: ID does not exist" containerID="a2ab68a8c2696020b017714aa1833ed60814fdb2f69f946d2f292d423f9c344e"
Nov 29 08:45:23 crc kubenswrapper[4795]: I1129 08:45:23.424291 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2ab68a8c2696020b017714aa1833ed60814fdb2f69f946d2f292d423f9c344e"} err="failed to get container status \"a2ab68a8c2696020b017714aa1833ed60814fdb2f69f946d2f292d423f9c344e\": rpc error: code = NotFound desc = could not find container \"a2ab68a8c2696020b017714aa1833ed60814fdb2f69f946d2f292d423f9c344e\": container with ID starting with a2ab68a8c2696020b017714aa1833ed60814fdb2f69f946d2f292d423f9c344e not found: ID does not exist"
Nov 29 08:45:24 crc kubenswrapper[4795]: I1129 08:45:24.292287 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef520b02-4e02-4b02-a213-51cbaa7d1cb0" path="/var/lib/kubelet/pods/ef520b02-4e02-4b02-a213-51cbaa7d1cb0/volumes"
Nov 29 08:45:33 crc kubenswrapper[4795]: I1129 08:45:33.276793 4795 scope.go:117] "RemoveContainer" containerID="5030ff85ae40c73022065ee3f64b116b77c5909a9198a4179ecaaeacf1816833"
Nov 29 08:45:33 crc kubenswrapper[4795]: E1129 08:45:33.277752 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1"
Nov 29 08:45:33 crc kubenswrapper[4795]: I1129 08:45:33.559989 4795 scope.go:117] "RemoveContainer" containerID="79cb6e09ec5b12f7f4c8531192768dd4d9d4e49ca016da21508d472afe272124"
Nov 29 08:45:47 crc kubenswrapper[4795]: I1129 08:45:47.291335 4795 scope.go:117] "RemoveContainer" containerID="5030ff85ae40c73022065ee3f64b116b77c5909a9198a4179ecaaeacf1816833"
Nov 29 08:45:47 crc kubenswrapper[4795]: E1129 08:45:47.293017 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1"
Nov 29 08:45:53 crc kubenswrapper[4795]: I1129 08:45:53.523241 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-76r8t"]
Nov 29 08:45:53 crc kubenswrapper[4795]: E1129 08:45:53.524332 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00916148-0906-47ee-b3f1-b243256135ed" containerName="collect-profiles"
Nov 29 08:45:53 crc kubenswrapper[4795]: I1129 08:45:53.524352 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="00916148-0906-47ee-b3f1-b243256135ed" containerName="collect-profiles"
Nov 29 08:45:53 crc kubenswrapper[4795]: E1129 08:45:53.524376 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef520b02-4e02-4b02-a213-51cbaa7d1cb0" containerName="extract-content"
Nov 29 08:45:53 crc kubenswrapper[4795]: I1129 08:45:53.524384 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef520b02-4e02-4b02-a213-51cbaa7d1cb0" containerName="extract-content"
Nov 29 08:45:53 crc kubenswrapper[4795]: E1129 08:45:53.524416 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef520b02-4e02-4b02-a213-51cbaa7d1cb0" containerName="extract-utilities"
Nov 29 08:45:53 crc kubenswrapper[4795]: I1129 08:45:53.524423 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef520b02-4e02-4b02-a213-51cbaa7d1cb0" containerName="extract-utilities"
Nov 29 08:45:53 crc kubenswrapper[4795]: E1129 08:45:53.524444 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef520b02-4e02-4b02-a213-51cbaa7d1cb0" containerName="registry-server"
Nov 29 08:45:53 crc kubenswrapper[4795]: I1129 08:45:53.524450 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef520b02-4e02-4b02-a213-51cbaa7d1cb0" containerName="registry-server"
Nov 29 08:45:53 crc kubenswrapper[4795]: I1129 08:45:53.524692 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="00916148-0906-47ee-b3f1-b243256135ed" containerName="collect-profiles"
Nov 29 08:45:53 crc kubenswrapper[4795]: I1129 08:45:53.524709 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef520b02-4e02-4b02-a213-51cbaa7d1cb0" containerName="registry-server"
Nov 29 08:45:53 crc kubenswrapper[4795]: I1129 08:45:53.526499 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-76r8t"
Nov 29 08:45:53 crc kubenswrapper[4795]: I1129 08:45:53.545282 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-76r8t"]
Nov 29 08:45:53 crc kubenswrapper[4795]: I1129 08:45:53.720017 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c816084-efe3-45a4-9de6-09070bceb8b0-catalog-content\") pod \"redhat-operators-76r8t\" (UID: \"6c816084-efe3-45a4-9de6-09070bceb8b0\") " pod="openshift-marketplace/redhat-operators-76r8t"
Nov 29 08:45:53 crc kubenswrapper[4795]: I1129 08:45:53.720144 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c816084-efe3-45a4-9de6-09070bceb8b0-utilities\") pod \"redhat-operators-76r8t\" (UID: \"6c816084-efe3-45a4-9de6-09070bceb8b0\") " pod="openshift-marketplace/redhat-operators-76r8t"
Nov 29 08:45:53 crc kubenswrapper[4795]: I1129 08:45:53.720266 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d48lm\" (UniqueName: \"kubernetes.io/projected/6c816084-efe3-45a4-9de6-09070bceb8b0-kube-api-access-d48lm\") pod \"redhat-operators-76r8t\" (UID: \"6c816084-efe3-45a4-9de6-09070bceb8b0\") " pod="openshift-marketplace/redhat-operators-76r8t"
Nov 29 08:45:53 crc kubenswrapper[4795]: I1129 08:45:53.822822 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c816084-efe3-45a4-9de6-09070bceb8b0-catalog-content\") pod \"redhat-operators-76r8t\" (UID: \"6c816084-efe3-45a4-9de6-09070bceb8b0\") " pod="openshift-marketplace/redhat-operators-76r8t"
Nov 29 08:45:53 crc kubenswrapper[4795]: I1129 08:45:53.822907 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c816084-efe3-45a4-9de6-09070bceb8b0-utilities\") pod \"redhat-operators-76r8t\" (UID: \"6c816084-efe3-45a4-9de6-09070bceb8b0\") " pod="openshift-marketplace/redhat-operators-76r8t"
Nov 29 08:45:53 crc kubenswrapper[4795]: I1129 08:45:53.822988 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d48lm\" (UniqueName: \"kubernetes.io/projected/6c816084-efe3-45a4-9de6-09070bceb8b0-kube-api-access-d48lm\") pod \"redhat-operators-76r8t\" (UID: \"6c816084-efe3-45a4-9de6-09070bceb8b0\") " pod="openshift-marketplace/redhat-operators-76r8t"
Nov 29 08:45:53 crc kubenswrapper[4795]: I1129 08:45:53.823418 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c816084-efe3-45a4-9de6-09070bceb8b0-catalog-content\") pod \"redhat-operators-76r8t\" (UID: \"6c816084-efe3-45a4-9de6-09070bceb8b0\") " pod="openshift-marketplace/redhat-operators-76r8t"
Nov 29 08:45:53 crc kubenswrapper[4795]: I1129 08:45:53.823457 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c816084-efe3-45a4-9de6-09070bceb8b0-utilities\") pod \"redhat-operators-76r8t\" (UID: \"6c816084-efe3-45a4-9de6-09070bceb8b0\") " pod="openshift-marketplace/redhat-operators-76r8t"
Nov 29 08:45:53 crc kubenswrapper[4795]: I1129 08:45:53.854902 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d48lm\" (UniqueName: \"kubernetes.io/projected/6c816084-efe3-45a4-9de6-09070bceb8b0-kube-api-access-d48lm\") pod \"redhat-operators-76r8t\" (UID: \"6c816084-efe3-45a4-9de6-09070bceb8b0\") " pod="openshift-marketplace/redhat-operators-76r8t"
Nov 29 08:45:53 crc kubenswrapper[4795]: I1129 08:45:53.869336 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-76r8t"
Nov 29 08:45:54 crc kubenswrapper[4795]: I1129 08:45:54.357909 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-76r8t"]
Nov 29 08:45:54 crc kubenswrapper[4795]: I1129 08:45:54.737132 4795 generic.go:334] "Generic (PLEG): container finished" podID="6c816084-efe3-45a4-9de6-09070bceb8b0" containerID="982a07bac50ec11ab4075d152d958c25f0b7565034bb73649943b93e4d089bbd" exitCode=0
Nov 29 08:45:54 crc kubenswrapper[4795]: I1129 08:45:54.737227 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-76r8t" event={"ID":"6c816084-efe3-45a4-9de6-09070bceb8b0","Type":"ContainerDied","Data":"982a07bac50ec11ab4075d152d958c25f0b7565034bb73649943b93e4d089bbd"}
Nov 29 08:45:54 crc kubenswrapper[4795]: I1129 08:45:54.737397 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-76r8t" event={"ID":"6c816084-efe3-45a4-9de6-09070bceb8b0","Type":"ContainerStarted","Data":"d29670ea9d207fb6eef2bdc03d4cc834e803fee4d930a5044a781fdc0549c6d9"}
Nov 29 08:45:55 crc kubenswrapper[4795]: I1129 08:45:55.751703 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-76r8t" event={"ID":"6c816084-efe3-45a4-9de6-09070bceb8b0","Type":"ContainerStarted","Data":"dd3cdd0eee70e5e9582fd719520331b0adcd6a50d41116d1e3e0678fddf6f758"}
Nov 29 08:45:59 crc kubenswrapper[4795]: I1129 08:45:59.809443 4795 generic.go:334] "Generic (PLEG): container finished" podID="6c816084-efe3-45a4-9de6-09070bceb8b0" containerID="dd3cdd0eee70e5e9582fd719520331b0adcd6a50d41116d1e3e0678fddf6f758" exitCode=0
Nov 29 08:45:59 crc kubenswrapper[4795]: I1129 08:45:59.809543 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-76r8t" event={"ID":"6c816084-efe3-45a4-9de6-09070bceb8b0","Type":"ContainerDied","Data":"dd3cdd0eee70e5e9582fd719520331b0adcd6a50d41116d1e3e0678fddf6f758"}
Nov 29 08:46:01 crc kubenswrapper[4795]: I1129 08:46:01.280770 4795 scope.go:117] "RemoveContainer" containerID="5030ff85ae40c73022065ee3f64b116b77c5909a9198a4179ecaaeacf1816833"
Nov 29 08:46:01 crc kubenswrapper[4795]: E1129 08:46:01.281695 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1"
Nov 29 08:46:01 crc kubenswrapper[4795]: I1129 08:46:01.843954 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-76r8t" event={"ID":"6c816084-efe3-45a4-9de6-09070bceb8b0","Type":"ContainerStarted","Data":"fb583c5f12c3092c6b75e25bd464677a85fc4d0cebccf2d46ff1cca8b8e241af"}
Nov 29 08:46:01 crc kubenswrapper[4795]: I1129 08:46:01.876239 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-76r8t" podStartSLOduration=2.672559869 podStartE2EDuration="8.876217901s" podCreationTimestamp="2025-11-29 08:45:53 +0000 UTC" firstStartedPulling="2025-11-29 08:45:54.740490389 +0000 UTC m=+4000.716066179" lastFinishedPulling="2025-11-29 08:46:00.944148381 +0000 UTC m=+4006.919724211" observedRunningTime="2025-11-29 08:46:01.864019645 +0000 UTC m=+4007.839595435" watchObservedRunningTime="2025-11-29 08:46:01.876217901 +0000 UTC m=+4007.851793691"
Nov 29 08:46:03 crc kubenswrapper[4795]: I1129 08:46:03.869789 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-76r8t"
Nov 29 08:46:03 crc kubenswrapper[4795]: I1129 08:46:03.870387 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-76r8t"
Nov 29 08:46:04 crc kubenswrapper[4795]: I1129 08:46:04.940995 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-76r8t" podUID="6c816084-efe3-45a4-9de6-09070bceb8b0" containerName="registry-server" probeResult="failure" output=<
Nov 29 08:46:04 crc kubenswrapper[4795]: timeout: failed to connect service ":50051" within 1s
Nov 29 08:46:04 crc kubenswrapper[4795]: >
Nov 29 08:46:14 crc kubenswrapper[4795]: I1129 08:46:14.931280 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-76r8t" podUID="6c816084-efe3-45a4-9de6-09070bceb8b0" containerName="registry-server" probeResult="failure" output=<
Nov 29 08:46:14 crc kubenswrapper[4795]: timeout: failed to connect service ":50051" within 1s
Nov 29 08:46:14 crc kubenswrapper[4795]: >
Nov 29 08:46:16 crc kubenswrapper[4795]: I1129 08:46:16.275938 4795 scope.go:117] "RemoveContainer" containerID="5030ff85ae40c73022065ee3f64b116b77c5909a9198a4179ecaaeacf1816833"
Nov 29 08:46:16 crc kubenswrapper[4795]: E1129 08:46:16.276619 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1"
Nov 29 08:46:23 crc kubenswrapper[4795]: I1129 08:46:23.940529 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-76r8t"
Nov 29 08:46:24 crc kubenswrapper[4795]: I1129 08:46:24.023604 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-76r8t"
Nov 29 08:46:24 crc kubenswrapper[4795]: I1129 08:46:24.723782 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-76r8t"]
Nov 29 08:46:25 crc kubenswrapper[4795]: I1129 08:46:25.103109 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-76r8t" podUID="6c816084-efe3-45a4-9de6-09070bceb8b0" containerName="registry-server" containerID="cri-o://fb583c5f12c3092c6b75e25bd464677a85fc4d0cebccf2d46ff1cca8b8e241af" gracePeriod=2
Nov 29 08:46:25 crc kubenswrapper[4795]: I1129 08:46:25.825673 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-76r8t"
Nov 29 08:46:25 crc kubenswrapper[4795]: I1129 08:46:25.992330 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d48lm\" (UniqueName: \"kubernetes.io/projected/6c816084-efe3-45a4-9de6-09070bceb8b0-kube-api-access-d48lm\") pod \"6c816084-efe3-45a4-9de6-09070bceb8b0\" (UID: \"6c816084-efe3-45a4-9de6-09070bceb8b0\") "
Nov 29 08:46:25 crc kubenswrapper[4795]: I1129 08:46:25.992502 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c816084-efe3-45a4-9de6-09070bceb8b0-catalog-content\") pod \"6c816084-efe3-45a4-9de6-09070bceb8b0\" (UID: \"6c816084-efe3-45a4-9de6-09070bceb8b0\") "
Nov 29 08:46:25 crc kubenswrapper[4795]: I1129 08:46:25.992531 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c816084-efe3-45a4-9de6-09070bceb8b0-utilities\") pod \"6c816084-efe3-45a4-9de6-09070bceb8b0\" (UID: \"6c816084-efe3-45a4-9de6-09070bceb8b0\") "
Nov 29 08:46:25 crc kubenswrapper[4795]: I1129 08:46:25.994432 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c816084-efe3-45a4-9de6-09070bceb8b0-utilities" (OuterVolumeSpecName: "utilities") pod "6c816084-efe3-45a4-9de6-09070bceb8b0" (UID: "6c816084-efe3-45a4-9de6-09070bceb8b0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 08:46:26 crc kubenswrapper[4795]: I1129 08:46:26.001707 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c816084-efe3-45a4-9de6-09070bceb8b0-kube-api-access-d48lm" (OuterVolumeSpecName: "kube-api-access-d48lm") pod "6c816084-efe3-45a4-9de6-09070bceb8b0" (UID: "6c816084-efe3-45a4-9de6-09070bceb8b0"). InnerVolumeSpecName "kube-api-access-d48lm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 08:46:26 crc kubenswrapper[4795]: I1129 08:46:26.096218 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c816084-efe3-45a4-9de6-09070bceb8b0-utilities\") on node \"crc\" DevicePath \"\""
Nov 29 08:46:26 crc kubenswrapper[4795]: I1129 08:46:26.096266 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d48lm\" (UniqueName: \"kubernetes.io/projected/6c816084-efe3-45a4-9de6-09070bceb8b0-kube-api-access-d48lm\") on node \"crc\" DevicePath \"\""
Nov 29 08:46:26 crc kubenswrapper[4795]: I1129 08:46:26.115750 4795 generic.go:334] "Generic (PLEG): container finished" podID="6c816084-efe3-45a4-9de6-09070bceb8b0" containerID="fb583c5f12c3092c6b75e25bd464677a85fc4d0cebccf2d46ff1cca8b8e241af" exitCode=0
Nov 29 08:46:26 crc kubenswrapper[4795]: I1129 08:46:26.115810 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-76r8t" event={"ID":"6c816084-efe3-45a4-9de6-09070bceb8b0","Type":"ContainerDied","Data":"fb583c5f12c3092c6b75e25bd464677a85fc4d0cebccf2d46ff1cca8b8e241af"}
Nov 29 08:46:26 crc kubenswrapper[4795]: I1129 08:46:26.115848 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-76r8t" event={"ID":"6c816084-efe3-45a4-9de6-09070bceb8b0","Type":"ContainerDied","Data":"d29670ea9d207fb6eef2bdc03d4cc834e803fee4d930a5044a781fdc0549c6d9"}
Nov 29 08:46:26 crc kubenswrapper[4795]: I1129 08:46:26.115864 4795 scope.go:117] "RemoveContainer" containerID="fb583c5f12c3092c6b75e25bd464677a85fc4d0cebccf2d46ff1cca8b8e241af"
Nov 29 08:46:26 crc kubenswrapper[4795]: I1129 08:46:26.115889 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-76r8t"
Nov 29 08:46:26 crc kubenswrapper[4795]: I1129 08:46:26.122064 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c816084-efe3-45a4-9de6-09070bceb8b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6c816084-efe3-45a4-9de6-09070bceb8b0" (UID: "6c816084-efe3-45a4-9de6-09070bceb8b0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 08:46:26 crc kubenswrapper[4795]: I1129 08:46:26.157387 4795 scope.go:117] "RemoveContainer" containerID="dd3cdd0eee70e5e9582fd719520331b0adcd6a50d41116d1e3e0678fddf6f758"
Nov 29 08:46:26 crc kubenswrapper[4795]: I1129 08:46:26.198707 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c816084-efe3-45a4-9de6-09070bceb8b0-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 29 08:46:26 crc kubenswrapper[4795]: I1129 08:46:26.203321 4795 scope.go:117] "RemoveContainer" containerID="982a07bac50ec11ab4075d152d958c25f0b7565034bb73649943b93e4d089bbd"
Nov 29 08:46:26 crc kubenswrapper[4795]: I1129 08:46:26.234751 4795 scope.go:117] "RemoveContainer" containerID="fb583c5f12c3092c6b75e25bd464677a85fc4d0cebccf2d46ff1cca8b8e241af"
Nov 29 08:46:26 crc kubenswrapper[4795]: E1129 08:46:26.235101 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"fb583c5f12c3092c6b75e25bd464677a85fc4d0cebccf2d46ff1cca8b8e241af\": container with ID starting with fb583c5f12c3092c6b75e25bd464677a85fc4d0cebccf2d46ff1cca8b8e241af not found: ID does not exist" containerID="fb583c5f12c3092c6b75e25bd464677a85fc4d0cebccf2d46ff1cca8b8e241af" Nov 29 08:46:26 crc kubenswrapper[4795]: I1129 08:46:26.235139 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb583c5f12c3092c6b75e25bd464677a85fc4d0cebccf2d46ff1cca8b8e241af"} err="failed to get container status \"fb583c5f12c3092c6b75e25bd464677a85fc4d0cebccf2d46ff1cca8b8e241af\": rpc error: code = NotFound desc = could not find container \"fb583c5f12c3092c6b75e25bd464677a85fc4d0cebccf2d46ff1cca8b8e241af\": container with ID starting with fb583c5f12c3092c6b75e25bd464677a85fc4d0cebccf2d46ff1cca8b8e241af not found: ID does not exist" Nov 29 08:46:26 crc kubenswrapper[4795]: I1129 08:46:26.235165 4795 scope.go:117] "RemoveContainer" containerID="dd3cdd0eee70e5e9582fd719520331b0adcd6a50d41116d1e3e0678fddf6f758" Nov 29 08:46:26 crc kubenswrapper[4795]: E1129 08:46:26.235381 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd3cdd0eee70e5e9582fd719520331b0adcd6a50d41116d1e3e0678fddf6f758\": container with ID starting with dd3cdd0eee70e5e9582fd719520331b0adcd6a50d41116d1e3e0678fddf6f758 not found: ID does not exist" containerID="dd3cdd0eee70e5e9582fd719520331b0adcd6a50d41116d1e3e0678fddf6f758" Nov 29 08:46:26 crc kubenswrapper[4795]: I1129 08:46:26.235418 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd3cdd0eee70e5e9582fd719520331b0adcd6a50d41116d1e3e0678fddf6f758"} err="failed to get container status \"dd3cdd0eee70e5e9582fd719520331b0adcd6a50d41116d1e3e0678fddf6f758\": rpc error: code = NotFound desc = could not find container \"dd3cdd0eee70e5e9582fd719520331b0adcd6a50d41116d1e3e0678fddf6f758\": 
container with ID starting with dd3cdd0eee70e5e9582fd719520331b0adcd6a50d41116d1e3e0678fddf6f758 not found: ID does not exist" Nov 29 08:46:26 crc kubenswrapper[4795]: I1129 08:46:26.235437 4795 scope.go:117] "RemoveContainer" containerID="982a07bac50ec11ab4075d152d958c25f0b7565034bb73649943b93e4d089bbd" Nov 29 08:46:26 crc kubenswrapper[4795]: E1129 08:46:26.235695 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"982a07bac50ec11ab4075d152d958c25f0b7565034bb73649943b93e4d089bbd\": container with ID starting with 982a07bac50ec11ab4075d152d958c25f0b7565034bb73649943b93e4d089bbd not found: ID does not exist" containerID="982a07bac50ec11ab4075d152d958c25f0b7565034bb73649943b93e4d089bbd" Nov 29 08:46:26 crc kubenswrapper[4795]: I1129 08:46:26.235725 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"982a07bac50ec11ab4075d152d958c25f0b7565034bb73649943b93e4d089bbd"} err="failed to get container status \"982a07bac50ec11ab4075d152d958c25f0b7565034bb73649943b93e4d089bbd\": rpc error: code = NotFound desc = could not find container \"982a07bac50ec11ab4075d152d958c25f0b7565034bb73649943b93e4d089bbd\": container with ID starting with 982a07bac50ec11ab4075d152d958c25f0b7565034bb73649943b93e4d089bbd not found: ID does not exist" Nov 29 08:46:26 crc kubenswrapper[4795]: I1129 08:46:26.455390 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-76r8t"] Nov 29 08:46:26 crc kubenswrapper[4795]: I1129 08:46:26.469248 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-76r8t"] Nov 29 08:46:28 crc kubenswrapper[4795]: I1129 08:46:28.292725 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c816084-efe3-45a4-9de6-09070bceb8b0" path="/var/lib/kubelet/pods/6c816084-efe3-45a4-9de6-09070bceb8b0/volumes" Nov 29 08:46:29 crc kubenswrapper[4795]: 
I1129 08:46:29.277931 4795 scope.go:117] "RemoveContainer" containerID="5030ff85ae40c73022065ee3f64b116b77c5909a9198a4179ecaaeacf1816833" Nov 29 08:46:29 crc kubenswrapper[4795]: E1129 08:46:29.278812 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:46:29 crc kubenswrapper[4795]: I1129 08:46:29.321133 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mr82c"] Nov 29 08:46:29 crc kubenswrapper[4795]: E1129 08:46:29.322031 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c816084-efe3-45a4-9de6-09070bceb8b0" containerName="registry-server" Nov 29 08:46:29 crc kubenswrapper[4795]: I1129 08:46:29.322047 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c816084-efe3-45a4-9de6-09070bceb8b0" containerName="registry-server" Nov 29 08:46:29 crc kubenswrapper[4795]: E1129 08:46:29.322074 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c816084-efe3-45a4-9de6-09070bceb8b0" containerName="extract-content" Nov 29 08:46:29 crc kubenswrapper[4795]: I1129 08:46:29.322081 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c816084-efe3-45a4-9de6-09070bceb8b0" containerName="extract-content" Nov 29 08:46:29 crc kubenswrapper[4795]: E1129 08:46:29.322093 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c816084-efe3-45a4-9de6-09070bceb8b0" containerName="extract-utilities" Nov 29 08:46:29 crc kubenswrapper[4795]: I1129 08:46:29.322103 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c816084-efe3-45a4-9de6-09070bceb8b0" containerName="extract-utilities" Nov 
29 08:46:29 crc kubenswrapper[4795]: I1129 08:46:29.322363 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c816084-efe3-45a4-9de6-09070bceb8b0" containerName="registry-server" Nov 29 08:46:29 crc kubenswrapper[4795]: I1129 08:46:29.324186 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mr82c" Nov 29 08:46:29 crc kubenswrapper[4795]: I1129 08:46:29.338632 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mr82c"] Nov 29 08:46:29 crc kubenswrapper[4795]: I1129 08:46:29.378694 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4692dfd-7e48-428b-b493-bdea9b6d21e8-catalog-content\") pod \"certified-operators-mr82c\" (UID: \"b4692dfd-7e48-428b-b493-bdea9b6d21e8\") " pod="openshift-marketplace/certified-operators-mr82c" Nov 29 08:46:29 crc kubenswrapper[4795]: I1129 08:46:29.379116 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22jd9\" (UniqueName: \"kubernetes.io/projected/b4692dfd-7e48-428b-b493-bdea9b6d21e8-kube-api-access-22jd9\") pod \"certified-operators-mr82c\" (UID: \"b4692dfd-7e48-428b-b493-bdea9b6d21e8\") " pod="openshift-marketplace/certified-operators-mr82c" Nov 29 08:46:29 crc kubenswrapper[4795]: I1129 08:46:29.379157 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4692dfd-7e48-428b-b493-bdea9b6d21e8-utilities\") pod \"certified-operators-mr82c\" (UID: \"b4692dfd-7e48-428b-b493-bdea9b6d21e8\") " pod="openshift-marketplace/certified-operators-mr82c" Nov 29 08:46:29 crc kubenswrapper[4795]: I1129 08:46:29.482036 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22jd9\" (UniqueName: 
\"kubernetes.io/projected/b4692dfd-7e48-428b-b493-bdea9b6d21e8-kube-api-access-22jd9\") pod \"certified-operators-mr82c\" (UID: \"b4692dfd-7e48-428b-b493-bdea9b6d21e8\") " pod="openshift-marketplace/certified-operators-mr82c" Nov 29 08:46:29 crc kubenswrapper[4795]: I1129 08:46:29.482086 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4692dfd-7e48-428b-b493-bdea9b6d21e8-utilities\") pod \"certified-operators-mr82c\" (UID: \"b4692dfd-7e48-428b-b493-bdea9b6d21e8\") " pod="openshift-marketplace/certified-operators-mr82c" Nov 29 08:46:29 crc kubenswrapper[4795]: I1129 08:46:29.482174 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4692dfd-7e48-428b-b493-bdea9b6d21e8-catalog-content\") pod \"certified-operators-mr82c\" (UID: \"b4692dfd-7e48-428b-b493-bdea9b6d21e8\") " pod="openshift-marketplace/certified-operators-mr82c" Nov 29 08:46:29 crc kubenswrapper[4795]: I1129 08:46:29.482700 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4692dfd-7e48-428b-b493-bdea9b6d21e8-catalog-content\") pod \"certified-operators-mr82c\" (UID: \"b4692dfd-7e48-428b-b493-bdea9b6d21e8\") " pod="openshift-marketplace/certified-operators-mr82c" Nov 29 08:46:29 crc kubenswrapper[4795]: I1129 08:46:29.482927 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4692dfd-7e48-428b-b493-bdea9b6d21e8-utilities\") pod \"certified-operators-mr82c\" (UID: \"b4692dfd-7e48-428b-b493-bdea9b6d21e8\") " pod="openshift-marketplace/certified-operators-mr82c" Nov 29 08:46:29 crc kubenswrapper[4795]: I1129 08:46:29.896461 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22jd9\" (UniqueName: 
\"kubernetes.io/projected/b4692dfd-7e48-428b-b493-bdea9b6d21e8-kube-api-access-22jd9\") pod \"certified-operators-mr82c\" (UID: \"b4692dfd-7e48-428b-b493-bdea9b6d21e8\") " pod="openshift-marketplace/certified-operators-mr82c" Nov 29 08:46:29 crc kubenswrapper[4795]: I1129 08:46:29.957668 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mr82c" Nov 29 08:46:30 crc kubenswrapper[4795]: I1129 08:46:30.540909 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mr82c"] Nov 29 08:46:31 crc kubenswrapper[4795]: I1129 08:46:31.196768 4795 generic.go:334] "Generic (PLEG): container finished" podID="b4692dfd-7e48-428b-b493-bdea9b6d21e8" containerID="0f5ae118b646cf50e25c39ed40490521fd5b059c7581c20df9bfb7dd75b53e96" exitCode=0 Nov 29 08:46:31 crc kubenswrapper[4795]: I1129 08:46:31.196836 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mr82c" event={"ID":"b4692dfd-7e48-428b-b493-bdea9b6d21e8","Type":"ContainerDied","Data":"0f5ae118b646cf50e25c39ed40490521fd5b059c7581c20df9bfb7dd75b53e96"} Nov 29 08:46:31 crc kubenswrapper[4795]: I1129 08:46:31.197158 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mr82c" event={"ID":"b4692dfd-7e48-428b-b493-bdea9b6d21e8","Type":"ContainerStarted","Data":"0850066b6d65d43ffdba71fea289c16201dae0e240133828aca1f28edd800a90"} Nov 29 08:46:32 crc kubenswrapper[4795]: I1129 08:46:32.213150 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mr82c" event={"ID":"b4692dfd-7e48-428b-b493-bdea9b6d21e8","Type":"ContainerStarted","Data":"f5c94136a09a50c25d2d4197a294d7e0ea9edbd6fbde84c62cc2e97a07c5bbbc"} Nov 29 08:46:33 crc kubenswrapper[4795]: I1129 08:46:33.228245 4795 generic.go:334] "Generic (PLEG): container finished" podID="b4692dfd-7e48-428b-b493-bdea9b6d21e8" 
containerID="f5c94136a09a50c25d2d4197a294d7e0ea9edbd6fbde84c62cc2e97a07c5bbbc" exitCode=0 Nov 29 08:46:33 crc kubenswrapper[4795]: I1129 08:46:33.228388 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mr82c" event={"ID":"b4692dfd-7e48-428b-b493-bdea9b6d21e8","Type":"ContainerDied","Data":"f5c94136a09a50c25d2d4197a294d7e0ea9edbd6fbde84c62cc2e97a07c5bbbc"} Nov 29 08:46:34 crc kubenswrapper[4795]: I1129 08:46:34.246250 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mr82c" event={"ID":"b4692dfd-7e48-428b-b493-bdea9b6d21e8","Type":"ContainerStarted","Data":"e4d2d6fc36541b267ea28baebd1d30c7313dc2a1df14464645b706137f539041"} Nov 29 08:46:34 crc kubenswrapper[4795]: I1129 08:46:34.285196 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mr82c" podStartSLOduration=2.8529273550000003 podStartE2EDuration="5.285174124s" podCreationTimestamp="2025-11-29 08:46:29 +0000 UTC" firstStartedPulling="2025-11-29 08:46:31.199798145 +0000 UTC m=+4037.175373955" lastFinishedPulling="2025-11-29 08:46:33.632044934 +0000 UTC m=+4039.607620724" observedRunningTime="2025-11-29 08:46:34.268492661 +0000 UTC m=+4040.244068451" watchObservedRunningTime="2025-11-29 08:46:34.285174124 +0000 UTC m=+4040.260749914" Nov 29 08:46:39 crc kubenswrapper[4795]: I1129 08:46:39.959192 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mr82c" Nov 29 08:46:39 crc kubenswrapper[4795]: I1129 08:46:39.959793 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mr82c" Nov 29 08:46:40 crc kubenswrapper[4795]: I1129 08:46:40.036398 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mr82c" Nov 29 08:46:40 crc kubenswrapper[4795]: I1129 
08:46:40.424676 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mr82c" Nov 29 08:46:40 crc kubenswrapper[4795]: I1129 08:46:40.485701 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mr82c"] Nov 29 08:46:42 crc kubenswrapper[4795]: I1129 08:46:42.360558 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mr82c" podUID="b4692dfd-7e48-428b-b493-bdea9b6d21e8" containerName="registry-server" containerID="cri-o://e4d2d6fc36541b267ea28baebd1d30c7313dc2a1df14464645b706137f539041" gracePeriod=2 Nov 29 08:46:42 crc kubenswrapper[4795]: I1129 08:46:42.944690 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mr82c" Nov 29 08:46:43 crc kubenswrapper[4795]: I1129 08:46:43.055567 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4692dfd-7e48-428b-b493-bdea9b6d21e8-utilities\") pod \"b4692dfd-7e48-428b-b493-bdea9b6d21e8\" (UID: \"b4692dfd-7e48-428b-b493-bdea9b6d21e8\") " Nov 29 08:46:43 crc kubenswrapper[4795]: I1129 08:46:43.055693 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4692dfd-7e48-428b-b493-bdea9b6d21e8-catalog-content\") pod \"b4692dfd-7e48-428b-b493-bdea9b6d21e8\" (UID: \"b4692dfd-7e48-428b-b493-bdea9b6d21e8\") " Nov 29 08:46:43 crc kubenswrapper[4795]: I1129 08:46:43.055794 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22jd9\" (UniqueName: \"kubernetes.io/projected/b4692dfd-7e48-428b-b493-bdea9b6d21e8-kube-api-access-22jd9\") pod \"b4692dfd-7e48-428b-b493-bdea9b6d21e8\" (UID: \"b4692dfd-7e48-428b-b493-bdea9b6d21e8\") " Nov 29 08:46:43 crc kubenswrapper[4795]: 
I1129 08:46:43.056445 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4692dfd-7e48-428b-b493-bdea9b6d21e8-utilities" (OuterVolumeSpecName: "utilities") pod "b4692dfd-7e48-428b-b493-bdea9b6d21e8" (UID: "b4692dfd-7e48-428b-b493-bdea9b6d21e8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:46:43 crc kubenswrapper[4795]: I1129 08:46:43.062284 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4692dfd-7e48-428b-b493-bdea9b6d21e8-kube-api-access-22jd9" (OuterVolumeSpecName: "kube-api-access-22jd9") pod "b4692dfd-7e48-428b-b493-bdea9b6d21e8" (UID: "b4692dfd-7e48-428b-b493-bdea9b6d21e8"). InnerVolumeSpecName "kube-api-access-22jd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:46:43 crc kubenswrapper[4795]: I1129 08:46:43.158430 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4692dfd-7e48-428b-b493-bdea9b6d21e8-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:46:43 crc kubenswrapper[4795]: I1129 08:46:43.158818 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22jd9\" (UniqueName: \"kubernetes.io/projected/b4692dfd-7e48-428b-b493-bdea9b6d21e8-kube-api-access-22jd9\") on node \"crc\" DevicePath \"\"" Nov 29 08:46:43 crc kubenswrapper[4795]: I1129 08:46:43.278536 4795 scope.go:117] "RemoveContainer" containerID="5030ff85ae40c73022065ee3f64b116b77c5909a9198a4179ecaaeacf1816833" Nov 29 08:46:43 crc kubenswrapper[4795]: E1129 08:46:43.278942 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:46:43 crc kubenswrapper[4795]: I1129 08:46:43.376628 4795 generic.go:334] "Generic (PLEG): container finished" podID="b4692dfd-7e48-428b-b493-bdea9b6d21e8" containerID="e4d2d6fc36541b267ea28baebd1d30c7313dc2a1df14464645b706137f539041" exitCode=0 Nov 29 08:46:43 crc kubenswrapper[4795]: I1129 08:46:43.376675 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mr82c" event={"ID":"b4692dfd-7e48-428b-b493-bdea9b6d21e8","Type":"ContainerDied","Data":"e4d2d6fc36541b267ea28baebd1d30c7313dc2a1df14464645b706137f539041"} Nov 29 08:46:43 crc kubenswrapper[4795]: I1129 08:46:43.376709 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mr82c" event={"ID":"b4692dfd-7e48-428b-b493-bdea9b6d21e8","Type":"ContainerDied","Data":"0850066b6d65d43ffdba71fea289c16201dae0e240133828aca1f28edd800a90"} Nov 29 08:46:43 crc kubenswrapper[4795]: I1129 08:46:43.376731 4795 scope.go:117] "RemoveContainer" containerID="e4d2d6fc36541b267ea28baebd1d30c7313dc2a1df14464645b706137f539041" Nov 29 08:46:43 crc kubenswrapper[4795]: I1129 08:46:43.376923 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mr82c" Nov 29 08:46:43 crc kubenswrapper[4795]: I1129 08:46:43.390872 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4692dfd-7e48-428b-b493-bdea9b6d21e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4692dfd-7e48-428b-b493-bdea9b6d21e8" (UID: "b4692dfd-7e48-428b-b493-bdea9b6d21e8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:46:43 crc kubenswrapper[4795]: I1129 08:46:43.402277 4795 scope.go:117] "RemoveContainer" containerID="f5c94136a09a50c25d2d4197a294d7e0ea9edbd6fbde84c62cc2e97a07c5bbbc" Nov 29 08:46:43 crc kubenswrapper[4795]: I1129 08:46:43.432951 4795 scope.go:117] "RemoveContainer" containerID="0f5ae118b646cf50e25c39ed40490521fd5b059c7581c20df9bfb7dd75b53e96" Nov 29 08:46:43 crc kubenswrapper[4795]: I1129 08:46:43.465249 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4692dfd-7e48-428b-b493-bdea9b6d21e8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:46:43 crc kubenswrapper[4795]: I1129 08:46:43.499359 4795 scope.go:117] "RemoveContainer" containerID="e4d2d6fc36541b267ea28baebd1d30c7313dc2a1df14464645b706137f539041" Nov 29 08:46:43 crc kubenswrapper[4795]: E1129 08:46:43.500071 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4d2d6fc36541b267ea28baebd1d30c7313dc2a1df14464645b706137f539041\": container with ID starting with e4d2d6fc36541b267ea28baebd1d30c7313dc2a1df14464645b706137f539041 not found: ID does not exist" containerID="e4d2d6fc36541b267ea28baebd1d30c7313dc2a1df14464645b706137f539041" Nov 29 08:46:43 crc kubenswrapper[4795]: I1129 08:46:43.500112 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4d2d6fc36541b267ea28baebd1d30c7313dc2a1df14464645b706137f539041"} err="failed to get container status \"e4d2d6fc36541b267ea28baebd1d30c7313dc2a1df14464645b706137f539041\": rpc error: code = NotFound desc = could not find container \"e4d2d6fc36541b267ea28baebd1d30c7313dc2a1df14464645b706137f539041\": container with ID starting with e4d2d6fc36541b267ea28baebd1d30c7313dc2a1df14464645b706137f539041 not found: ID does not exist" Nov 29 08:46:43 crc kubenswrapper[4795]: I1129 08:46:43.500139 4795 
scope.go:117] "RemoveContainer" containerID="f5c94136a09a50c25d2d4197a294d7e0ea9edbd6fbde84c62cc2e97a07c5bbbc" Nov 29 08:46:43 crc kubenswrapper[4795]: E1129 08:46:43.500403 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5c94136a09a50c25d2d4197a294d7e0ea9edbd6fbde84c62cc2e97a07c5bbbc\": container with ID starting with f5c94136a09a50c25d2d4197a294d7e0ea9edbd6fbde84c62cc2e97a07c5bbbc not found: ID does not exist" containerID="f5c94136a09a50c25d2d4197a294d7e0ea9edbd6fbde84c62cc2e97a07c5bbbc" Nov 29 08:46:43 crc kubenswrapper[4795]: I1129 08:46:43.500434 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5c94136a09a50c25d2d4197a294d7e0ea9edbd6fbde84c62cc2e97a07c5bbbc"} err="failed to get container status \"f5c94136a09a50c25d2d4197a294d7e0ea9edbd6fbde84c62cc2e97a07c5bbbc\": rpc error: code = NotFound desc = could not find container \"f5c94136a09a50c25d2d4197a294d7e0ea9edbd6fbde84c62cc2e97a07c5bbbc\": container with ID starting with f5c94136a09a50c25d2d4197a294d7e0ea9edbd6fbde84c62cc2e97a07c5bbbc not found: ID does not exist" Nov 29 08:46:43 crc kubenswrapper[4795]: I1129 08:46:43.500453 4795 scope.go:117] "RemoveContainer" containerID="0f5ae118b646cf50e25c39ed40490521fd5b059c7581c20df9bfb7dd75b53e96" Nov 29 08:46:43 crc kubenswrapper[4795]: E1129 08:46:43.500673 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f5ae118b646cf50e25c39ed40490521fd5b059c7581c20df9bfb7dd75b53e96\": container with ID starting with 0f5ae118b646cf50e25c39ed40490521fd5b059c7581c20df9bfb7dd75b53e96 not found: ID does not exist" containerID="0f5ae118b646cf50e25c39ed40490521fd5b059c7581c20df9bfb7dd75b53e96" Nov 29 08:46:43 crc kubenswrapper[4795]: I1129 08:46:43.500699 4795 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0f5ae118b646cf50e25c39ed40490521fd5b059c7581c20df9bfb7dd75b53e96"} err="failed to get container status \"0f5ae118b646cf50e25c39ed40490521fd5b059c7581c20df9bfb7dd75b53e96\": rpc error: code = NotFound desc = could not find container \"0f5ae118b646cf50e25c39ed40490521fd5b059c7581c20df9bfb7dd75b53e96\": container with ID starting with 0f5ae118b646cf50e25c39ed40490521fd5b059c7581c20df9bfb7dd75b53e96 not found: ID does not exist" Nov 29 08:46:43 crc kubenswrapper[4795]: I1129 08:46:43.723920 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mr82c"] Nov 29 08:46:43 crc kubenswrapper[4795]: I1129 08:46:43.736278 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mr82c"] Nov 29 08:46:44 crc kubenswrapper[4795]: I1129 08:46:44.290882 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4692dfd-7e48-428b-b493-bdea9b6d21e8" path="/var/lib/kubelet/pods/b4692dfd-7e48-428b-b493-bdea9b6d21e8/volumes" Nov 29 08:46:55 crc kubenswrapper[4795]: I1129 08:46:55.275792 4795 scope.go:117] "RemoveContainer" containerID="5030ff85ae40c73022065ee3f64b116b77c5909a9198a4179ecaaeacf1816833" Nov 29 08:46:55 crc kubenswrapper[4795]: E1129 08:46:55.276515 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:47:08 crc kubenswrapper[4795]: I1129 08:47:08.276195 4795 scope.go:117] "RemoveContainer" containerID="5030ff85ae40c73022065ee3f64b116b77c5909a9198a4179ecaaeacf1816833" Nov 29 08:47:08 crc kubenswrapper[4795]: E1129 08:47:08.277079 4795 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:47:23 crc kubenswrapper[4795]: I1129 08:47:23.276671 4795 scope.go:117] "RemoveContainer" containerID="5030ff85ae40c73022065ee3f64b116b77c5909a9198a4179ecaaeacf1816833" Nov 29 08:47:23 crc kubenswrapper[4795]: E1129 08:47:23.278126 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:47:37 crc kubenswrapper[4795]: I1129 08:47:37.276438 4795 scope.go:117] "RemoveContainer" containerID="5030ff85ae40c73022065ee3f64b116b77c5909a9198a4179ecaaeacf1816833" Nov 29 08:47:37 crc kubenswrapper[4795]: E1129 08:47:37.277908 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:47:50 crc kubenswrapper[4795]: I1129 08:47:50.276080 4795 scope.go:117] "RemoveContainer" containerID="5030ff85ae40c73022065ee3f64b116b77c5909a9198a4179ecaaeacf1816833" Nov 29 08:47:50 crc kubenswrapper[4795]: E1129 08:47:50.276850 4795 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:48:02 crc kubenswrapper[4795]: I1129 08:48:02.624115 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4sk4c"] Nov 29 08:48:02 crc kubenswrapper[4795]: E1129 08:48:02.625114 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4692dfd-7e48-428b-b493-bdea9b6d21e8" containerName="extract-content" Nov 29 08:48:02 crc kubenswrapper[4795]: I1129 08:48:02.625129 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4692dfd-7e48-428b-b493-bdea9b6d21e8" containerName="extract-content" Nov 29 08:48:02 crc kubenswrapper[4795]: E1129 08:48:02.625148 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4692dfd-7e48-428b-b493-bdea9b6d21e8" containerName="registry-server" Nov 29 08:48:02 crc kubenswrapper[4795]: I1129 08:48:02.625156 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4692dfd-7e48-428b-b493-bdea9b6d21e8" containerName="registry-server" Nov 29 08:48:02 crc kubenswrapper[4795]: E1129 08:48:02.625171 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4692dfd-7e48-428b-b493-bdea9b6d21e8" containerName="extract-utilities" Nov 29 08:48:02 crc kubenswrapper[4795]: I1129 08:48:02.625177 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4692dfd-7e48-428b-b493-bdea9b6d21e8" containerName="extract-utilities" Nov 29 08:48:02 crc kubenswrapper[4795]: I1129 08:48:02.625427 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4692dfd-7e48-428b-b493-bdea9b6d21e8" containerName="registry-server" Nov 29 
08:48:02 crc kubenswrapper[4795]: I1129 08:48:02.627078 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4sk4c" Nov 29 08:48:02 crc kubenswrapper[4795]: I1129 08:48:02.646848 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4sk4c"] Nov 29 08:48:02 crc kubenswrapper[4795]: I1129 08:48:02.775237 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctl2l\" (UniqueName: \"kubernetes.io/projected/f9ff2115-e366-42ad-a732-119ddc1900b6-kube-api-access-ctl2l\") pod \"redhat-marketplace-4sk4c\" (UID: \"f9ff2115-e366-42ad-a732-119ddc1900b6\") " pod="openshift-marketplace/redhat-marketplace-4sk4c" Nov 29 08:48:02 crc kubenswrapper[4795]: I1129 08:48:02.775685 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9ff2115-e366-42ad-a732-119ddc1900b6-catalog-content\") pod \"redhat-marketplace-4sk4c\" (UID: \"f9ff2115-e366-42ad-a732-119ddc1900b6\") " pod="openshift-marketplace/redhat-marketplace-4sk4c" Nov 29 08:48:02 crc kubenswrapper[4795]: I1129 08:48:02.775947 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9ff2115-e366-42ad-a732-119ddc1900b6-utilities\") pod \"redhat-marketplace-4sk4c\" (UID: \"f9ff2115-e366-42ad-a732-119ddc1900b6\") " pod="openshift-marketplace/redhat-marketplace-4sk4c" Nov 29 08:48:02 crc kubenswrapper[4795]: I1129 08:48:02.879569 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9ff2115-e366-42ad-a732-119ddc1900b6-catalog-content\") pod \"redhat-marketplace-4sk4c\" (UID: \"f9ff2115-e366-42ad-a732-119ddc1900b6\") " pod="openshift-marketplace/redhat-marketplace-4sk4c" Nov 
29 08:48:02 crc kubenswrapper[4795]: I1129 08:48:02.879738 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9ff2115-e366-42ad-a732-119ddc1900b6-utilities\") pod \"redhat-marketplace-4sk4c\" (UID: \"f9ff2115-e366-42ad-a732-119ddc1900b6\") " pod="openshift-marketplace/redhat-marketplace-4sk4c" Nov 29 08:48:02 crc kubenswrapper[4795]: I1129 08:48:02.879817 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctl2l\" (UniqueName: \"kubernetes.io/projected/f9ff2115-e366-42ad-a732-119ddc1900b6-kube-api-access-ctl2l\") pod \"redhat-marketplace-4sk4c\" (UID: \"f9ff2115-e366-42ad-a732-119ddc1900b6\") " pod="openshift-marketplace/redhat-marketplace-4sk4c" Nov 29 08:48:02 crc kubenswrapper[4795]: I1129 08:48:02.881036 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9ff2115-e366-42ad-a732-119ddc1900b6-catalog-content\") pod \"redhat-marketplace-4sk4c\" (UID: \"f9ff2115-e366-42ad-a732-119ddc1900b6\") " pod="openshift-marketplace/redhat-marketplace-4sk4c" Nov 29 08:48:02 crc kubenswrapper[4795]: I1129 08:48:02.881269 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9ff2115-e366-42ad-a732-119ddc1900b6-utilities\") pod \"redhat-marketplace-4sk4c\" (UID: \"f9ff2115-e366-42ad-a732-119ddc1900b6\") " pod="openshift-marketplace/redhat-marketplace-4sk4c" Nov 29 08:48:02 crc kubenswrapper[4795]: I1129 08:48:02.921454 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctl2l\" (UniqueName: \"kubernetes.io/projected/f9ff2115-e366-42ad-a732-119ddc1900b6-kube-api-access-ctl2l\") pod \"redhat-marketplace-4sk4c\" (UID: \"f9ff2115-e366-42ad-a732-119ddc1900b6\") " pod="openshift-marketplace/redhat-marketplace-4sk4c" Nov 29 08:48:02 crc kubenswrapper[4795]: I1129 
08:48:02.955143 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4sk4c" Nov 29 08:48:03 crc kubenswrapper[4795]: I1129 08:48:03.564509 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4sk4c"] Nov 29 08:48:04 crc kubenswrapper[4795]: I1129 08:48:04.284311 4795 scope.go:117] "RemoveContainer" containerID="5030ff85ae40c73022065ee3f64b116b77c5909a9198a4179ecaaeacf1816833" Nov 29 08:48:04 crc kubenswrapper[4795]: E1129 08:48:04.284848 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:48:04 crc kubenswrapper[4795]: I1129 08:48:04.344397 4795 generic.go:334] "Generic (PLEG): container finished" podID="f9ff2115-e366-42ad-a732-119ddc1900b6" containerID="c07e9d475f510fbaaa0f7d273385fff89cade98a594cadfb6a72337ddea52011" exitCode=0 Nov 29 08:48:04 crc kubenswrapper[4795]: I1129 08:48:04.344527 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sk4c" event={"ID":"f9ff2115-e366-42ad-a732-119ddc1900b6","Type":"ContainerDied","Data":"c07e9d475f510fbaaa0f7d273385fff89cade98a594cadfb6a72337ddea52011"} Nov 29 08:48:04 crc kubenswrapper[4795]: I1129 08:48:04.344869 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sk4c" event={"ID":"f9ff2115-e366-42ad-a732-119ddc1900b6","Type":"ContainerStarted","Data":"ed1be21b073540ba8d2bea0b39c041fb7f6384a8e4d19f571b2a713a1df53abf"} Nov 29 08:48:06 crc kubenswrapper[4795]: I1129 08:48:06.368207 4795 generic.go:334] "Generic (PLEG): container 
finished" podID="f9ff2115-e366-42ad-a732-119ddc1900b6" containerID="afea0dcece1ebafc0e32c69aad2afa428c1afa3b94dff9a0eb64605f6bffc8e7" exitCode=0 Nov 29 08:48:06 crc kubenswrapper[4795]: I1129 08:48:06.368271 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sk4c" event={"ID":"f9ff2115-e366-42ad-a732-119ddc1900b6","Type":"ContainerDied","Data":"afea0dcece1ebafc0e32c69aad2afa428c1afa3b94dff9a0eb64605f6bffc8e7"} Nov 29 08:48:07 crc kubenswrapper[4795]: I1129 08:48:07.381153 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sk4c" event={"ID":"f9ff2115-e366-42ad-a732-119ddc1900b6","Type":"ContainerStarted","Data":"b35ade76c6f669941ce632400e4429f7305f4874989d91d3b0dd7a07048180db"} Nov 29 08:48:07 crc kubenswrapper[4795]: I1129 08:48:07.404489 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4sk4c" podStartSLOduration=2.89537224 podStartE2EDuration="5.404469769s" podCreationTimestamp="2025-11-29 08:48:02 +0000 UTC" firstStartedPulling="2025-11-29 08:48:04.34767678 +0000 UTC m=+4130.323252570" lastFinishedPulling="2025-11-29 08:48:06.856774309 +0000 UTC m=+4132.832350099" observedRunningTime="2025-11-29 08:48:07.403248704 +0000 UTC m=+4133.378824504" watchObservedRunningTime="2025-11-29 08:48:07.404469769 +0000 UTC m=+4133.380045559" Nov 29 08:48:12 crc kubenswrapper[4795]: I1129 08:48:12.956105 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4sk4c" Nov 29 08:48:12 crc kubenswrapper[4795]: I1129 08:48:12.958056 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4sk4c" Nov 29 08:48:13 crc kubenswrapper[4795]: I1129 08:48:13.810706 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4sk4c" Nov 29 
08:48:14 crc kubenswrapper[4795]: I1129 08:48:14.588216 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4sk4c" Nov 29 08:48:14 crc kubenswrapper[4795]: I1129 08:48:14.653815 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4sk4c"] Nov 29 08:48:15 crc kubenswrapper[4795]: I1129 08:48:15.276503 4795 scope.go:117] "RemoveContainer" containerID="5030ff85ae40c73022065ee3f64b116b77c5909a9198a4179ecaaeacf1816833" Nov 29 08:48:15 crc kubenswrapper[4795]: E1129 08:48:15.276916 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:48:16 crc kubenswrapper[4795]: I1129 08:48:16.560607 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4sk4c" podUID="f9ff2115-e366-42ad-a732-119ddc1900b6" containerName="registry-server" containerID="cri-o://b35ade76c6f669941ce632400e4429f7305f4874989d91d3b0dd7a07048180db" gracePeriod=2 Nov 29 08:48:17 crc kubenswrapper[4795]: I1129 08:48:17.185910 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4sk4c" Nov 29 08:48:17 crc kubenswrapper[4795]: I1129 08:48:17.349584 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9ff2115-e366-42ad-a732-119ddc1900b6-catalog-content\") pod \"f9ff2115-e366-42ad-a732-119ddc1900b6\" (UID: \"f9ff2115-e366-42ad-a732-119ddc1900b6\") " Nov 29 08:48:17 crc kubenswrapper[4795]: I1129 08:48:17.349667 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9ff2115-e366-42ad-a732-119ddc1900b6-utilities\") pod \"f9ff2115-e366-42ad-a732-119ddc1900b6\" (UID: \"f9ff2115-e366-42ad-a732-119ddc1900b6\") " Nov 29 08:48:17 crc kubenswrapper[4795]: I1129 08:48:17.349832 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctl2l\" (UniqueName: \"kubernetes.io/projected/f9ff2115-e366-42ad-a732-119ddc1900b6-kube-api-access-ctl2l\") pod \"f9ff2115-e366-42ad-a732-119ddc1900b6\" (UID: \"f9ff2115-e366-42ad-a732-119ddc1900b6\") " Nov 29 08:48:17 crc kubenswrapper[4795]: I1129 08:48:17.351174 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9ff2115-e366-42ad-a732-119ddc1900b6-utilities" (OuterVolumeSpecName: "utilities") pod "f9ff2115-e366-42ad-a732-119ddc1900b6" (UID: "f9ff2115-e366-42ad-a732-119ddc1900b6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:48:17 crc kubenswrapper[4795]: I1129 08:48:17.356305 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9ff2115-e366-42ad-a732-119ddc1900b6-kube-api-access-ctl2l" (OuterVolumeSpecName: "kube-api-access-ctl2l") pod "f9ff2115-e366-42ad-a732-119ddc1900b6" (UID: "f9ff2115-e366-42ad-a732-119ddc1900b6"). InnerVolumeSpecName "kube-api-access-ctl2l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:48:17 crc kubenswrapper[4795]: I1129 08:48:17.381869 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9ff2115-e366-42ad-a732-119ddc1900b6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9ff2115-e366-42ad-a732-119ddc1900b6" (UID: "f9ff2115-e366-42ad-a732-119ddc1900b6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:48:17 crc kubenswrapper[4795]: I1129 08:48:17.453139 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9ff2115-e366-42ad-a732-119ddc1900b6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:48:17 crc kubenswrapper[4795]: I1129 08:48:17.454121 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9ff2115-e366-42ad-a732-119ddc1900b6-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:48:17 crc kubenswrapper[4795]: I1129 08:48:17.454157 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctl2l\" (UniqueName: \"kubernetes.io/projected/f9ff2115-e366-42ad-a732-119ddc1900b6-kube-api-access-ctl2l\") on node \"crc\" DevicePath \"\"" Nov 29 08:48:17 crc kubenswrapper[4795]: I1129 08:48:17.575189 4795 generic.go:334] "Generic (PLEG): container finished" podID="f9ff2115-e366-42ad-a732-119ddc1900b6" containerID="b35ade76c6f669941ce632400e4429f7305f4874989d91d3b0dd7a07048180db" exitCode=0 Nov 29 08:48:17 crc kubenswrapper[4795]: I1129 08:48:17.575256 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sk4c" event={"ID":"f9ff2115-e366-42ad-a732-119ddc1900b6","Type":"ContainerDied","Data":"b35ade76c6f669941ce632400e4429f7305f4874989d91d3b0dd7a07048180db"} Nov 29 08:48:17 crc kubenswrapper[4795]: I1129 08:48:17.575313 4795 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-4sk4c" event={"ID":"f9ff2115-e366-42ad-a732-119ddc1900b6","Type":"ContainerDied","Data":"ed1be21b073540ba8d2bea0b39c041fb7f6384a8e4d19f571b2a713a1df53abf"} Nov 29 08:48:17 crc kubenswrapper[4795]: I1129 08:48:17.575344 4795 scope.go:117] "RemoveContainer" containerID="b35ade76c6f669941ce632400e4429f7305f4874989d91d3b0dd7a07048180db" Nov 29 08:48:17 crc kubenswrapper[4795]: I1129 08:48:17.576307 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4sk4c" Nov 29 08:48:17 crc kubenswrapper[4795]: I1129 08:48:17.607477 4795 scope.go:117] "RemoveContainer" containerID="afea0dcece1ebafc0e32c69aad2afa428c1afa3b94dff9a0eb64605f6bffc8e7" Nov 29 08:48:17 crc kubenswrapper[4795]: I1129 08:48:17.623574 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4sk4c"] Nov 29 08:48:17 crc kubenswrapper[4795]: I1129 08:48:17.633345 4795 scope.go:117] "RemoveContainer" containerID="c07e9d475f510fbaaa0f7d273385fff89cade98a594cadfb6a72337ddea52011" Nov 29 08:48:17 crc kubenswrapper[4795]: I1129 08:48:17.640214 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4sk4c"] Nov 29 08:48:17 crc kubenswrapper[4795]: I1129 08:48:17.696182 4795 scope.go:117] "RemoveContainer" containerID="b35ade76c6f669941ce632400e4429f7305f4874989d91d3b0dd7a07048180db" Nov 29 08:48:17 crc kubenswrapper[4795]: E1129 08:48:17.697052 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b35ade76c6f669941ce632400e4429f7305f4874989d91d3b0dd7a07048180db\": container with ID starting with b35ade76c6f669941ce632400e4429f7305f4874989d91d3b0dd7a07048180db not found: ID does not exist" containerID="b35ade76c6f669941ce632400e4429f7305f4874989d91d3b0dd7a07048180db" Nov 29 08:48:17 crc kubenswrapper[4795]: I1129 08:48:17.697088 4795 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b35ade76c6f669941ce632400e4429f7305f4874989d91d3b0dd7a07048180db"} err="failed to get container status \"b35ade76c6f669941ce632400e4429f7305f4874989d91d3b0dd7a07048180db\": rpc error: code = NotFound desc = could not find container \"b35ade76c6f669941ce632400e4429f7305f4874989d91d3b0dd7a07048180db\": container with ID starting with b35ade76c6f669941ce632400e4429f7305f4874989d91d3b0dd7a07048180db not found: ID does not exist" Nov 29 08:48:17 crc kubenswrapper[4795]: I1129 08:48:17.697109 4795 scope.go:117] "RemoveContainer" containerID="afea0dcece1ebafc0e32c69aad2afa428c1afa3b94dff9a0eb64605f6bffc8e7" Nov 29 08:48:17 crc kubenswrapper[4795]: E1129 08:48:17.697832 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afea0dcece1ebafc0e32c69aad2afa428c1afa3b94dff9a0eb64605f6bffc8e7\": container with ID starting with afea0dcece1ebafc0e32c69aad2afa428c1afa3b94dff9a0eb64605f6bffc8e7 not found: ID does not exist" containerID="afea0dcece1ebafc0e32c69aad2afa428c1afa3b94dff9a0eb64605f6bffc8e7" Nov 29 08:48:17 crc kubenswrapper[4795]: I1129 08:48:17.697857 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afea0dcece1ebafc0e32c69aad2afa428c1afa3b94dff9a0eb64605f6bffc8e7"} err="failed to get container status \"afea0dcece1ebafc0e32c69aad2afa428c1afa3b94dff9a0eb64605f6bffc8e7\": rpc error: code = NotFound desc = could not find container \"afea0dcece1ebafc0e32c69aad2afa428c1afa3b94dff9a0eb64605f6bffc8e7\": container with ID starting with afea0dcece1ebafc0e32c69aad2afa428c1afa3b94dff9a0eb64605f6bffc8e7 not found: ID does not exist" Nov 29 08:48:17 crc kubenswrapper[4795]: I1129 08:48:17.697874 4795 scope.go:117] "RemoveContainer" containerID="c07e9d475f510fbaaa0f7d273385fff89cade98a594cadfb6a72337ddea52011" Nov 29 08:48:17 crc kubenswrapper[4795]: E1129 
08:48:17.698379 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c07e9d475f510fbaaa0f7d273385fff89cade98a594cadfb6a72337ddea52011\": container with ID starting with c07e9d475f510fbaaa0f7d273385fff89cade98a594cadfb6a72337ddea52011 not found: ID does not exist" containerID="c07e9d475f510fbaaa0f7d273385fff89cade98a594cadfb6a72337ddea52011" Nov 29 08:48:17 crc kubenswrapper[4795]: I1129 08:48:17.698405 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c07e9d475f510fbaaa0f7d273385fff89cade98a594cadfb6a72337ddea52011"} err="failed to get container status \"c07e9d475f510fbaaa0f7d273385fff89cade98a594cadfb6a72337ddea52011\": rpc error: code = NotFound desc = could not find container \"c07e9d475f510fbaaa0f7d273385fff89cade98a594cadfb6a72337ddea52011\": container with ID starting with c07e9d475f510fbaaa0f7d273385fff89cade98a594cadfb6a72337ddea52011 not found: ID does not exist" Nov 29 08:48:18 crc kubenswrapper[4795]: I1129 08:48:18.287317 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9ff2115-e366-42ad-a732-119ddc1900b6" path="/var/lib/kubelet/pods/f9ff2115-e366-42ad-a732-119ddc1900b6/volumes" Nov 29 08:48:26 crc kubenswrapper[4795]: I1129 08:48:26.276107 4795 scope.go:117] "RemoveContainer" containerID="5030ff85ae40c73022065ee3f64b116b77c5909a9198a4179ecaaeacf1816833" Nov 29 08:48:26 crc kubenswrapper[4795]: E1129 08:48:26.276994 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:48:38 crc kubenswrapper[4795]: I1129 08:48:38.281718 
4795 scope.go:117] "RemoveContainer" containerID="5030ff85ae40c73022065ee3f64b116b77c5909a9198a4179ecaaeacf1816833" Nov 29 08:48:38 crc kubenswrapper[4795]: E1129 08:48:38.282848 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:48:49 crc kubenswrapper[4795]: I1129 08:48:49.276485 4795 scope.go:117] "RemoveContainer" containerID="5030ff85ae40c73022065ee3f64b116b77c5909a9198a4179ecaaeacf1816833" Nov 29 08:48:49 crc kubenswrapper[4795]: E1129 08:48:49.277929 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:49:02 crc kubenswrapper[4795]: I1129 08:49:02.275869 4795 scope.go:117] "RemoveContainer" containerID="5030ff85ae40c73022065ee3f64b116b77c5909a9198a4179ecaaeacf1816833" Nov 29 08:49:02 crc kubenswrapper[4795]: E1129 08:49:02.276755 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:49:16 crc kubenswrapper[4795]: I1129 
08:49:16.276656 4795 scope.go:117] "RemoveContainer" containerID="5030ff85ae40c73022065ee3f64b116b77c5909a9198a4179ecaaeacf1816833" Nov 29 08:49:16 crc kubenswrapper[4795]: E1129 08:49:16.277948 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:49:31 crc kubenswrapper[4795]: I1129 08:49:31.277334 4795 scope.go:117] "RemoveContainer" containerID="5030ff85ae40c73022065ee3f64b116b77c5909a9198a4179ecaaeacf1816833" Nov 29 08:49:31 crc kubenswrapper[4795]: E1129 08:49:31.278167 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:49:46 crc kubenswrapper[4795]: I1129 08:49:46.276679 4795 scope.go:117] "RemoveContainer" containerID="5030ff85ae40c73022065ee3f64b116b77c5909a9198a4179ecaaeacf1816833" Nov 29 08:49:47 crc kubenswrapper[4795]: I1129 08:49:47.326018 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerStarted","Data":"70f84a243d87fe6aebbfeef24cbca9d8e668382e512cd6cc87a202af1db4343b"} Nov 29 08:51:16 crc kubenswrapper[4795]: E1129 08:51:16.502928 4795 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 
38.102.83.107:44618->38.102.83.107:42443: write tcp 38.102.83.107:44618->38.102.83.107:42443: write: broken pipe Nov 29 08:52:11 crc kubenswrapper[4795]: I1129 08:52:11.941618 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:52:11 crc kubenswrapper[4795]: I1129 08:52:11.942205 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:52:41 crc kubenswrapper[4795]: I1129 08:52:41.941128 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:52:41 crc kubenswrapper[4795]: I1129 08:52:41.941946 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:53:11 crc kubenswrapper[4795]: I1129 08:53:11.941670 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:53:11 crc 
kubenswrapper[4795]: I1129 08:53:11.943731 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:53:11 crc kubenswrapper[4795]: I1129 08:53:11.943872 4795 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" Nov 29 08:53:11 crc kubenswrapper[4795]: I1129 08:53:11.945061 4795 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"70f84a243d87fe6aebbfeef24cbca9d8e668382e512cd6cc87a202af1db4343b"} pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 08:53:11 crc kubenswrapper[4795]: I1129 08:53:11.945266 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" containerID="cri-o://70f84a243d87fe6aebbfeef24cbca9d8e668382e512cd6cc87a202af1db4343b" gracePeriod=600 Nov 29 08:53:12 crc kubenswrapper[4795]: I1129 08:53:12.875694 4795 generic.go:334] "Generic (PLEG): container finished" podID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerID="70f84a243d87fe6aebbfeef24cbca9d8e668382e512cd6cc87a202af1db4343b" exitCode=0 Nov 29 08:53:12 crc kubenswrapper[4795]: I1129 08:53:12.875775 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerDied","Data":"70f84a243d87fe6aebbfeef24cbca9d8e668382e512cd6cc87a202af1db4343b"} 
Nov 29 08:53:12 crc kubenswrapper[4795]: I1129 08:53:12.876255 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerStarted","Data":"5bb2909e6b594906fc0990f1f08ffa742ff297714422ce25194698d86a68bdf1"} Nov 29 08:53:12 crc kubenswrapper[4795]: I1129 08:53:12.876279 4795 scope.go:117] "RemoveContainer" containerID="5030ff85ae40c73022065ee3f64b116b77c5909a9198a4179ecaaeacf1816833" Nov 29 08:55:41 crc kubenswrapper[4795]: I1129 08:55:41.941047 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:55:41 crc kubenswrapper[4795]: I1129 08:55:41.941574 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:55:56 crc kubenswrapper[4795]: I1129 08:55:56.343248 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t59js"] Nov 29 08:55:56 crc kubenswrapper[4795]: E1129 08:55:56.344905 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9ff2115-e366-42ad-a732-119ddc1900b6" containerName="extract-content" Nov 29 08:55:56 crc kubenswrapper[4795]: I1129 08:55:56.344941 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9ff2115-e366-42ad-a732-119ddc1900b6" containerName="extract-content" Nov 29 08:55:56 crc kubenswrapper[4795]: E1129 08:55:56.345004 4795 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f9ff2115-e366-42ad-a732-119ddc1900b6" containerName="extract-utilities" Nov 29 08:55:56 crc kubenswrapper[4795]: I1129 08:55:56.345023 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9ff2115-e366-42ad-a732-119ddc1900b6" containerName="extract-utilities" Nov 29 08:55:56 crc kubenswrapper[4795]: E1129 08:55:56.345074 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9ff2115-e366-42ad-a732-119ddc1900b6" containerName="registry-server" Nov 29 08:55:56 crc kubenswrapper[4795]: I1129 08:55:56.345093 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9ff2115-e366-42ad-a732-119ddc1900b6" containerName="registry-server" Nov 29 08:55:56 crc kubenswrapper[4795]: I1129 08:55:56.345716 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9ff2115-e366-42ad-a732-119ddc1900b6" containerName="registry-server" Nov 29 08:55:56 crc kubenswrapper[4795]: I1129 08:55:56.349031 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t59js" Nov 29 08:55:56 crc kubenswrapper[4795]: I1129 08:55:56.363477 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t59js"] Nov 29 08:55:56 crc kubenswrapper[4795]: I1129 08:55:56.472729 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e81b95a-0f77-4523-9aeb-2ba77553e23a-catalog-content\") pod \"redhat-operators-t59js\" (UID: \"3e81b95a-0f77-4523-9aeb-2ba77553e23a\") " pod="openshift-marketplace/redhat-operators-t59js" Nov 29 08:55:56 crc kubenswrapper[4795]: I1129 08:55:56.472930 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e81b95a-0f77-4523-9aeb-2ba77553e23a-utilities\") pod \"redhat-operators-t59js\" (UID: \"3e81b95a-0f77-4523-9aeb-2ba77553e23a\") " 
pod="openshift-marketplace/redhat-operators-t59js" Nov 29 08:55:56 crc kubenswrapper[4795]: I1129 08:55:56.472988 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d54ng\" (UniqueName: \"kubernetes.io/projected/3e81b95a-0f77-4523-9aeb-2ba77553e23a-kube-api-access-d54ng\") pod \"redhat-operators-t59js\" (UID: \"3e81b95a-0f77-4523-9aeb-2ba77553e23a\") " pod="openshift-marketplace/redhat-operators-t59js" Nov 29 08:55:56 crc kubenswrapper[4795]: I1129 08:55:56.575306 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e81b95a-0f77-4523-9aeb-2ba77553e23a-utilities\") pod \"redhat-operators-t59js\" (UID: \"3e81b95a-0f77-4523-9aeb-2ba77553e23a\") " pod="openshift-marketplace/redhat-operators-t59js" Nov 29 08:55:56 crc kubenswrapper[4795]: I1129 08:55:56.575440 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d54ng\" (UniqueName: \"kubernetes.io/projected/3e81b95a-0f77-4523-9aeb-2ba77553e23a-kube-api-access-d54ng\") pod \"redhat-operators-t59js\" (UID: \"3e81b95a-0f77-4523-9aeb-2ba77553e23a\") " pod="openshift-marketplace/redhat-operators-t59js" Nov 29 08:55:56 crc kubenswrapper[4795]: I1129 08:55:56.575503 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e81b95a-0f77-4523-9aeb-2ba77553e23a-catalog-content\") pod \"redhat-operators-t59js\" (UID: \"3e81b95a-0f77-4523-9aeb-2ba77553e23a\") " pod="openshift-marketplace/redhat-operators-t59js" Nov 29 08:55:56 crc kubenswrapper[4795]: I1129 08:55:56.576058 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e81b95a-0f77-4523-9aeb-2ba77553e23a-utilities\") pod \"redhat-operators-t59js\" (UID: \"3e81b95a-0f77-4523-9aeb-2ba77553e23a\") " 
pod="openshift-marketplace/redhat-operators-t59js" Nov 29 08:55:56 crc kubenswrapper[4795]: I1129 08:55:56.576121 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e81b95a-0f77-4523-9aeb-2ba77553e23a-catalog-content\") pod \"redhat-operators-t59js\" (UID: \"3e81b95a-0f77-4523-9aeb-2ba77553e23a\") " pod="openshift-marketplace/redhat-operators-t59js" Nov 29 08:55:56 crc kubenswrapper[4795]: I1129 08:55:56.598989 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d54ng\" (UniqueName: \"kubernetes.io/projected/3e81b95a-0f77-4523-9aeb-2ba77553e23a-kube-api-access-d54ng\") pod \"redhat-operators-t59js\" (UID: \"3e81b95a-0f77-4523-9aeb-2ba77553e23a\") " pod="openshift-marketplace/redhat-operators-t59js" Nov 29 08:55:56 crc kubenswrapper[4795]: I1129 08:55:56.711271 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t59js" Nov 29 08:55:57 crc kubenswrapper[4795]: I1129 08:55:57.255125 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t59js"] Nov 29 08:55:57 crc kubenswrapper[4795]: I1129 08:55:57.995447 4795 generic.go:334] "Generic (PLEG): container finished" podID="3e81b95a-0f77-4523-9aeb-2ba77553e23a" containerID="951e67d1c381332bf421cb78c6551810dedf8ac85b5002cda156721b3f2b686d" exitCode=0 Nov 29 08:55:57 crc kubenswrapper[4795]: I1129 08:55:57.995849 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t59js" event={"ID":"3e81b95a-0f77-4523-9aeb-2ba77553e23a","Type":"ContainerDied","Data":"951e67d1c381332bf421cb78c6551810dedf8ac85b5002cda156721b3f2b686d"} Nov 29 08:55:57 crc kubenswrapper[4795]: I1129 08:55:57.995906 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t59js" 
event={"ID":"3e81b95a-0f77-4523-9aeb-2ba77553e23a","Type":"ContainerStarted","Data":"6adbbe32e8d0436219acd8f8de90d2cb1cd13b2a922cec692fecb71f7092365f"} Nov 29 08:55:57 crc kubenswrapper[4795]: I1129 08:55:57.999441 4795 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 08:55:59 crc kubenswrapper[4795]: I1129 08:55:59.015254 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t59js" event={"ID":"3e81b95a-0f77-4523-9aeb-2ba77553e23a","Type":"ContainerStarted","Data":"33bb5892c38bf38bf19fd04f7637f35e49ba1a4fe54f1d5e4dbfa6a8d80aee12"} Nov 29 08:56:02 crc kubenswrapper[4795]: I1129 08:56:02.060469 4795 generic.go:334] "Generic (PLEG): container finished" podID="3e81b95a-0f77-4523-9aeb-2ba77553e23a" containerID="33bb5892c38bf38bf19fd04f7637f35e49ba1a4fe54f1d5e4dbfa6a8d80aee12" exitCode=0 Nov 29 08:56:02 crc kubenswrapper[4795]: I1129 08:56:02.060658 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t59js" event={"ID":"3e81b95a-0f77-4523-9aeb-2ba77553e23a","Type":"ContainerDied","Data":"33bb5892c38bf38bf19fd04f7637f35e49ba1a4fe54f1d5e4dbfa6a8d80aee12"} Nov 29 08:56:03 crc kubenswrapper[4795]: I1129 08:56:03.123990 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t59js" event={"ID":"3e81b95a-0f77-4523-9aeb-2ba77553e23a","Type":"ContainerStarted","Data":"d52fc7d68c8b7b5e738f32e3b8db239a82f82cd26c58378b3311d8855b88f41c"} Nov 29 08:56:03 crc kubenswrapper[4795]: I1129 08:56:03.166026 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t59js" podStartSLOduration=2.498529485 podStartE2EDuration="7.166008647s" podCreationTimestamp="2025-11-29 08:55:56 +0000 UTC" firstStartedPulling="2025-11-29 08:55:57.999192025 +0000 UTC m=+4603.974767815" lastFinishedPulling="2025-11-29 08:56:02.666671187 +0000 UTC m=+4608.642246977" 
observedRunningTime="2025-11-29 08:56:03.160196983 +0000 UTC m=+4609.135772773" watchObservedRunningTime="2025-11-29 08:56:03.166008647 +0000 UTC m=+4609.141584437" Nov 29 08:56:06 crc kubenswrapper[4795]: I1129 08:56:06.711883 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t59js" Nov 29 08:56:06 crc kubenswrapper[4795]: I1129 08:56:06.712864 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t59js" Nov 29 08:56:07 crc kubenswrapper[4795]: I1129 08:56:07.771267 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t59js" podUID="3e81b95a-0f77-4523-9aeb-2ba77553e23a" containerName="registry-server" probeResult="failure" output=< Nov 29 08:56:07 crc kubenswrapper[4795]: timeout: failed to connect service ":50051" within 1s Nov 29 08:56:07 crc kubenswrapper[4795]: > Nov 29 08:56:11 crc kubenswrapper[4795]: I1129 08:56:11.940783 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:56:11 crc kubenswrapper[4795]: I1129 08:56:11.941293 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:56:16 crc kubenswrapper[4795]: I1129 08:56:16.768553 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t59js" Nov 29 08:56:16 crc kubenswrapper[4795]: I1129 08:56:16.823433 4795 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t59js" Nov 29 08:56:17 crc kubenswrapper[4795]: I1129 08:56:17.014386 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t59js"] Nov 29 08:56:18 crc kubenswrapper[4795]: I1129 08:56:18.300901 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t59js" podUID="3e81b95a-0f77-4523-9aeb-2ba77553e23a" containerName="registry-server" containerID="cri-o://d52fc7d68c8b7b5e738f32e3b8db239a82f82cd26c58378b3311d8855b88f41c" gracePeriod=2 Nov 29 08:56:18 crc kubenswrapper[4795]: I1129 08:56:18.894805 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t59js" Nov 29 08:56:19 crc kubenswrapper[4795]: I1129 08:56:19.001864 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e81b95a-0f77-4523-9aeb-2ba77553e23a-catalog-content\") pod \"3e81b95a-0f77-4523-9aeb-2ba77553e23a\" (UID: \"3e81b95a-0f77-4523-9aeb-2ba77553e23a\") " Nov 29 08:56:19 crc kubenswrapper[4795]: I1129 08:56:19.002057 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e81b95a-0f77-4523-9aeb-2ba77553e23a-utilities\") pod \"3e81b95a-0f77-4523-9aeb-2ba77553e23a\" (UID: \"3e81b95a-0f77-4523-9aeb-2ba77553e23a\") " Nov 29 08:56:19 crc kubenswrapper[4795]: I1129 08:56:19.002371 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d54ng\" (UniqueName: \"kubernetes.io/projected/3e81b95a-0f77-4523-9aeb-2ba77553e23a-kube-api-access-d54ng\") pod \"3e81b95a-0f77-4523-9aeb-2ba77553e23a\" (UID: \"3e81b95a-0f77-4523-9aeb-2ba77553e23a\") " Nov 29 08:56:19 crc kubenswrapper[4795]: I1129 08:56:19.002985 4795 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e81b95a-0f77-4523-9aeb-2ba77553e23a-utilities" (OuterVolumeSpecName: "utilities") pod "3e81b95a-0f77-4523-9aeb-2ba77553e23a" (UID: "3e81b95a-0f77-4523-9aeb-2ba77553e23a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:56:19 crc kubenswrapper[4795]: I1129 08:56:19.003489 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e81b95a-0f77-4523-9aeb-2ba77553e23a-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:56:19 crc kubenswrapper[4795]: I1129 08:56:19.063879 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e81b95a-0f77-4523-9aeb-2ba77553e23a-kube-api-access-d54ng" (OuterVolumeSpecName: "kube-api-access-d54ng") pod "3e81b95a-0f77-4523-9aeb-2ba77553e23a" (UID: "3e81b95a-0f77-4523-9aeb-2ba77553e23a"). InnerVolumeSpecName "kube-api-access-d54ng". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:56:19 crc kubenswrapper[4795]: I1129 08:56:19.106153 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d54ng\" (UniqueName: \"kubernetes.io/projected/3e81b95a-0f77-4523-9aeb-2ba77553e23a-kube-api-access-d54ng\") on node \"crc\" DevicePath \"\"" Nov 29 08:56:19 crc kubenswrapper[4795]: I1129 08:56:19.120718 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e81b95a-0f77-4523-9aeb-2ba77553e23a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e81b95a-0f77-4523-9aeb-2ba77553e23a" (UID: "3e81b95a-0f77-4523-9aeb-2ba77553e23a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:56:19 crc kubenswrapper[4795]: I1129 08:56:19.208911 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e81b95a-0f77-4523-9aeb-2ba77553e23a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:56:19 crc kubenswrapper[4795]: I1129 08:56:19.316450 4795 generic.go:334] "Generic (PLEG): container finished" podID="3e81b95a-0f77-4523-9aeb-2ba77553e23a" containerID="d52fc7d68c8b7b5e738f32e3b8db239a82f82cd26c58378b3311d8855b88f41c" exitCode=0 Nov 29 08:56:19 crc kubenswrapper[4795]: I1129 08:56:19.316494 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t59js" event={"ID":"3e81b95a-0f77-4523-9aeb-2ba77553e23a","Type":"ContainerDied","Data":"d52fc7d68c8b7b5e738f32e3b8db239a82f82cd26c58378b3311d8855b88f41c"} Nov 29 08:56:19 crc kubenswrapper[4795]: I1129 08:56:19.316522 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t59js" event={"ID":"3e81b95a-0f77-4523-9aeb-2ba77553e23a","Type":"ContainerDied","Data":"6adbbe32e8d0436219acd8f8de90d2cb1cd13b2a922cec692fecb71f7092365f"} Nov 29 08:56:19 crc kubenswrapper[4795]: I1129 08:56:19.316550 4795 scope.go:117] "RemoveContainer" containerID="d52fc7d68c8b7b5e738f32e3b8db239a82f82cd26c58378b3311d8855b88f41c" Nov 29 08:56:19 crc kubenswrapper[4795]: I1129 08:56:19.316554 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t59js" Nov 29 08:56:19 crc kubenswrapper[4795]: I1129 08:56:19.345424 4795 scope.go:117] "RemoveContainer" containerID="33bb5892c38bf38bf19fd04f7637f35e49ba1a4fe54f1d5e4dbfa6a8d80aee12" Nov 29 08:56:19 crc kubenswrapper[4795]: I1129 08:56:19.406905 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t59js"] Nov 29 08:56:19 crc kubenswrapper[4795]: I1129 08:56:19.414773 4795 scope.go:117] "RemoveContainer" containerID="951e67d1c381332bf421cb78c6551810dedf8ac85b5002cda156721b3f2b686d" Nov 29 08:56:19 crc kubenswrapper[4795]: I1129 08:56:19.417312 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t59js"] Nov 29 08:56:19 crc kubenswrapper[4795]: I1129 08:56:19.465692 4795 scope.go:117] "RemoveContainer" containerID="d52fc7d68c8b7b5e738f32e3b8db239a82f82cd26c58378b3311d8855b88f41c" Nov 29 08:56:19 crc kubenswrapper[4795]: E1129 08:56:19.470066 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d52fc7d68c8b7b5e738f32e3b8db239a82f82cd26c58378b3311d8855b88f41c\": container with ID starting with d52fc7d68c8b7b5e738f32e3b8db239a82f82cd26c58378b3311d8855b88f41c not found: ID does not exist" containerID="d52fc7d68c8b7b5e738f32e3b8db239a82f82cd26c58378b3311d8855b88f41c" Nov 29 08:56:19 crc kubenswrapper[4795]: I1129 08:56:19.470144 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d52fc7d68c8b7b5e738f32e3b8db239a82f82cd26c58378b3311d8855b88f41c"} err="failed to get container status \"d52fc7d68c8b7b5e738f32e3b8db239a82f82cd26c58378b3311d8855b88f41c\": rpc error: code = NotFound desc = could not find container \"d52fc7d68c8b7b5e738f32e3b8db239a82f82cd26c58378b3311d8855b88f41c\": container with ID starting with d52fc7d68c8b7b5e738f32e3b8db239a82f82cd26c58378b3311d8855b88f41c not found: ID does 
not exist" Nov 29 08:56:19 crc kubenswrapper[4795]: I1129 08:56:19.470166 4795 scope.go:117] "RemoveContainer" containerID="33bb5892c38bf38bf19fd04f7637f35e49ba1a4fe54f1d5e4dbfa6a8d80aee12" Nov 29 08:56:19 crc kubenswrapper[4795]: E1129 08:56:19.470551 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33bb5892c38bf38bf19fd04f7637f35e49ba1a4fe54f1d5e4dbfa6a8d80aee12\": container with ID starting with 33bb5892c38bf38bf19fd04f7637f35e49ba1a4fe54f1d5e4dbfa6a8d80aee12 not found: ID does not exist" containerID="33bb5892c38bf38bf19fd04f7637f35e49ba1a4fe54f1d5e4dbfa6a8d80aee12" Nov 29 08:56:19 crc kubenswrapper[4795]: I1129 08:56:19.470576 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33bb5892c38bf38bf19fd04f7637f35e49ba1a4fe54f1d5e4dbfa6a8d80aee12"} err="failed to get container status \"33bb5892c38bf38bf19fd04f7637f35e49ba1a4fe54f1d5e4dbfa6a8d80aee12\": rpc error: code = NotFound desc = could not find container \"33bb5892c38bf38bf19fd04f7637f35e49ba1a4fe54f1d5e4dbfa6a8d80aee12\": container with ID starting with 33bb5892c38bf38bf19fd04f7637f35e49ba1a4fe54f1d5e4dbfa6a8d80aee12 not found: ID does not exist" Nov 29 08:56:19 crc kubenswrapper[4795]: I1129 08:56:19.470620 4795 scope.go:117] "RemoveContainer" containerID="951e67d1c381332bf421cb78c6551810dedf8ac85b5002cda156721b3f2b686d" Nov 29 08:56:19 crc kubenswrapper[4795]: E1129 08:56:19.474146 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"951e67d1c381332bf421cb78c6551810dedf8ac85b5002cda156721b3f2b686d\": container with ID starting with 951e67d1c381332bf421cb78c6551810dedf8ac85b5002cda156721b3f2b686d not found: ID does not exist" containerID="951e67d1c381332bf421cb78c6551810dedf8ac85b5002cda156721b3f2b686d" Nov 29 08:56:19 crc kubenswrapper[4795]: I1129 08:56:19.474185 4795 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"951e67d1c381332bf421cb78c6551810dedf8ac85b5002cda156721b3f2b686d"} err="failed to get container status \"951e67d1c381332bf421cb78c6551810dedf8ac85b5002cda156721b3f2b686d\": rpc error: code = NotFound desc = could not find container \"951e67d1c381332bf421cb78c6551810dedf8ac85b5002cda156721b3f2b686d\": container with ID starting with 951e67d1c381332bf421cb78c6551810dedf8ac85b5002cda156721b3f2b686d not found: ID does not exist" Nov 29 08:56:20 crc kubenswrapper[4795]: I1129 08:56:20.293882 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e81b95a-0f77-4523-9aeb-2ba77553e23a" path="/var/lib/kubelet/pods/3e81b95a-0f77-4523-9aeb-2ba77553e23a/volumes" Nov 29 08:56:24 crc kubenswrapper[4795]: I1129 08:56:24.877202 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8xl7p"] Nov 29 08:56:24 crc kubenswrapper[4795]: E1129 08:56:24.878065 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e81b95a-0f77-4523-9aeb-2ba77553e23a" containerName="registry-server" Nov 29 08:56:24 crc kubenswrapper[4795]: I1129 08:56:24.878080 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e81b95a-0f77-4523-9aeb-2ba77553e23a" containerName="registry-server" Nov 29 08:56:24 crc kubenswrapper[4795]: E1129 08:56:24.878103 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e81b95a-0f77-4523-9aeb-2ba77553e23a" containerName="extract-utilities" Nov 29 08:56:24 crc kubenswrapper[4795]: I1129 08:56:24.878110 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e81b95a-0f77-4523-9aeb-2ba77553e23a" containerName="extract-utilities" Nov 29 08:56:24 crc kubenswrapper[4795]: E1129 08:56:24.878124 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e81b95a-0f77-4523-9aeb-2ba77553e23a" containerName="extract-content" Nov 29 08:56:24 crc kubenswrapper[4795]: I1129 08:56:24.878131 4795 
state_mem.go:107] "Deleted CPUSet assignment" podUID="3e81b95a-0f77-4523-9aeb-2ba77553e23a" containerName="extract-content" Nov 29 08:56:24 crc kubenswrapper[4795]: I1129 08:56:24.878385 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e81b95a-0f77-4523-9aeb-2ba77553e23a" containerName="registry-server" Nov 29 08:56:24 crc kubenswrapper[4795]: I1129 08:56:24.880164 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8xl7p" Nov 29 08:56:24 crc kubenswrapper[4795]: I1129 08:56:24.925548 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fda09a90-a63a-44bd-a55a-5cc411392f7d-utilities\") pod \"community-operators-8xl7p\" (UID: \"fda09a90-a63a-44bd-a55a-5cc411392f7d\") " pod="openshift-marketplace/community-operators-8xl7p" Nov 29 08:56:24 crc kubenswrapper[4795]: I1129 08:56:24.925950 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njt66\" (UniqueName: \"kubernetes.io/projected/fda09a90-a63a-44bd-a55a-5cc411392f7d-kube-api-access-njt66\") pod \"community-operators-8xl7p\" (UID: \"fda09a90-a63a-44bd-a55a-5cc411392f7d\") " pod="openshift-marketplace/community-operators-8xl7p" Nov 29 08:56:24 crc kubenswrapper[4795]: I1129 08:56:24.926031 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8xl7p"] Nov 29 08:56:24 crc kubenswrapper[4795]: I1129 08:56:24.926275 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fda09a90-a63a-44bd-a55a-5cc411392f7d-catalog-content\") pod \"community-operators-8xl7p\" (UID: \"fda09a90-a63a-44bd-a55a-5cc411392f7d\") " pod="openshift-marketplace/community-operators-8xl7p" Nov 29 08:56:25 crc kubenswrapper[4795]: I1129 
08:56:25.049238 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njt66\" (UniqueName: \"kubernetes.io/projected/fda09a90-a63a-44bd-a55a-5cc411392f7d-kube-api-access-njt66\") pod \"community-operators-8xl7p\" (UID: \"fda09a90-a63a-44bd-a55a-5cc411392f7d\") " pod="openshift-marketplace/community-operators-8xl7p" Nov 29 08:56:25 crc kubenswrapper[4795]: I1129 08:56:25.049286 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fda09a90-a63a-44bd-a55a-5cc411392f7d-utilities\") pod \"community-operators-8xl7p\" (UID: \"fda09a90-a63a-44bd-a55a-5cc411392f7d\") " pod="openshift-marketplace/community-operators-8xl7p" Nov 29 08:56:25 crc kubenswrapper[4795]: I1129 08:56:25.049796 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fda09a90-a63a-44bd-a55a-5cc411392f7d-catalog-content\") pod \"community-operators-8xl7p\" (UID: \"fda09a90-a63a-44bd-a55a-5cc411392f7d\") " pod="openshift-marketplace/community-operators-8xl7p" Nov 29 08:56:25 crc kubenswrapper[4795]: I1129 08:56:25.050226 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fda09a90-a63a-44bd-a55a-5cc411392f7d-utilities\") pod \"community-operators-8xl7p\" (UID: \"fda09a90-a63a-44bd-a55a-5cc411392f7d\") " pod="openshift-marketplace/community-operators-8xl7p" Nov 29 08:56:25 crc kubenswrapper[4795]: I1129 08:56:25.050288 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fda09a90-a63a-44bd-a55a-5cc411392f7d-catalog-content\") pod \"community-operators-8xl7p\" (UID: \"fda09a90-a63a-44bd-a55a-5cc411392f7d\") " pod="openshift-marketplace/community-operators-8xl7p" Nov 29 08:56:25 crc kubenswrapper[4795]: I1129 08:56:25.091025 4795 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njt66\" (UniqueName: \"kubernetes.io/projected/fda09a90-a63a-44bd-a55a-5cc411392f7d-kube-api-access-njt66\") pod \"community-operators-8xl7p\" (UID: \"fda09a90-a63a-44bd-a55a-5cc411392f7d\") " pod="openshift-marketplace/community-operators-8xl7p" Nov 29 08:56:25 crc kubenswrapper[4795]: I1129 08:56:25.245022 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8xl7p" Nov 29 08:56:25 crc kubenswrapper[4795]: I1129 08:56:25.841268 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8xl7p"] Nov 29 08:56:26 crc kubenswrapper[4795]: I1129 08:56:26.430631 4795 generic.go:334] "Generic (PLEG): container finished" podID="fda09a90-a63a-44bd-a55a-5cc411392f7d" containerID="b92756f6dfd1acd05e73d4999d395a6cf427a9ff0e17cb56e4b37c47e206cda2" exitCode=0 Nov 29 08:56:26 crc kubenswrapper[4795]: I1129 08:56:26.430672 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xl7p" event={"ID":"fda09a90-a63a-44bd-a55a-5cc411392f7d","Type":"ContainerDied","Data":"b92756f6dfd1acd05e73d4999d395a6cf427a9ff0e17cb56e4b37c47e206cda2"} Nov 29 08:56:26 crc kubenswrapper[4795]: I1129 08:56:26.430697 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xl7p" event={"ID":"fda09a90-a63a-44bd-a55a-5cc411392f7d","Type":"ContainerStarted","Data":"82186ac0b48024919042985f2d356eae67d615a8e252a0904976dd148fd4362d"} Nov 29 08:56:32 crc kubenswrapper[4795]: I1129 08:56:32.106728 4795 generic.go:334] "Generic (PLEG): container finished" podID="fda09a90-a63a-44bd-a55a-5cc411392f7d" containerID="c35221381471ac4303900ce5cf8205453a75463d8b18afe9bb0b2d6f4ce03754" exitCode=0 Nov 29 08:56:32 crc kubenswrapper[4795]: I1129 08:56:32.106895 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-8xl7p" event={"ID":"fda09a90-a63a-44bd-a55a-5cc411392f7d","Type":"ContainerDied","Data":"c35221381471ac4303900ce5cf8205453a75463d8b18afe9bb0b2d6f4ce03754"} Nov 29 08:56:33 crc kubenswrapper[4795]: I1129 08:56:33.120420 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xl7p" event={"ID":"fda09a90-a63a-44bd-a55a-5cc411392f7d","Type":"ContainerStarted","Data":"dbad47a2a66a5628782eb218e2aa1bd254d72e1e017941b4bcfb4adfd045b769"} Nov 29 08:56:33 crc kubenswrapper[4795]: I1129 08:56:33.155487 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8xl7p" podStartSLOduration=2.893727674 podStartE2EDuration="9.155469417s" podCreationTimestamp="2025-11-29 08:56:24 +0000 UTC" firstStartedPulling="2025-11-29 08:56:26.433665189 +0000 UTC m=+4632.409240979" lastFinishedPulling="2025-11-29 08:56:32.695406932 +0000 UTC m=+4638.670982722" observedRunningTime="2025-11-29 08:56:33.137403385 +0000 UTC m=+4639.112979175" watchObservedRunningTime="2025-11-29 08:56:33.155469417 +0000 UTC m=+4639.131045207" Nov 29 08:56:35 crc kubenswrapper[4795]: I1129 08:56:35.245435 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8xl7p" Nov 29 08:56:35 crc kubenswrapper[4795]: I1129 08:56:35.245815 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8xl7p" Nov 29 08:56:35 crc kubenswrapper[4795]: I1129 08:56:35.296840 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8xl7p" Nov 29 08:56:38 crc kubenswrapper[4795]: I1129 08:56:38.749256 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wrlhd"] Nov 29 08:56:38 crc kubenswrapper[4795]: I1129 08:56:38.753769 4795 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wrlhd" Nov 29 08:56:38 crc kubenswrapper[4795]: I1129 08:56:38.762533 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wrlhd"] Nov 29 08:56:38 crc kubenswrapper[4795]: I1129 08:56:38.871149 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f2b8d4f-3307-4d0b-b594-c77648c021e7-catalog-content\") pod \"certified-operators-wrlhd\" (UID: \"8f2b8d4f-3307-4d0b-b594-c77648c021e7\") " pod="openshift-marketplace/certified-operators-wrlhd" Nov 29 08:56:38 crc kubenswrapper[4795]: I1129 08:56:38.871480 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f2b8d4f-3307-4d0b-b594-c77648c021e7-utilities\") pod \"certified-operators-wrlhd\" (UID: \"8f2b8d4f-3307-4d0b-b594-c77648c021e7\") " pod="openshift-marketplace/certified-operators-wrlhd" Nov 29 08:56:38 crc kubenswrapper[4795]: I1129 08:56:38.871824 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp8qf\" (UniqueName: \"kubernetes.io/projected/8f2b8d4f-3307-4d0b-b594-c77648c021e7-kube-api-access-wp8qf\") pod \"certified-operators-wrlhd\" (UID: \"8f2b8d4f-3307-4d0b-b594-c77648c021e7\") " pod="openshift-marketplace/certified-operators-wrlhd" Nov 29 08:56:38 crc kubenswrapper[4795]: I1129 08:56:38.974318 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f2b8d4f-3307-4d0b-b594-c77648c021e7-catalog-content\") pod \"certified-operators-wrlhd\" (UID: \"8f2b8d4f-3307-4d0b-b594-c77648c021e7\") " pod="openshift-marketplace/certified-operators-wrlhd" Nov 29 08:56:38 crc kubenswrapper[4795]: I1129 08:56:38.974404 4795 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f2b8d4f-3307-4d0b-b594-c77648c021e7-utilities\") pod \"certified-operators-wrlhd\" (UID: \"8f2b8d4f-3307-4d0b-b594-c77648c021e7\") " pod="openshift-marketplace/certified-operators-wrlhd" Nov 29 08:56:38 crc kubenswrapper[4795]: I1129 08:56:38.974474 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp8qf\" (UniqueName: \"kubernetes.io/projected/8f2b8d4f-3307-4d0b-b594-c77648c021e7-kube-api-access-wp8qf\") pod \"certified-operators-wrlhd\" (UID: \"8f2b8d4f-3307-4d0b-b594-c77648c021e7\") " pod="openshift-marketplace/certified-operators-wrlhd" Nov 29 08:56:38 crc kubenswrapper[4795]: I1129 08:56:38.974960 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f2b8d4f-3307-4d0b-b594-c77648c021e7-catalog-content\") pod \"certified-operators-wrlhd\" (UID: \"8f2b8d4f-3307-4d0b-b594-c77648c021e7\") " pod="openshift-marketplace/certified-operators-wrlhd" Nov 29 08:56:38 crc kubenswrapper[4795]: I1129 08:56:38.975001 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f2b8d4f-3307-4d0b-b594-c77648c021e7-utilities\") pod \"certified-operators-wrlhd\" (UID: \"8f2b8d4f-3307-4d0b-b594-c77648c021e7\") " pod="openshift-marketplace/certified-operators-wrlhd" Nov 29 08:56:38 crc kubenswrapper[4795]: I1129 08:56:38.995491 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp8qf\" (UniqueName: \"kubernetes.io/projected/8f2b8d4f-3307-4d0b-b594-c77648c021e7-kube-api-access-wp8qf\") pod \"certified-operators-wrlhd\" (UID: \"8f2b8d4f-3307-4d0b-b594-c77648c021e7\") " pod="openshift-marketplace/certified-operators-wrlhd" Nov 29 08:56:39 crc kubenswrapper[4795]: I1129 08:56:39.097845 4795 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/certified-operators-wrlhd" Nov 29 08:56:39 crc kubenswrapper[4795]: I1129 08:56:39.732615 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wrlhd"] Nov 29 08:56:39 crc kubenswrapper[4795]: W1129 08:56:39.736781 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f2b8d4f_3307_4d0b_b594_c77648c021e7.slice/crio-573e2d32896786eb38887c3b49b593689cf35491709486e3db38872ba8d0274f WatchSource:0}: Error finding container 573e2d32896786eb38887c3b49b593689cf35491709486e3db38872ba8d0274f: Status 404 returned error can't find the container with id 573e2d32896786eb38887c3b49b593689cf35491709486e3db38872ba8d0274f Nov 29 08:56:40 crc kubenswrapper[4795]: I1129 08:56:40.193874 4795 generic.go:334] "Generic (PLEG): container finished" podID="8f2b8d4f-3307-4d0b-b594-c77648c021e7" containerID="6fde62ccd25993cc8906985a83e60e79ff2016ece7b8f24e48c602709f3d1c45" exitCode=0 Nov 29 08:56:40 crc kubenswrapper[4795]: I1129 08:56:40.193990 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wrlhd" event={"ID":"8f2b8d4f-3307-4d0b-b594-c77648c021e7","Type":"ContainerDied","Data":"6fde62ccd25993cc8906985a83e60e79ff2016ece7b8f24e48c602709f3d1c45"} Nov 29 08:56:40 crc kubenswrapper[4795]: I1129 08:56:40.194308 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wrlhd" event={"ID":"8f2b8d4f-3307-4d0b-b594-c77648c021e7","Type":"ContainerStarted","Data":"573e2d32896786eb38887c3b49b593689cf35491709486e3db38872ba8d0274f"} Nov 29 08:56:41 crc kubenswrapper[4795]: I1129 08:56:41.210219 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wrlhd" 
event={"ID":"8f2b8d4f-3307-4d0b-b594-c77648c021e7","Type":"ContainerStarted","Data":"d4f7f5fd36c34c7014ab8d5d889c0ca0e6e56fefd19edda3cc4d7123d89746e4"} Nov 29 08:56:41 crc kubenswrapper[4795]: I1129 08:56:41.941628 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:56:41 crc kubenswrapper[4795]: I1129 08:56:41.941960 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:56:41 crc kubenswrapper[4795]: I1129 08:56:41.942007 4795 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" Nov 29 08:56:41 crc kubenswrapper[4795]: I1129 08:56:41.942911 4795 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5bb2909e6b594906fc0990f1f08ffa742ff297714422ce25194698d86a68bdf1"} pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 08:56:41 crc kubenswrapper[4795]: I1129 08:56:41.942978 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" containerID="cri-o://5bb2909e6b594906fc0990f1f08ffa742ff297714422ce25194698d86a68bdf1" gracePeriod=600 Nov 29 08:56:42 crc kubenswrapper[4795]: E1129 08:56:42.062072 
4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:56:42 crc kubenswrapper[4795]: I1129 08:56:42.223330 4795 generic.go:334] "Generic (PLEG): container finished" podID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerID="5bb2909e6b594906fc0990f1f08ffa742ff297714422ce25194698d86a68bdf1" exitCode=0 Nov 29 08:56:42 crc kubenswrapper[4795]: I1129 08:56:42.223390 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerDied","Data":"5bb2909e6b594906fc0990f1f08ffa742ff297714422ce25194698d86a68bdf1"} Nov 29 08:56:42 crc kubenswrapper[4795]: I1129 08:56:42.223423 4795 scope.go:117] "RemoveContainer" containerID="70f84a243d87fe6aebbfeef24cbca9d8e668382e512cd6cc87a202af1db4343b" Nov 29 08:56:42 crc kubenswrapper[4795]: I1129 08:56:42.224186 4795 scope.go:117] "RemoveContainer" containerID="5bb2909e6b594906fc0990f1f08ffa742ff297714422ce25194698d86a68bdf1" Nov 29 08:56:42 crc kubenswrapper[4795]: E1129 08:56:42.224503 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:56:42 crc kubenswrapper[4795]: I1129 08:56:42.229222 4795 generic.go:334] "Generic (PLEG): container finished" 
podID="8f2b8d4f-3307-4d0b-b594-c77648c021e7" containerID="d4f7f5fd36c34c7014ab8d5d889c0ca0e6e56fefd19edda3cc4d7123d89746e4" exitCode=0 Nov 29 08:56:42 crc kubenswrapper[4795]: I1129 08:56:42.229259 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wrlhd" event={"ID":"8f2b8d4f-3307-4d0b-b594-c77648c021e7","Type":"ContainerDied","Data":"d4f7f5fd36c34c7014ab8d5d889c0ca0e6e56fefd19edda3cc4d7123d89746e4"} Nov 29 08:56:44 crc kubenswrapper[4795]: I1129 08:56:44.265436 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wrlhd" event={"ID":"8f2b8d4f-3307-4d0b-b594-c77648c021e7","Type":"ContainerStarted","Data":"9b1dc6b21206ced254ebd3605e3bc39a6f4cc46171f6b4caaaa6b7266b17cbd0"} Nov 29 08:56:44 crc kubenswrapper[4795]: I1129 08:56:44.293255 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wrlhd" podStartSLOduration=3.821545631 podStartE2EDuration="6.293230561s" podCreationTimestamp="2025-11-29 08:56:38 +0000 UTC" firstStartedPulling="2025-11-29 08:56:40.195746264 +0000 UTC m=+4646.171322054" lastFinishedPulling="2025-11-29 08:56:42.667431204 +0000 UTC m=+4648.643006984" observedRunningTime="2025-11-29 08:56:44.29001121 +0000 UTC m=+4650.265587000" watchObservedRunningTime="2025-11-29 08:56:44.293230561 +0000 UTC m=+4650.268806381" Nov 29 08:56:45 crc kubenswrapper[4795]: I1129 08:56:45.393985 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8xl7p" Nov 29 08:56:45 crc kubenswrapper[4795]: I1129 08:56:45.935235 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8xl7p"] Nov 29 08:56:46 crc kubenswrapper[4795]: I1129 08:56:46.130999 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-txz6d"] Nov 29 08:56:46 crc kubenswrapper[4795]: I1129 
08:56:46.131952 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-txz6d" podUID="9e73c53a-c858-4ba0-8a00-6322487811be" containerName="registry-server" containerID="cri-o://c60305bfc59b7b8ccbfc588fecad81c6c3f219bb43e58c7cf079d086ddb2d400" gracePeriod=2 Nov 29 08:56:46 crc kubenswrapper[4795]: I1129 08:56:46.328152 4795 generic.go:334] "Generic (PLEG): container finished" podID="9e73c53a-c858-4ba0-8a00-6322487811be" containerID="c60305bfc59b7b8ccbfc588fecad81c6c3f219bb43e58c7cf079d086ddb2d400" exitCode=0 Nov 29 08:56:46 crc kubenswrapper[4795]: I1129 08:56:46.328259 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-txz6d" event={"ID":"9e73c53a-c858-4ba0-8a00-6322487811be","Type":"ContainerDied","Data":"c60305bfc59b7b8ccbfc588fecad81c6c3f219bb43e58c7cf079d086ddb2d400"} Nov 29 08:56:46 crc kubenswrapper[4795]: I1129 08:56:46.722442 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-txz6d" Nov 29 08:56:46 crc kubenswrapper[4795]: I1129 08:56:46.895022 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhxf7\" (UniqueName: \"kubernetes.io/projected/9e73c53a-c858-4ba0-8a00-6322487811be-kube-api-access-zhxf7\") pod \"9e73c53a-c858-4ba0-8a00-6322487811be\" (UID: \"9e73c53a-c858-4ba0-8a00-6322487811be\") " Nov 29 08:56:46 crc kubenswrapper[4795]: I1129 08:56:46.895140 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e73c53a-c858-4ba0-8a00-6322487811be-catalog-content\") pod \"9e73c53a-c858-4ba0-8a00-6322487811be\" (UID: \"9e73c53a-c858-4ba0-8a00-6322487811be\") " Nov 29 08:56:46 crc kubenswrapper[4795]: I1129 08:56:46.895245 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e73c53a-c858-4ba0-8a00-6322487811be-utilities\") pod \"9e73c53a-c858-4ba0-8a00-6322487811be\" (UID: \"9e73c53a-c858-4ba0-8a00-6322487811be\") " Nov 29 08:56:46 crc kubenswrapper[4795]: I1129 08:56:46.896784 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e73c53a-c858-4ba0-8a00-6322487811be-utilities" (OuterVolumeSpecName: "utilities") pod "9e73c53a-c858-4ba0-8a00-6322487811be" (UID: "9e73c53a-c858-4ba0-8a00-6322487811be"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:56:46 crc kubenswrapper[4795]: I1129 08:56:46.901874 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e73c53a-c858-4ba0-8a00-6322487811be-kube-api-access-zhxf7" (OuterVolumeSpecName: "kube-api-access-zhxf7") pod "9e73c53a-c858-4ba0-8a00-6322487811be" (UID: "9e73c53a-c858-4ba0-8a00-6322487811be"). InnerVolumeSpecName "kube-api-access-zhxf7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:56:46 crc kubenswrapper[4795]: I1129 08:56:46.948461 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e73c53a-c858-4ba0-8a00-6322487811be-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9e73c53a-c858-4ba0-8a00-6322487811be" (UID: "9e73c53a-c858-4ba0-8a00-6322487811be"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:56:46 crc kubenswrapper[4795]: I1129 08:56:46.998531 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhxf7\" (UniqueName: \"kubernetes.io/projected/9e73c53a-c858-4ba0-8a00-6322487811be-kube-api-access-zhxf7\") on node \"crc\" DevicePath \"\"" Nov 29 08:56:46 crc kubenswrapper[4795]: I1129 08:56:46.998571 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e73c53a-c858-4ba0-8a00-6322487811be-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:56:46 crc kubenswrapper[4795]: I1129 08:56:46.998581 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e73c53a-c858-4ba0-8a00-6322487811be-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:56:47 crc kubenswrapper[4795]: I1129 08:56:47.341062 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-txz6d" event={"ID":"9e73c53a-c858-4ba0-8a00-6322487811be","Type":"ContainerDied","Data":"a3563331bcceebe537fbf46862c356c3dae431bf1640de225a1c1f0c20b2bc7b"} Nov 29 08:56:47 crc kubenswrapper[4795]: I1129 08:56:47.341119 4795 scope.go:117] "RemoveContainer" containerID="c60305bfc59b7b8ccbfc588fecad81c6c3f219bb43e58c7cf079d086ddb2d400" Nov 29 08:56:47 crc kubenswrapper[4795]: I1129 08:56:47.341165 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-txz6d" Nov 29 08:56:47 crc kubenswrapper[4795]: I1129 08:56:47.369103 4795 scope.go:117] "RemoveContainer" containerID="3b0bd8e6dd2cee091969f128e0711223727c183afed985876fd17f32f38a9e33" Nov 29 08:56:47 crc kubenswrapper[4795]: I1129 08:56:47.504214 4795 scope.go:117] "RemoveContainer" containerID="d183823c144c774d04d6d8840a404c4367b2c9e893c171f9089c1cf2be5423ee" Nov 29 08:56:47 crc kubenswrapper[4795]: I1129 08:56:47.506455 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-txz6d"] Nov 29 08:56:47 crc kubenswrapper[4795]: I1129 08:56:47.518290 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-txz6d"] Nov 29 08:56:48 crc kubenswrapper[4795]: I1129 08:56:48.287220 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e73c53a-c858-4ba0-8a00-6322487811be" path="/var/lib/kubelet/pods/9e73c53a-c858-4ba0-8a00-6322487811be/volumes" Nov 29 08:56:49 crc kubenswrapper[4795]: I1129 08:56:49.098806 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wrlhd" Nov 29 08:56:49 crc kubenswrapper[4795]: I1129 08:56:49.099215 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wrlhd" Nov 29 08:56:49 crc kubenswrapper[4795]: I1129 08:56:49.155445 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wrlhd" Nov 29 08:56:49 crc kubenswrapper[4795]: I1129 08:56:49.547680 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wrlhd" Nov 29 08:56:51 crc kubenswrapper[4795]: I1129 08:56:51.321446 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wrlhd"] Nov 29 08:56:51 crc 
kubenswrapper[4795]: I1129 08:56:51.396665 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wrlhd" podUID="8f2b8d4f-3307-4d0b-b594-c77648c021e7" containerName="registry-server" containerID="cri-o://9b1dc6b21206ced254ebd3605e3bc39a6f4cc46171f6b4caaaa6b7266b17cbd0" gracePeriod=2 Nov 29 08:56:52 crc kubenswrapper[4795]: I1129 08:56:52.081839 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wrlhd" Nov 29 08:56:52 crc kubenswrapper[4795]: I1129 08:56:52.194300 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f2b8d4f-3307-4d0b-b594-c77648c021e7-utilities\") pod \"8f2b8d4f-3307-4d0b-b594-c77648c021e7\" (UID: \"8f2b8d4f-3307-4d0b-b594-c77648c021e7\") " Nov 29 08:56:52 crc kubenswrapper[4795]: I1129 08:56:52.194500 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wp8qf\" (UniqueName: \"kubernetes.io/projected/8f2b8d4f-3307-4d0b-b594-c77648c021e7-kube-api-access-wp8qf\") pod \"8f2b8d4f-3307-4d0b-b594-c77648c021e7\" (UID: \"8f2b8d4f-3307-4d0b-b594-c77648c021e7\") " Nov 29 08:56:52 crc kubenswrapper[4795]: I1129 08:56:52.194547 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f2b8d4f-3307-4d0b-b594-c77648c021e7-catalog-content\") pod \"8f2b8d4f-3307-4d0b-b594-c77648c021e7\" (UID: \"8f2b8d4f-3307-4d0b-b594-c77648c021e7\") " Nov 29 08:56:52 crc kubenswrapper[4795]: I1129 08:56:52.195197 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f2b8d4f-3307-4d0b-b594-c77648c021e7-utilities" (OuterVolumeSpecName: "utilities") pod "8f2b8d4f-3307-4d0b-b594-c77648c021e7" (UID: "8f2b8d4f-3307-4d0b-b594-c77648c021e7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:56:52 crc kubenswrapper[4795]: I1129 08:56:52.206562 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f2b8d4f-3307-4d0b-b594-c77648c021e7-kube-api-access-wp8qf" (OuterVolumeSpecName: "kube-api-access-wp8qf") pod "8f2b8d4f-3307-4d0b-b594-c77648c021e7" (UID: "8f2b8d4f-3307-4d0b-b594-c77648c021e7"). InnerVolumeSpecName "kube-api-access-wp8qf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:56:52 crc kubenswrapper[4795]: I1129 08:56:52.255088 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f2b8d4f-3307-4d0b-b594-c77648c021e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8f2b8d4f-3307-4d0b-b594-c77648c021e7" (UID: "8f2b8d4f-3307-4d0b-b594-c77648c021e7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:56:52 crc kubenswrapper[4795]: I1129 08:56:52.297814 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f2b8d4f-3307-4d0b-b594-c77648c021e7-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:56:52 crc kubenswrapper[4795]: I1129 08:56:52.297943 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wp8qf\" (UniqueName: \"kubernetes.io/projected/8f2b8d4f-3307-4d0b-b594-c77648c021e7-kube-api-access-wp8qf\") on node \"crc\" DevicePath \"\"" Nov 29 08:56:52 crc kubenswrapper[4795]: I1129 08:56:52.298006 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f2b8d4f-3307-4d0b-b594-c77648c021e7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:56:52 crc kubenswrapper[4795]: I1129 08:56:52.408558 4795 generic.go:334] "Generic (PLEG): container finished" podID="8f2b8d4f-3307-4d0b-b594-c77648c021e7" 
containerID="9b1dc6b21206ced254ebd3605e3bc39a6f4cc46171f6b4caaaa6b7266b17cbd0" exitCode=0 Nov 29 08:56:52 crc kubenswrapper[4795]: I1129 08:56:52.408631 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wrlhd" event={"ID":"8f2b8d4f-3307-4d0b-b594-c77648c021e7","Type":"ContainerDied","Data":"9b1dc6b21206ced254ebd3605e3bc39a6f4cc46171f6b4caaaa6b7266b17cbd0"} Nov 29 08:56:52 crc kubenswrapper[4795]: I1129 08:56:52.408659 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wrlhd" event={"ID":"8f2b8d4f-3307-4d0b-b594-c77648c021e7","Type":"ContainerDied","Data":"573e2d32896786eb38887c3b49b593689cf35491709486e3db38872ba8d0274f"} Nov 29 08:56:52 crc kubenswrapper[4795]: I1129 08:56:52.408675 4795 scope.go:117] "RemoveContainer" containerID="9b1dc6b21206ced254ebd3605e3bc39a6f4cc46171f6b4caaaa6b7266b17cbd0" Nov 29 08:56:52 crc kubenswrapper[4795]: I1129 08:56:52.408858 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wrlhd" Nov 29 08:56:52 crc kubenswrapper[4795]: I1129 08:56:52.435327 4795 scope.go:117] "RemoveContainer" containerID="d4f7f5fd36c34c7014ab8d5d889c0ca0e6e56fefd19edda3cc4d7123d89746e4" Nov 29 08:56:52 crc kubenswrapper[4795]: I1129 08:56:52.442178 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wrlhd"] Nov 29 08:56:52 crc kubenswrapper[4795]: I1129 08:56:52.453615 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wrlhd"] Nov 29 08:56:52 crc kubenswrapper[4795]: I1129 08:56:52.458699 4795 scope.go:117] "RemoveContainer" containerID="6fde62ccd25993cc8906985a83e60e79ff2016ece7b8f24e48c602709f3d1c45" Nov 29 08:56:52 crc kubenswrapper[4795]: I1129 08:56:52.512568 4795 scope.go:117] "RemoveContainer" containerID="9b1dc6b21206ced254ebd3605e3bc39a6f4cc46171f6b4caaaa6b7266b17cbd0" Nov 29 08:56:52 crc kubenswrapper[4795]: E1129 08:56:52.513024 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b1dc6b21206ced254ebd3605e3bc39a6f4cc46171f6b4caaaa6b7266b17cbd0\": container with ID starting with 9b1dc6b21206ced254ebd3605e3bc39a6f4cc46171f6b4caaaa6b7266b17cbd0 not found: ID does not exist" containerID="9b1dc6b21206ced254ebd3605e3bc39a6f4cc46171f6b4caaaa6b7266b17cbd0" Nov 29 08:56:52 crc kubenswrapper[4795]: I1129 08:56:52.513063 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b1dc6b21206ced254ebd3605e3bc39a6f4cc46171f6b4caaaa6b7266b17cbd0"} err="failed to get container status \"9b1dc6b21206ced254ebd3605e3bc39a6f4cc46171f6b4caaaa6b7266b17cbd0\": rpc error: code = NotFound desc = could not find container \"9b1dc6b21206ced254ebd3605e3bc39a6f4cc46171f6b4caaaa6b7266b17cbd0\": container with ID starting with 9b1dc6b21206ced254ebd3605e3bc39a6f4cc46171f6b4caaaa6b7266b17cbd0 not 
found: ID does not exist" Nov 29 08:56:52 crc kubenswrapper[4795]: I1129 08:56:52.513090 4795 scope.go:117] "RemoveContainer" containerID="d4f7f5fd36c34c7014ab8d5d889c0ca0e6e56fefd19edda3cc4d7123d89746e4" Nov 29 08:56:52 crc kubenswrapper[4795]: E1129 08:56:52.513367 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4f7f5fd36c34c7014ab8d5d889c0ca0e6e56fefd19edda3cc4d7123d89746e4\": container with ID starting with d4f7f5fd36c34c7014ab8d5d889c0ca0e6e56fefd19edda3cc4d7123d89746e4 not found: ID does not exist" containerID="d4f7f5fd36c34c7014ab8d5d889c0ca0e6e56fefd19edda3cc4d7123d89746e4" Nov 29 08:56:52 crc kubenswrapper[4795]: I1129 08:56:52.513395 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4f7f5fd36c34c7014ab8d5d889c0ca0e6e56fefd19edda3cc4d7123d89746e4"} err="failed to get container status \"d4f7f5fd36c34c7014ab8d5d889c0ca0e6e56fefd19edda3cc4d7123d89746e4\": rpc error: code = NotFound desc = could not find container \"d4f7f5fd36c34c7014ab8d5d889c0ca0e6e56fefd19edda3cc4d7123d89746e4\": container with ID starting with d4f7f5fd36c34c7014ab8d5d889c0ca0e6e56fefd19edda3cc4d7123d89746e4 not found: ID does not exist" Nov 29 08:56:52 crc kubenswrapper[4795]: I1129 08:56:52.513412 4795 scope.go:117] "RemoveContainer" containerID="6fde62ccd25993cc8906985a83e60e79ff2016ece7b8f24e48c602709f3d1c45" Nov 29 08:56:52 crc kubenswrapper[4795]: E1129 08:56:52.513831 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fde62ccd25993cc8906985a83e60e79ff2016ece7b8f24e48c602709f3d1c45\": container with ID starting with 6fde62ccd25993cc8906985a83e60e79ff2016ece7b8f24e48c602709f3d1c45 not found: ID does not exist" containerID="6fde62ccd25993cc8906985a83e60e79ff2016ece7b8f24e48c602709f3d1c45" Nov 29 08:56:52 crc kubenswrapper[4795]: I1129 08:56:52.513860 4795 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fde62ccd25993cc8906985a83e60e79ff2016ece7b8f24e48c602709f3d1c45"} err="failed to get container status \"6fde62ccd25993cc8906985a83e60e79ff2016ece7b8f24e48c602709f3d1c45\": rpc error: code = NotFound desc = could not find container \"6fde62ccd25993cc8906985a83e60e79ff2016ece7b8f24e48c602709f3d1c45\": container with ID starting with 6fde62ccd25993cc8906985a83e60e79ff2016ece7b8f24e48c602709f3d1c45 not found: ID does not exist" Nov 29 08:56:54 crc kubenswrapper[4795]: I1129 08:56:54.305828 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f2b8d4f-3307-4d0b-b594-c77648c021e7" path="/var/lib/kubelet/pods/8f2b8d4f-3307-4d0b-b594-c77648c021e7/volumes" Nov 29 08:56:55 crc kubenswrapper[4795]: I1129 08:56:55.275379 4795 scope.go:117] "RemoveContainer" containerID="5bb2909e6b594906fc0990f1f08ffa742ff297714422ce25194698d86a68bdf1" Nov 29 08:56:55 crc kubenswrapper[4795]: E1129 08:56:55.275768 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:57:10 crc kubenswrapper[4795]: I1129 08:57:10.275798 4795 scope.go:117] "RemoveContainer" containerID="5bb2909e6b594906fc0990f1f08ffa742ff297714422ce25194698d86a68bdf1" Nov 29 08:57:10 crc kubenswrapper[4795]: E1129 08:57:10.276556 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:57:21 crc kubenswrapper[4795]: I1129 08:57:21.276519 4795 scope.go:117] "RemoveContainer" containerID="5bb2909e6b594906fc0990f1f08ffa742ff297714422ce25194698d86a68bdf1" Nov 29 08:57:21 crc kubenswrapper[4795]: E1129 08:57:21.277550 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:57:34 crc kubenswrapper[4795]: I1129 08:57:34.307227 4795 scope.go:117] "RemoveContainer" containerID="5bb2909e6b594906fc0990f1f08ffa742ff297714422ce25194698d86a68bdf1" Nov 29 08:57:34 crc kubenswrapper[4795]: E1129 08:57:34.309090 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:57:49 crc kubenswrapper[4795]: I1129 08:57:49.275535 4795 scope.go:117] "RemoveContainer" containerID="5bb2909e6b594906fc0990f1f08ffa742ff297714422ce25194698d86a68bdf1" Nov 29 08:57:49 crc kubenswrapper[4795]: E1129 08:57:49.276216 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:58:02 crc kubenswrapper[4795]: I1129 08:58:02.296238 4795 scope.go:117] "RemoveContainer" containerID="5bb2909e6b594906fc0990f1f08ffa742ff297714422ce25194698d86a68bdf1" Nov 29 08:58:02 crc kubenswrapper[4795]: E1129 08:58:02.297225 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:58:15 crc kubenswrapper[4795]: I1129 08:58:15.276088 4795 scope.go:117] "RemoveContainer" containerID="5bb2909e6b594906fc0990f1f08ffa742ff297714422ce25194698d86a68bdf1" Nov 29 08:58:15 crc kubenswrapper[4795]: E1129 08:58:15.277110 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:58:26 crc kubenswrapper[4795]: I1129 08:58:26.278127 4795 scope.go:117] "RemoveContainer" containerID="5bb2909e6b594906fc0990f1f08ffa742ff297714422ce25194698d86a68bdf1" Nov 29 08:58:26 crc kubenswrapper[4795]: E1129 08:58:26.279160 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:58:37 crc kubenswrapper[4795]: I1129 08:58:37.170469 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9szk6"] Nov 29 08:58:37 crc kubenswrapper[4795]: E1129 08:58:37.171846 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f2b8d4f-3307-4d0b-b594-c77648c021e7" containerName="extract-content" Nov 29 08:58:37 crc kubenswrapper[4795]: I1129 08:58:37.171869 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f2b8d4f-3307-4d0b-b594-c77648c021e7" containerName="extract-content" Nov 29 08:58:37 crc kubenswrapper[4795]: E1129 08:58:37.171917 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e73c53a-c858-4ba0-8a00-6322487811be" containerName="extract-utilities" Nov 29 08:58:37 crc kubenswrapper[4795]: I1129 08:58:37.171927 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e73c53a-c858-4ba0-8a00-6322487811be" containerName="extract-utilities" Nov 29 08:58:37 crc kubenswrapper[4795]: E1129 08:58:37.171951 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f2b8d4f-3307-4d0b-b594-c77648c021e7" containerName="registry-server" Nov 29 08:58:37 crc kubenswrapper[4795]: I1129 08:58:37.171959 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f2b8d4f-3307-4d0b-b594-c77648c021e7" containerName="registry-server" Nov 29 08:58:37 crc kubenswrapper[4795]: E1129 08:58:37.171977 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f2b8d4f-3307-4d0b-b594-c77648c021e7" containerName="extract-utilities" Nov 29 08:58:37 crc kubenswrapper[4795]: I1129 08:58:37.171984 4795 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8f2b8d4f-3307-4d0b-b594-c77648c021e7" containerName="extract-utilities" Nov 29 08:58:37 crc kubenswrapper[4795]: E1129 08:58:37.172021 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e73c53a-c858-4ba0-8a00-6322487811be" containerName="extract-content" Nov 29 08:58:37 crc kubenswrapper[4795]: I1129 08:58:37.172029 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e73c53a-c858-4ba0-8a00-6322487811be" containerName="extract-content" Nov 29 08:58:37 crc kubenswrapper[4795]: E1129 08:58:37.172045 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e73c53a-c858-4ba0-8a00-6322487811be" containerName="registry-server" Nov 29 08:58:37 crc kubenswrapper[4795]: I1129 08:58:37.172052 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e73c53a-c858-4ba0-8a00-6322487811be" containerName="registry-server" Nov 29 08:58:37 crc kubenswrapper[4795]: I1129 08:58:37.172384 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f2b8d4f-3307-4d0b-b594-c77648c021e7" containerName="registry-server" Nov 29 08:58:37 crc kubenswrapper[4795]: I1129 08:58:37.172414 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e73c53a-c858-4ba0-8a00-6322487811be" containerName="registry-server" Nov 29 08:58:37 crc kubenswrapper[4795]: I1129 08:58:37.174910 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9szk6" Nov 29 08:58:37 crc kubenswrapper[4795]: I1129 08:58:37.187846 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9szk6"] Nov 29 08:58:37 crc kubenswrapper[4795]: I1129 08:58:37.353786 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5ccc40c-1933-4064-83c2-ff6a48be940b-utilities\") pod \"redhat-marketplace-9szk6\" (UID: \"f5ccc40c-1933-4064-83c2-ff6a48be940b\") " pod="openshift-marketplace/redhat-marketplace-9szk6" Nov 29 08:58:37 crc kubenswrapper[4795]: I1129 08:58:37.354184 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5ccc40c-1933-4064-83c2-ff6a48be940b-catalog-content\") pod \"redhat-marketplace-9szk6\" (UID: \"f5ccc40c-1933-4064-83c2-ff6a48be940b\") " pod="openshift-marketplace/redhat-marketplace-9szk6" Nov 29 08:58:37 crc kubenswrapper[4795]: I1129 08:58:37.355168 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh8fx\" (UniqueName: \"kubernetes.io/projected/f5ccc40c-1933-4064-83c2-ff6a48be940b-kube-api-access-mh8fx\") pod \"redhat-marketplace-9szk6\" (UID: \"f5ccc40c-1933-4064-83c2-ff6a48be940b\") " pod="openshift-marketplace/redhat-marketplace-9szk6" Nov 29 08:58:37 crc kubenswrapper[4795]: I1129 08:58:37.457380 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5ccc40c-1933-4064-83c2-ff6a48be940b-utilities\") pod \"redhat-marketplace-9szk6\" (UID: \"f5ccc40c-1933-4064-83c2-ff6a48be940b\") " pod="openshift-marketplace/redhat-marketplace-9szk6" Nov 29 08:58:37 crc kubenswrapper[4795]: I1129 08:58:37.457431 4795 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5ccc40c-1933-4064-83c2-ff6a48be940b-catalog-content\") pod \"redhat-marketplace-9szk6\" (UID: \"f5ccc40c-1933-4064-83c2-ff6a48be940b\") " pod="openshift-marketplace/redhat-marketplace-9szk6" Nov 29 08:58:37 crc kubenswrapper[4795]: I1129 08:58:37.457629 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh8fx\" (UniqueName: \"kubernetes.io/projected/f5ccc40c-1933-4064-83c2-ff6a48be940b-kube-api-access-mh8fx\") pod \"redhat-marketplace-9szk6\" (UID: \"f5ccc40c-1933-4064-83c2-ff6a48be940b\") " pod="openshift-marketplace/redhat-marketplace-9szk6" Nov 29 08:58:37 crc kubenswrapper[4795]: I1129 08:58:37.458320 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5ccc40c-1933-4064-83c2-ff6a48be940b-catalog-content\") pod \"redhat-marketplace-9szk6\" (UID: \"f5ccc40c-1933-4064-83c2-ff6a48be940b\") " pod="openshift-marketplace/redhat-marketplace-9szk6" Nov 29 08:58:37 crc kubenswrapper[4795]: I1129 08:58:37.458718 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5ccc40c-1933-4064-83c2-ff6a48be940b-utilities\") pod \"redhat-marketplace-9szk6\" (UID: \"f5ccc40c-1933-4064-83c2-ff6a48be940b\") " pod="openshift-marketplace/redhat-marketplace-9szk6" Nov 29 08:58:37 crc kubenswrapper[4795]: I1129 08:58:37.484815 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh8fx\" (UniqueName: \"kubernetes.io/projected/f5ccc40c-1933-4064-83c2-ff6a48be940b-kube-api-access-mh8fx\") pod \"redhat-marketplace-9szk6\" (UID: \"f5ccc40c-1933-4064-83c2-ff6a48be940b\") " pod="openshift-marketplace/redhat-marketplace-9szk6" Nov 29 08:58:37 crc kubenswrapper[4795]: I1129 08:58:37.527725 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9szk6" Nov 29 08:58:38 crc kubenswrapper[4795]: I1129 08:58:38.028730 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9szk6"] Nov 29 08:58:38 crc kubenswrapper[4795]: I1129 08:58:38.630154 4795 generic.go:334] "Generic (PLEG): container finished" podID="f5ccc40c-1933-4064-83c2-ff6a48be940b" containerID="693d89f428a0a674227ed7d66b9efbdc56670a145241a92a23f599f764464610" exitCode=0 Nov 29 08:58:38 crc kubenswrapper[4795]: I1129 08:58:38.630317 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9szk6" event={"ID":"f5ccc40c-1933-4064-83c2-ff6a48be940b","Type":"ContainerDied","Data":"693d89f428a0a674227ed7d66b9efbdc56670a145241a92a23f599f764464610"} Nov 29 08:58:38 crc kubenswrapper[4795]: I1129 08:58:38.630403 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9szk6" event={"ID":"f5ccc40c-1933-4064-83c2-ff6a48be940b","Type":"ContainerStarted","Data":"bfb7a181a4de447a5efe0b969a6fd4f9acdd91d8da359d41f228915da810f811"} Nov 29 08:58:40 crc kubenswrapper[4795]: I1129 08:58:40.653568 4795 generic.go:334] "Generic (PLEG): container finished" podID="f5ccc40c-1933-4064-83c2-ff6a48be940b" containerID="6518d3ea23df9ad901aab2415e946c7221db5abc4f33812bf17effd20efcb0a9" exitCode=0 Nov 29 08:58:40 crc kubenswrapper[4795]: I1129 08:58:40.653667 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9szk6" event={"ID":"f5ccc40c-1933-4064-83c2-ff6a48be940b","Type":"ContainerDied","Data":"6518d3ea23df9ad901aab2415e946c7221db5abc4f33812bf17effd20efcb0a9"} Nov 29 08:58:41 crc kubenswrapper[4795]: I1129 08:58:41.275935 4795 scope.go:117] "RemoveContainer" containerID="5bb2909e6b594906fc0990f1f08ffa742ff297714422ce25194698d86a68bdf1" Nov 29 08:58:41 crc kubenswrapper[4795]: E1129 08:58:41.276536 4795 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:58:41 crc kubenswrapper[4795]: I1129 08:58:41.677258 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9szk6" event={"ID":"f5ccc40c-1933-4064-83c2-ff6a48be940b","Type":"ContainerStarted","Data":"502a191b9b771b8d1f4866b7605225ad285410d72bbf78395e92c2ee74ba5784"} Nov 29 08:58:41 crc kubenswrapper[4795]: I1129 08:58:41.720286 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9szk6" podStartSLOduration=2.190281158 podStartE2EDuration="4.720264333s" podCreationTimestamp="2025-11-29 08:58:37 +0000 UTC" firstStartedPulling="2025-11-29 08:58:38.633023495 +0000 UTC m=+4764.608599285" lastFinishedPulling="2025-11-29 08:58:41.16300666 +0000 UTC m=+4767.138582460" observedRunningTime="2025-11-29 08:58:41.709520099 +0000 UTC m=+4767.685095889" watchObservedRunningTime="2025-11-29 08:58:41.720264333 +0000 UTC m=+4767.695840134" Nov 29 08:58:47 crc kubenswrapper[4795]: I1129 08:58:47.528972 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9szk6" Nov 29 08:58:47 crc kubenswrapper[4795]: I1129 08:58:47.529584 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9szk6" Nov 29 08:58:47 crc kubenswrapper[4795]: I1129 08:58:47.596843 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9szk6" Nov 29 08:58:47 crc kubenswrapper[4795]: I1129 08:58:47.806297 4795 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9szk6" Nov 29 08:58:47 crc kubenswrapper[4795]: I1129 08:58:47.873860 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9szk6"] Nov 29 08:58:49 crc kubenswrapper[4795]: I1129 08:58:49.772668 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9szk6" podUID="f5ccc40c-1933-4064-83c2-ff6a48be940b" containerName="registry-server" containerID="cri-o://502a191b9b771b8d1f4866b7605225ad285410d72bbf78395e92c2ee74ba5784" gracePeriod=2 Nov 29 08:58:50 crc kubenswrapper[4795]: I1129 08:58:50.353390 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9szk6" Nov 29 08:58:50 crc kubenswrapper[4795]: I1129 08:58:50.496936 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5ccc40c-1933-4064-83c2-ff6a48be940b-catalog-content\") pod \"f5ccc40c-1933-4064-83c2-ff6a48be940b\" (UID: \"f5ccc40c-1933-4064-83c2-ff6a48be940b\") " Nov 29 08:58:50 crc kubenswrapper[4795]: I1129 08:58:50.497144 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mh8fx\" (UniqueName: \"kubernetes.io/projected/f5ccc40c-1933-4064-83c2-ff6a48be940b-kube-api-access-mh8fx\") pod \"f5ccc40c-1933-4064-83c2-ff6a48be940b\" (UID: \"f5ccc40c-1933-4064-83c2-ff6a48be940b\") " Nov 29 08:58:50 crc kubenswrapper[4795]: I1129 08:58:50.497237 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5ccc40c-1933-4064-83c2-ff6a48be940b-utilities\") pod \"f5ccc40c-1933-4064-83c2-ff6a48be940b\" (UID: \"f5ccc40c-1933-4064-83c2-ff6a48be940b\") " Nov 29 08:58:50 crc kubenswrapper[4795]: I1129 08:58:50.497991 4795 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5ccc40c-1933-4064-83c2-ff6a48be940b-utilities" (OuterVolumeSpecName: "utilities") pod "f5ccc40c-1933-4064-83c2-ff6a48be940b" (UID: "f5ccc40c-1933-4064-83c2-ff6a48be940b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:58:50 crc kubenswrapper[4795]: I1129 08:58:50.498292 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5ccc40c-1933-4064-83c2-ff6a48be940b-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:58:50 crc kubenswrapper[4795]: I1129 08:58:50.504395 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5ccc40c-1933-4064-83c2-ff6a48be940b-kube-api-access-mh8fx" (OuterVolumeSpecName: "kube-api-access-mh8fx") pod "f5ccc40c-1933-4064-83c2-ff6a48be940b" (UID: "f5ccc40c-1933-4064-83c2-ff6a48be940b"). InnerVolumeSpecName "kube-api-access-mh8fx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:58:50 crc kubenswrapper[4795]: I1129 08:58:50.518820 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5ccc40c-1933-4064-83c2-ff6a48be940b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5ccc40c-1933-4064-83c2-ff6a48be940b" (UID: "f5ccc40c-1933-4064-83c2-ff6a48be940b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:58:50 crc kubenswrapper[4795]: I1129 08:58:50.600228 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mh8fx\" (UniqueName: \"kubernetes.io/projected/f5ccc40c-1933-4064-83c2-ff6a48be940b-kube-api-access-mh8fx\") on node \"crc\" DevicePath \"\"" Nov 29 08:58:50 crc kubenswrapper[4795]: I1129 08:58:50.600267 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5ccc40c-1933-4064-83c2-ff6a48be940b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:58:50 crc kubenswrapper[4795]: I1129 08:58:50.786926 4795 generic.go:334] "Generic (PLEG): container finished" podID="f5ccc40c-1933-4064-83c2-ff6a48be940b" containerID="502a191b9b771b8d1f4866b7605225ad285410d72bbf78395e92c2ee74ba5784" exitCode=0 Nov 29 08:58:50 crc kubenswrapper[4795]: I1129 08:58:50.786968 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9szk6" event={"ID":"f5ccc40c-1933-4064-83c2-ff6a48be940b","Type":"ContainerDied","Data":"502a191b9b771b8d1f4866b7605225ad285410d72bbf78395e92c2ee74ba5784"} Nov 29 08:58:50 crc kubenswrapper[4795]: I1129 08:58:50.786995 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9szk6" event={"ID":"f5ccc40c-1933-4064-83c2-ff6a48be940b","Type":"ContainerDied","Data":"bfb7a181a4de447a5efe0b969a6fd4f9acdd91d8da359d41f228915da810f811"} Nov 29 08:58:50 crc kubenswrapper[4795]: I1129 08:58:50.787013 4795 scope.go:117] "RemoveContainer" containerID="502a191b9b771b8d1f4866b7605225ad285410d72bbf78395e92c2ee74ba5784" Nov 29 08:58:50 crc kubenswrapper[4795]: I1129 08:58:50.787295 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9szk6" Nov 29 08:58:50 crc kubenswrapper[4795]: I1129 08:58:50.833046 4795 scope.go:117] "RemoveContainer" containerID="6518d3ea23df9ad901aab2415e946c7221db5abc4f33812bf17effd20efcb0a9" Nov 29 08:58:50 crc kubenswrapper[4795]: I1129 08:58:50.847708 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9szk6"] Nov 29 08:58:50 crc kubenswrapper[4795]: I1129 08:58:50.860287 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9szk6"] Nov 29 08:58:50 crc kubenswrapper[4795]: I1129 08:58:50.865147 4795 scope.go:117] "RemoveContainer" containerID="693d89f428a0a674227ed7d66b9efbdc56670a145241a92a23f599f764464610" Nov 29 08:58:50 crc kubenswrapper[4795]: I1129 08:58:50.907243 4795 scope.go:117] "RemoveContainer" containerID="502a191b9b771b8d1f4866b7605225ad285410d72bbf78395e92c2ee74ba5784" Nov 29 08:58:50 crc kubenswrapper[4795]: E1129 08:58:50.908136 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"502a191b9b771b8d1f4866b7605225ad285410d72bbf78395e92c2ee74ba5784\": container with ID starting with 502a191b9b771b8d1f4866b7605225ad285410d72bbf78395e92c2ee74ba5784 not found: ID does not exist" containerID="502a191b9b771b8d1f4866b7605225ad285410d72bbf78395e92c2ee74ba5784" Nov 29 08:58:50 crc kubenswrapper[4795]: I1129 08:58:50.908178 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"502a191b9b771b8d1f4866b7605225ad285410d72bbf78395e92c2ee74ba5784"} err="failed to get container status \"502a191b9b771b8d1f4866b7605225ad285410d72bbf78395e92c2ee74ba5784\": rpc error: code = NotFound desc = could not find container \"502a191b9b771b8d1f4866b7605225ad285410d72bbf78395e92c2ee74ba5784\": container with ID starting with 502a191b9b771b8d1f4866b7605225ad285410d72bbf78395e92c2ee74ba5784 not found: 
ID does not exist" Nov 29 08:58:50 crc kubenswrapper[4795]: I1129 08:58:50.908205 4795 scope.go:117] "RemoveContainer" containerID="6518d3ea23df9ad901aab2415e946c7221db5abc4f33812bf17effd20efcb0a9" Nov 29 08:58:50 crc kubenswrapper[4795]: E1129 08:58:50.908701 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6518d3ea23df9ad901aab2415e946c7221db5abc4f33812bf17effd20efcb0a9\": container with ID starting with 6518d3ea23df9ad901aab2415e946c7221db5abc4f33812bf17effd20efcb0a9 not found: ID does not exist" containerID="6518d3ea23df9ad901aab2415e946c7221db5abc4f33812bf17effd20efcb0a9" Nov 29 08:58:50 crc kubenswrapper[4795]: I1129 08:58:50.908727 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6518d3ea23df9ad901aab2415e946c7221db5abc4f33812bf17effd20efcb0a9"} err="failed to get container status \"6518d3ea23df9ad901aab2415e946c7221db5abc4f33812bf17effd20efcb0a9\": rpc error: code = NotFound desc = could not find container \"6518d3ea23df9ad901aab2415e946c7221db5abc4f33812bf17effd20efcb0a9\": container with ID starting with 6518d3ea23df9ad901aab2415e946c7221db5abc4f33812bf17effd20efcb0a9 not found: ID does not exist" Nov 29 08:58:50 crc kubenswrapper[4795]: I1129 08:58:50.908743 4795 scope.go:117] "RemoveContainer" containerID="693d89f428a0a674227ed7d66b9efbdc56670a145241a92a23f599f764464610" Nov 29 08:58:50 crc kubenswrapper[4795]: E1129 08:58:50.909082 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"693d89f428a0a674227ed7d66b9efbdc56670a145241a92a23f599f764464610\": container with ID starting with 693d89f428a0a674227ed7d66b9efbdc56670a145241a92a23f599f764464610 not found: ID does not exist" containerID="693d89f428a0a674227ed7d66b9efbdc56670a145241a92a23f599f764464610" Nov 29 08:58:50 crc kubenswrapper[4795]: I1129 08:58:50.909110 4795 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"693d89f428a0a674227ed7d66b9efbdc56670a145241a92a23f599f764464610"} err="failed to get container status \"693d89f428a0a674227ed7d66b9efbdc56670a145241a92a23f599f764464610\": rpc error: code = NotFound desc = could not find container \"693d89f428a0a674227ed7d66b9efbdc56670a145241a92a23f599f764464610\": container with ID starting with 693d89f428a0a674227ed7d66b9efbdc56670a145241a92a23f599f764464610 not found: ID does not exist" Nov 29 08:58:52 crc kubenswrapper[4795]: I1129 08:58:52.304376 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5ccc40c-1933-4064-83c2-ff6a48be940b" path="/var/lib/kubelet/pods/f5ccc40c-1933-4064-83c2-ff6a48be940b/volumes" Nov 29 08:58:55 crc kubenswrapper[4795]: I1129 08:58:55.277030 4795 scope.go:117] "RemoveContainer" containerID="5bb2909e6b594906fc0990f1f08ffa742ff297714422ce25194698d86a68bdf1" Nov 29 08:58:55 crc kubenswrapper[4795]: E1129 08:58:55.278545 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:59:09 crc kubenswrapper[4795]: I1129 08:59:09.275559 4795 scope.go:117] "RemoveContainer" containerID="5bb2909e6b594906fc0990f1f08ffa742ff297714422ce25194698d86a68bdf1" Nov 29 08:59:09 crc kubenswrapper[4795]: E1129 08:59:09.276202 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:59:24 crc kubenswrapper[4795]: I1129 08:59:24.284841 4795 scope.go:117] "RemoveContainer" containerID="5bb2909e6b594906fc0990f1f08ffa742ff297714422ce25194698d86a68bdf1" Nov 29 08:59:24 crc kubenswrapper[4795]: E1129 08:59:24.288161 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:59:39 crc kubenswrapper[4795]: I1129 08:59:39.276104 4795 scope.go:117] "RemoveContainer" containerID="5bb2909e6b594906fc0990f1f08ffa742ff297714422ce25194698d86a68bdf1" Nov 29 08:59:39 crc kubenswrapper[4795]: E1129 08:59:39.277037 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 08:59:53 crc kubenswrapper[4795]: I1129 08:59:53.276454 4795 scope.go:117] "RemoveContainer" containerID="5bb2909e6b594906fc0990f1f08ffa742ff297714422ce25194698d86a68bdf1" Nov 29 08:59:53 crc kubenswrapper[4795]: E1129 08:59:53.277344 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:00:00 crc kubenswrapper[4795]: I1129 09:00:00.236024 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406780-lzb48"] Nov 29 09:00:00 crc kubenswrapper[4795]: E1129 09:00:00.237926 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5ccc40c-1933-4064-83c2-ff6a48be940b" containerName="registry-server" Nov 29 09:00:00 crc kubenswrapper[4795]: I1129 09:00:00.237947 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5ccc40c-1933-4064-83c2-ff6a48be940b" containerName="registry-server" Nov 29 09:00:00 crc kubenswrapper[4795]: E1129 09:00:00.237985 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5ccc40c-1933-4064-83c2-ff6a48be940b" containerName="extract-utilities" Nov 29 09:00:00 crc kubenswrapper[4795]: I1129 09:00:00.237994 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5ccc40c-1933-4064-83c2-ff6a48be940b" containerName="extract-utilities" Nov 29 09:00:00 crc kubenswrapper[4795]: E1129 09:00:00.238019 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5ccc40c-1933-4064-83c2-ff6a48be940b" containerName="extract-content" Nov 29 09:00:00 crc kubenswrapper[4795]: I1129 09:00:00.238028 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5ccc40c-1933-4064-83c2-ff6a48be940b" containerName="extract-content" Nov 29 09:00:00 crc kubenswrapper[4795]: I1129 09:00:00.238340 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5ccc40c-1933-4064-83c2-ff6a48be940b" containerName="registry-server" Nov 29 09:00:00 crc kubenswrapper[4795]: I1129 09:00:00.239543 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406780-lzb48" Nov 29 09:00:00 crc kubenswrapper[4795]: I1129 09:00:00.241377 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 29 09:00:00 crc kubenswrapper[4795]: I1129 09:00:00.242943 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 29 09:00:00 crc kubenswrapper[4795]: I1129 09:00:00.247494 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406780-lzb48"] Nov 29 09:00:00 crc kubenswrapper[4795]: I1129 09:00:00.274435 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4375e703-065e-4402-a428-bac0bf6a3339-secret-volume\") pod \"collect-profiles-29406780-lzb48\" (UID: \"4375e703-065e-4402-a428-bac0bf6a3339\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406780-lzb48" Nov 29 09:00:00 crc kubenswrapper[4795]: I1129 09:00:00.274569 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4375e703-065e-4402-a428-bac0bf6a3339-config-volume\") pod \"collect-profiles-29406780-lzb48\" (UID: \"4375e703-065e-4402-a428-bac0bf6a3339\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406780-lzb48" Nov 29 09:00:00 crc kubenswrapper[4795]: I1129 09:00:00.274644 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gcxw\" (UniqueName: \"kubernetes.io/projected/4375e703-065e-4402-a428-bac0bf6a3339-kube-api-access-9gcxw\") pod \"collect-profiles-29406780-lzb48\" (UID: \"4375e703-065e-4402-a428-bac0bf6a3339\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29406780-lzb48" Nov 29 09:00:00 crc kubenswrapper[4795]: I1129 09:00:00.377078 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4375e703-065e-4402-a428-bac0bf6a3339-config-volume\") pod \"collect-profiles-29406780-lzb48\" (UID: \"4375e703-065e-4402-a428-bac0bf6a3339\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406780-lzb48" Nov 29 09:00:00 crc kubenswrapper[4795]: I1129 09:00:00.377142 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gcxw\" (UniqueName: \"kubernetes.io/projected/4375e703-065e-4402-a428-bac0bf6a3339-kube-api-access-9gcxw\") pod \"collect-profiles-29406780-lzb48\" (UID: \"4375e703-065e-4402-a428-bac0bf6a3339\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406780-lzb48" Nov 29 09:00:00 crc kubenswrapper[4795]: I1129 09:00:00.377424 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4375e703-065e-4402-a428-bac0bf6a3339-secret-volume\") pod \"collect-profiles-29406780-lzb48\" (UID: \"4375e703-065e-4402-a428-bac0bf6a3339\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406780-lzb48" Nov 29 09:00:00 crc kubenswrapper[4795]: I1129 09:00:00.378049 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4375e703-065e-4402-a428-bac0bf6a3339-config-volume\") pod \"collect-profiles-29406780-lzb48\" (UID: \"4375e703-065e-4402-a428-bac0bf6a3339\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406780-lzb48" Nov 29 09:00:00 crc kubenswrapper[4795]: I1129 09:00:00.387611 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/4375e703-065e-4402-a428-bac0bf6a3339-secret-volume\") pod \"collect-profiles-29406780-lzb48\" (UID: \"4375e703-065e-4402-a428-bac0bf6a3339\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406780-lzb48" Nov 29 09:00:00 crc kubenswrapper[4795]: I1129 09:00:00.393634 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gcxw\" (UniqueName: \"kubernetes.io/projected/4375e703-065e-4402-a428-bac0bf6a3339-kube-api-access-9gcxw\") pod \"collect-profiles-29406780-lzb48\" (UID: \"4375e703-065e-4402-a428-bac0bf6a3339\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406780-lzb48" Nov 29 09:00:00 crc kubenswrapper[4795]: I1129 09:00:00.576502 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406780-lzb48" Nov 29 09:00:01 crc kubenswrapper[4795]: I1129 09:00:01.088190 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406780-lzb48"] Nov 29 09:00:01 crc kubenswrapper[4795]: I1129 09:00:01.663294 4795 generic.go:334] "Generic (PLEG): container finished" podID="4375e703-065e-4402-a428-bac0bf6a3339" containerID="9104c0b16de50726be82830322594823409f70e2345913e658bad8fce96a99d3" exitCode=0 Nov 29 09:00:01 crc kubenswrapper[4795]: I1129 09:00:01.663540 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406780-lzb48" event={"ID":"4375e703-065e-4402-a428-bac0bf6a3339","Type":"ContainerDied","Data":"9104c0b16de50726be82830322594823409f70e2345913e658bad8fce96a99d3"} Nov 29 09:00:01 crc kubenswrapper[4795]: I1129 09:00:01.663664 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406780-lzb48" 
event={"ID":"4375e703-065e-4402-a428-bac0bf6a3339","Type":"ContainerStarted","Data":"3050aa2ae6c09b5a389bb79fe8c8d338b5b1c36b730b68d46becba39bf04792b"} Nov 29 09:00:03 crc kubenswrapper[4795]: I1129 09:00:03.289862 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406780-lzb48" Nov 29 09:00:03 crc kubenswrapper[4795]: I1129 09:00:03.435982 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4375e703-065e-4402-a428-bac0bf6a3339-secret-volume\") pod \"4375e703-065e-4402-a428-bac0bf6a3339\" (UID: \"4375e703-065e-4402-a428-bac0bf6a3339\") " Nov 29 09:00:03 crc kubenswrapper[4795]: I1129 09:00:03.436049 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gcxw\" (UniqueName: \"kubernetes.io/projected/4375e703-065e-4402-a428-bac0bf6a3339-kube-api-access-9gcxw\") pod \"4375e703-065e-4402-a428-bac0bf6a3339\" (UID: \"4375e703-065e-4402-a428-bac0bf6a3339\") " Nov 29 09:00:03 crc kubenswrapper[4795]: I1129 09:00:03.436826 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4375e703-065e-4402-a428-bac0bf6a3339-config-volume\") pod \"4375e703-065e-4402-a428-bac0bf6a3339\" (UID: \"4375e703-065e-4402-a428-bac0bf6a3339\") " Nov 29 09:00:03 crc kubenswrapper[4795]: I1129 09:00:03.439629 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4375e703-065e-4402-a428-bac0bf6a3339-config-volume" (OuterVolumeSpecName: "config-volume") pod "4375e703-065e-4402-a428-bac0bf6a3339" (UID: "4375e703-065e-4402-a428-bac0bf6a3339"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 09:00:03 crc kubenswrapper[4795]: I1129 09:00:03.443250 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4375e703-065e-4402-a428-bac0bf6a3339-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4375e703-065e-4402-a428-bac0bf6a3339" (UID: "4375e703-065e-4402-a428-bac0bf6a3339"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 09:00:03 crc kubenswrapper[4795]: I1129 09:00:03.443429 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4375e703-065e-4402-a428-bac0bf6a3339-kube-api-access-9gcxw" (OuterVolumeSpecName: "kube-api-access-9gcxw") pod "4375e703-065e-4402-a428-bac0bf6a3339" (UID: "4375e703-065e-4402-a428-bac0bf6a3339"). InnerVolumeSpecName "kube-api-access-9gcxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 09:00:03 crc kubenswrapper[4795]: I1129 09:00:03.539701 4795 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4375e703-065e-4402-a428-bac0bf6a3339-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 29 09:00:03 crc kubenswrapper[4795]: I1129 09:00:03.539740 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gcxw\" (UniqueName: \"kubernetes.io/projected/4375e703-065e-4402-a428-bac0bf6a3339-kube-api-access-9gcxw\") on node \"crc\" DevicePath \"\"" Nov 29 09:00:03 crc kubenswrapper[4795]: I1129 09:00:03.539773 4795 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4375e703-065e-4402-a428-bac0bf6a3339-config-volume\") on node \"crc\" DevicePath \"\"" Nov 29 09:00:03 crc kubenswrapper[4795]: I1129 09:00:03.686012 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406780-lzb48" 
event={"ID":"4375e703-065e-4402-a428-bac0bf6a3339","Type":"ContainerDied","Data":"3050aa2ae6c09b5a389bb79fe8c8d338b5b1c36b730b68d46becba39bf04792b"} Nov 29 09:00:03 crc kubenswrapper[4795]: I1129 09:00:03.686077 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3050aa2ae6c09b5a389bb79fe8c8d338b5b1c36b730b68d46becba39bf04792b" Nov 29 09:00:03 crc kubenswrapper[4795]: I1129 09:00:03.686109 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406780-lzb48" Nov 29 09:00:04 crc kubenswrapper[4795]: I1129 09:00:04.303495 4795 scope.go:117] "RemoveContainer" containerID="5bb2909e6b594906fc0990f1f08ffa742ff297714422ce25194698d86a68bdf1" Nov 29 09:00:04 crc kubenswrapper[4795]: E1129 09:00:04.305525 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:00:04 crc kubenswrapper[4795]: I1129 09:00:04.376175 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406735-mf8f9"] Nov 29 09:00:04 crc kubenswrapper[4795]: I1129 09:00:04.387503 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406735-mf8f9"] Nov 29 09:00:06 crc kubenswrapper[4795]: I1129 09:00:06.294802 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="075c29f6-7d79-44c2-9cbd-9ac3e3460f6e" path="/var/lib/kubelet/pods/075c29f6-7d79-44c2-9cbd-9ac3e3460f6e/volumes" Nov 29 09:00:15 crc kubenswrapper[4795]: I1129 09:00:15.276934 4795 scope.go:117] "RemoveContainer" 
containerID="5bb2909e6b594906fc0990f1f08ffa742ff297714422ce25194698d86a68bdf1" Nov 29 09:00:15 crc kubenswrapper[4795]: E1129 09:00:15.280134 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:00:26 crc kubenswrapper[4795]: I1129 09:00:26.276417 4795 scope.go:117] "RemoveContainer" containerID="5bb2909e6b594906fc0990f1f08ffa742ff297714422ce25194698d86a68bdf1" Nov 29 09:00:26 crc kubenswrapper[4795]: E1129 09:00:26.277357 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:00:34 crc kubenswrapper[4795]: I1129 09:00:34.196322 4795 scope.go:117] "RemoveContainer" containerID="8b67c7f215ca55272f6d267461cd58ceeaf7d51dc51945e27110925c48476133" Nov 29 09:00:39 crc kubenswrapper[4795]: I1129 09:00:39.277617 4795 scope.go:117] "RemoveContainer" containerID="5bb2909e6b594906fc0990f1f08ffa742ff297714422ce25194698d86a68bdf1" Nov 29 09:00:39 crc kubenswrapper[4795]: E1129 09:00:39.278523 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:00:52 crc kubenswrapper[4795]: I1129 09:00:52.276326 4795 scope.go:117] "RemoveContainer" containerID="5bb2909e6b594906fc0990f1f08ffa742ff297714422ce25194698d86a68bdf1" Nov 29 09:00:52 crc kubenswrapper[4795]: E1129 09:00:52.277446 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:01:00 crc kubenswrapper[4795]: I1129 09:01:00.160548 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29406781-9xlk7"] Nov 29 09:01:00 crc kubenswrapper[4795]: E1129 09:01:00.161548 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4375e703-065e-4402-a428-bac0bf6a3339" containerName="collect-profiles" Nov 29 09:01:00 crc kubenswrapper[4795]: I1129 09:01:00.161563 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="4375e703-065e-4402-a428-bac0bf6a3339" containerName="collect-profiles" Nov 29 09:01:00 crc kubenswrapper[4795]: I1129 09:01:00.161922 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="4375e703-065e-4402-a428-bac0bf6a3339" containerName="collect-profiles" Nov 29 09:01:00 crc kubenswrapper[4795]: I1129 09:01:00.162699 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29406781-9xlk7" Nov 29 09:01:00 crc kubenswrapper[4795]: I1129 09:01:00.244234 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29406781-9xlk7"] Nov 29 09:01:00 crc kubenswrapper[4795]: I1129 09:01:00.331408 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t8m2\" (UniqueName: \"kubernetes.io/projected/3eaf8e63-a9e1-47a4-b093-d1f65a80c4db-kube-api-access-4t8m2\") pod \"keystone-cron-29406781-9xlk7\" (UID: \"3eaf8e63-a9e1-47a4-b093-d1f65a80c4db\") " pod="openstack/keystone-cron-29406781-9xlk7" Nov 29 09:01:00 crc kubenswrapper[4795]: I1129 09:01:00.331506 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3eaf8e63-a9e1-47a4-b093-d1f65a80c4db-fernet-keys\") pod \"keystone-cron-29406781-9xlk7\" (UID: \"3eaf8e63-a9e1-47a4-b093-d1f65a80c4db\") " pod="openstack/keystone-cron-29406781-9xlk7" Nov 29 09:01:00 crc kubenswrapper[4795]: I1129 09:01:00.331585 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eaf8e63-a9e1-47a4-b093-d1f65a80c4db-combined-ca-bundle\") pod \"keystone-cron-29406781-9xlk7\" (UID: \"3eaf8e63-a9e1-47a4-b093-d1f65a80c4db\") " pod="openstack/keystone-cron-29406781-9xlk7" Nov 29 09:01:00 crc kubenswrapper[4795]: I1129 09:01:00.332136 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eaf8e63-a9e1-47a4-b093-d1f65a80c4db-config-data\") pod \"keystone-cron-29406781-9xlk7\" (UID: \"3eaf8e63-a9e1-47a4-b093-d1f65a80c4db\") " pod="openstack/keystone-cron-29406781-9xlk7" Nov 29 09:01:00 crc kubenswrapper[4795]: I1129 09:01:00.435091 4795 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-4t8m2\" (UniqueName: \"kubernetes.io/projected/3eaf8e63-a9e1-47a4-b093-d1f65a80c4db-kube-api-access-4t8m2\") pod \"keystone-cron-29406781-9xlk7\" (UID: \"3eaf8e63-a9e1-47a4-b093-d1f65a80c4db\") " pod="openstack/keystone-cron-29406781-9xlk7" Nov 29 09:01:00 crc kubenswrapper[4795]: I1129 09:01:00.435285 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3eaf8e63-a9e1-47a4-b093-d1f65a80c4db-fernet-keys\") pod \"keystone-cron-29406781-9xlk7\" (UID: \"3eaf8e63-a9e1-47a4-b093-d1f65a80c4db\") " pod="openstack/keystone-cron-29406781-9xlk7" Nov 29 09:01:00 crc kubenswrapper[4795]: I1129 09:01:00.435908 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eaf8e63-a9e1-47a4-b093-d1f65a80c4db-combined-ca-bundle\") pod \"keystone-cron-29406781-9xlk7\" (UID: \"3eaf8e63-a9e1-47a4-b093-d1f65a80c4db\") " pod="openstack/keystone-cron-29406781-9xlk7" Nov 29 09:01:00 crc kubenswrapper[4795]: I1129 09:01:00.436426 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eaf8e63-a9e1-47a4-b093-d1f65a80c4db-config-data\") pod \"keystone-cron-29406781-9xlk7\" (UID: \"3eaf8e63-a9e1-47a4-b093-d1f65a80c4db\") " pod="openstack/keystone-cron-29406781-9xlk7" Nov 29 09:01:00 crc kubenswrapper[4795]: I1129 09:01:00.447993 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eaf8e63-a9e1-47a4-b093-d1f65a80c4db-config-data\") pod \"keystone-cron-29406781-9xlk7\" (UID: \"3eaf8e63-a9e1-47a4-b093-d1f65a80c4db\") " pod="openstack/keystone-cron-29406781-9xlk7" Nov 29 09:01:00 crc kubenswrapper[4795]: I1129 09:01:00.448711 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/3eaf8e63-a9e1-47a4-b093-d1f65a80c4db-fernet-keys\") pod \"keystone-cron-29406781-9xlk7\" (UID: \"3eaf8e63-a9e1-47a4-b093-d1f65a80c4db\") " pod="openstack/keystone-cron-29406781-9xlk7" Nov 29 09:01:00 crc kubenswrapper[4795]: I1129 09:01:00.450573 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eaf8e63-a9e1-47a4-b093-d1f65a80c4db-combined-ca-bundle\") pod \"keystone-cron-29406781-9xlk7\" (UID: \"3eaf8e63-a9e1-47a4-b093-d1f65a80c4db\") " pod="openstack/keystone-cron-29406781-9xlk7" Nov 29 09:01:00 crc kubenswrapper[4795]: I1129 09:01:00.470657 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t8m2\" (UniqueName: \"kubernetes.io/projected/3eaf8e63-a9e1-47a4-b093-d1f65a80c4db-kube-api-access-4t8m2\") pod \"keystone-cron-29406781-9xlk7\" (UID: \"3eaf8e63-a9e1-47a4-b093-d1f65a80c4db\") " pod="openstack/keystone-cron-29406781-9xlk7" Nov 29 09:01:00 crc kubenswrapper[4795]: I1129 09:01:00.546084 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29406781-9xlk7" Nov 29 09:01:01 crc kubenswrapper[4795]: I1129 09:01:01.074065 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29406781-9xlk7"] Nov 29 09:01:02 crc kubenswrapper[4795]: I1129 09:01:02.368685 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29406781-9xlk7" event={"ID":"3eaf8e63-a9e1-47a4-b093-d1f65a80c4db","Type":"ContainerStarted","Data":"d40e7aa33d3ac14e55bfb25b0c7aa8950c594bafc391faee4b1f18970d52a48f"} Nov 29 09:01:02 crc kubenswrapper[4795]: I1129 09:01:02.370261 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29406781-9xlk7" event={"ID":"3eaf8e63-a9e1-47a4-b093-d1f65a80c4db","Type":"ContainerStarted","Data":"5f1c6bded5ce43b01bb0b5f47bae935105e54db124169f0915212b20445fb4bc"} Nov 29 09:01:04 crc kubenswrapper[4795]: I1129 09:01:04.291781 4795 scope.go:117] "RemoveContainer" containerID="5bb2909e6b594906fc0990f1f08ffa742ff297714422ce25194698d86a68bdf1" Nov 29 09:01:04 crc kubenswrapper[4795]: E1129 09:01:04.294509 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:01:04 crc kubenswrapper[4795]: I1129 09:01:04.392747 4795 generic.go:334] "Generic (PLEG): container finished" podID="3eaf8e63-a9e1-47a4-b093-d1f65a80c4db" containerID="d40e7aa33d3ac14e55bfb25b0c7aa8950c594bafc391faee4b1f18970d52a48f" exitCode=0 Nov 29 09:01:04 crc kubenswrapper[4795]: I1129 09:01:04.392800 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29406781-9xlk7" 
event={"ID":"3eaf8e63-a9e1-47a4-b093-d1f65a80c4db","Type":"ContainerDied","Data":"d40e7aa33d3ac14e55bfb25b0c7aa8950c594bafc391faee4b1f18970d52a48f"} Nov 29 09:01:05 crc kubenswrapper[4795]: I1129 09:01:05.839211 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29406781-9xlk7" Nov 29 09:01:05 crc kubenswrapper[4795]: I1129 09:01:05.893982 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4t8m2\" (UniqueName: \"kubernetes.io/projected/3eaf8e63-a9e1-47a4-b093-d1f65a80c4db-kube-api-access-4t8m2\") pod \"3eaf8e63-a9e1-47a4-b093-d1f65a80c4db\" (UID: \"3eaf8e63-a9e1-47a4-b093-d1f65a80c4db\") " Nov 29 09:01:05 crc kubenswrapper[4795]: I1129 09:01:05.894156 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3eaf8e63-a9e1-47a4-b093-d1f65a80c4db-fernet-keys\") pod \"3eaf8e63-a9e1-47a4-b093-d1f65a80c4db\" (UID: \"3eaf8e63-a9e1-47a4-b093-d1f65a80c4db\") " Nov 29 09:01:05 crc kubenswrapper[4795]: I1129 09:01:05.894433 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eaf8e63-a9e1-47a4-b093-d1f65a80c4db-config-data\") pod \"3eaf8e63-a9e1-47a4-b093-d1f65a80c4db\" (UID: \"3eaf8e63-a9e1-47a4-b093-d1f65a80c4db\") " Nov 29 09:01:05 crc kubenswrapper[4795]: I1129 09:01:05.894545 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eaf8e63-a9e1-47a4-b093-d1f65a80c4db-combined-ca-bundle\") pod \"3eaf8e63-a9e1-47a4-b093-d1f65a80c4db\" (UID: \"3eaf8e63-a9e1-47a4-b093-d1f65a80c4db\") " Nov 29 09:01:05 crc kubenswrapper[4795]: I1129 09:01:05.899567 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3eaf8e63-a9e1-47a4-b093-d1f65a80c4db-fernet-keys" (OuterVolumeSpecName: 
"fernet-keys") pod "3eaf8e63-a9e1-47a4-b093-d1f65a80c4db" (UID: "3eaf8e63-a9e1-47a4-b093-d1f65a80c4db"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 09:01:05 crc kubenswrapper[4795]: I1129 09:01:05.905011 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3eaf8e63-a9e1-47a4-b093-d1f65a80c4db-kube-api-access-4t8m2" (OuterVolumeSpecName: "kube-api-access-4t8m2") pod "3eaf8e63-a9e1-47a4-b093-d1f65a80c4db" (UID: "3eaf8e63-a9e1-47a4-b093-d1f65a80c4db"). InnerVolumeSpecName "kube-api-access-4t8m2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 09:01:05 crc kubenswrapper[4795]: I1129 09:01:05.926850 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3eaf8e63-a9e1-47a4-b093-d1f65a80c4db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3eaf8e63-a9e1-47a4-b093-d1f65a80c4db" (UID: "3eaf8e63-a9e1-47a4-b093-d1f65a80c4db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 09:01:05 crc kubenswrapper[4795]: I1129 09:01:05.970196 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3eaf8e63-a9e1-47a4-b093-d1f65a80c4db-config-data" (OuterVolumeSpecName: "config-data") pod "3eaf8e63-a9e1-47a4-b093-d1f65a80c4db" (UID: "3eaf8e63-a9e1-47a4-b093-d1f65a80c4db"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 09:01:05 crc kubenswrapper[4795]: I1129 09:01:05.997715 4795 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eaf8e63-a9e1-47a4-b093-d1f65a80c4db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 09:01:05 crc kubenswrapper[4795]: I1129 09:01:05.998022 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4t8m2\" (UniqueName: \"kubernetes.io/projected/3eaf8e63-a9e1-47a4-b093-d1f65a80c4db-kube-api-access-4t8m2\") on node \"crc\" DevicePath \"\"" Nov 29 09:01:05 crc kubenswrapper[4795]: I1129 09:01:05.998041 4795 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3eaf8e63-a9e1-47a4-b093-d1f65a80c4db-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 29 09:01:05 crc kubenswrapper[4795]: I1129 09:01:05.998056 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eaf8e63-a9e1-47a4-b093-d1f65a80c4db-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 09:01:06 crc kubenswrapper[4795]: I1129 09:01:06.432322 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29406781-9xlk7" event={"ID":"3eaf8e63-a9e1-47a4-b093-d1f65a80c4db","Type":"ContainerDied","Data":"5f1c6bded5ce43b01bb0b5f47bae935105e54db124169f0915212b20445fb4bc"} Nov 29 09:01:06 crc kubenswrapper[4795]: I1129 09:01:06.432369 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f1c6bded5ce43b01bb0b5f47bae935105e54db124169f0915212b20445fb4bc" Nov 29 09:01:06 crc kubenswrapper[4795]: I1129 09:01:06.432826 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29406781-9xlk7" Nov 29 09:01:17 crc kubenswrapper[4795]: I1129 09:01:17.276532 4795 scope.go:117] "RemoveContainer" containerID="5bb2909e6b594906fc0990f1f08ffa742ff297714422ce25194698d86a68bdf1" Nov 29 09:01:17 crc kubenswrapper[4795]: E1129 09:01:17.277273 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:01:32 crc kubenswrapper[4795]: I1129 09:01:32.275578 4795 scope.go:117] "RemoveContainer" containerID="5bb2909e6b594906fc0990f1f08ffa742ff297714422ce25194698d86a68bdf1" Nov 29 09:01:32 crc kubenswrapper[4795]: E1129 09:01:32.276443 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:01:44 crc kubenswrapper[4795]: I1129 09:01:44.277028 4795 scope.go:117] "RemoveContainer" containerID="5bb2909e6b594906fc0990f1f08ffa742ff297714422ce25194698d86a68bdf1" Nov 29 09:01:45 crc kubenswrapper[4795]: I1129 09:01:45.236404 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerStarted","Data":"bae7a18bd4142aa219365a49652a2ec926c5c7d711430a0db99f082fa2c943e8"} Nov 29 09:04:11 crc kubenswrapper[4795]: I1129 09:04:11.941076 4795 
patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 09:04:11 crc kubenswrapper[4795]: I1129 09:04:11.941789 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 09:04:41 crc kubenswrapper[4795]: I1129 09:04:41.941667 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 09:04:41 crc kubenswrapper[4795]: I1129 09:04:41.942295 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 09:05:11 crc kubenswrapper[4795]: I1129 09:05:11.941573 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 09:05:11 crc kubenswrapper[4795]: I1129 09:05:11.942159 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" 
podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 09:05:11 crc kubenswrapper[4795]: I1129 09:05:11.942209 4795 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" Nov 29 09:05:11 crc kubenswrapper[4795]: I1129 09:05:11.943210 4795 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bae7a18bd4142aa219365a49652a2ec926c5c7d711430a0db99f082fa2c943e8"} pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 09:05:11 crc kubenswrapper[4795]: I1129 09:05:11.943287 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" containerID="cri-o://bae7a18bd4142aa219365a49652a2ec926c5c7d711430a0db99f082fa2c943e8" gracePeriod=600 Nov 29 09:05:12 crc kubenswrapper[4795]: I1129 09:05:12.601176 4795 generic.go:334] "Generic (PLEG): container finished" podID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerID="bae7a18bd4142aa219365a49652a2ec926c5c7d711430a0db99f082fa2c943e8" exitCode=0 Nov 29 09:05:12 crc kubenswrapper[4795]: I1129 09:05:12.601236 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerDied","Data":"bae7a18bd4142aa219365a49652a2ec926c5c7d711430a0db99f082fa2c943e8"} Nov 29 09:05:12 crc kubenswrapper[4795]: I1129 09:05:12.601272 4795 scope.go:117] "RemoveContainer" 
containerID="5bb2909e6b594906fc0990f1f08ffa742ff297714422ce25194698d86a68bdf1" Nov 29 09:05:13 crc kubenswrapper[4795]: I1129 09:05:13.663200 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerStarted","Data":"218724df802519bbb881bb31211f4f6c81a4822a96523ad04fdb2384ff35b7c7"} Nov 29 09:05:19 crc kubenswrapper[4795]: I1129 09:05:19.732349 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Nov 29 09:05:19 crc kubenswrapper[4795]: E1129 09:05:19.733772 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eaf8e63-a9e1-47a4-b093-d1f65a80c4db" containerName="keystone-cron" Nov 29 09:05:19 crc kubenswrapper[4795]: I1129 09:05:19.733792 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eaf8e63-a9e1-47a4-b093-d1f65a80c4db" containerName="keystone-cron" Nov 29 09:05:19 crc kubenswrapper[4795]: I1129 09:05:19.734121 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eaf8e63-a9e1-47a4-b093-d1f65a80c4db" containerName="keystone-cron" Nov 29 09:05:19 crc kubenswrapper[4795]: I1129 09:05:19.735146 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 29 09:05:19 crc kubenswrapper[4795]: I1129 09:05:19.738092 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Nov 29 09:05:19 crc kubenswrapper[4795]: I1129 09:05:19.738442 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-wk5x5" Nov 29 09:05:19 crc kubenswrapper[4795]: I1129 09:05:19.738509 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Nov 29 09:05:19 crc kubenswrapper[4795]: I1129 09:05:19.739463 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 29 09:05:19 crc kubenswrapper[4795]: I1129 09:05:19.746984 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 29 09:05:19 crc kubenswrapper[4795]: I1129 09:05:19.796681 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/95552069-4919-43f3-88d5-2c40ff4c0836-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"95552069-4919-43f3-88d5-2c40ff4c0836\") " pod="openstack/tempest-tests-tempest" Nov 29 09:05:19 crc kubenswrapper[4795]: I1129 09:05:19.796821 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/95552069-4919-43f3-88d5-2c40ff4c0836-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"95552069-4919-43f3-88d5-2c40ff4c0836\") " pod="openstack/tempest-tests-tempest" Nov 29 09:05:19 crc kubenswrapper[4795]: I1129 09:05:19.796840 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/95552069-4919-43f3-88d5-2c40ff4c0836-config-data\") pod \"tempest-tests-tempest\" (UID: \"95552069-4919-43f3-88d5-2c40ff4c0836\") " pod="openstack/tempest-tests-tempest" Nov 29 09:05:19 crc kubenswrapper[4795]: I1129 09:05:19.899171 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"95552069-4919-43f3-88d5-2c40ff4c0836\") " pod="openstack/tempest-tests-tempest" Nov 29 09:05:19 crc kubenswrapper[4795]: I1129 09:05:19.899249 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/95552069-4919-43f3-88d5-2c40ff4c0836-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"95552069-4919-43f3-88d5-2c40ff4c0836\") " pod="openstack/tempest-tests-tempest" Nov 29 09:05:19 crc kubenswrapper[4795]: I1129 09:05:19.899302 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/95552069-4919-43f3-88d5-2c40ff4c0836-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"95552069-4919-43f3-88d5-2c40ff4c0836\") " pod="openstack/tempest-tests-tempest" Nov 29 09:05:19 crc kubenswrapper[4795]: I1129 09:05:19.899360 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/95552069-4919-43f3-88d5-2c40ff4c0836-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"95552069-4919-43f3-88d5-2c40ff4c0836\") " pod="openstack/tempest-tests-tempest" Nov 29 09:05:19 crc kubenswrapper[4795]: I1129 09:05:19.899377 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/95552069-4919-43f3-88d5-2c40ff4c0836-config-data\") pod \"tempest-tests-tempest\" (UID: \"95552069-4919-43f3-88d5-2c40ff4c0836\") " pod="openstack/tempest-tests-tempest" Nov 29 09:05:19 crc kubenswrapper[4795]: I1129 09:05:19.899572 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbc2r\" (UniqueName: \"kubernetes.io/projected/95552069-4919-43f3-88d5-2c40ff4c0836-kube-api-access-wbc2r\") pod \"tempest-tests-tempest\" (UID: \"95552069-4919-43f3-88d5-2c40ff4c0836\") " pod="openstack/tempest-tests-tempest" Nov 29 09:05:19 crc kubenswrapper[4795]: I1129 09:05:19.899636 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/95552069-4919-43f3-88d5-2c40ff4c0836-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"95552069-4919-43f3-88d5-2c40ff4c0836\") " pod="openstack/tempest-tests-tempest" Nov 29 09:05:19 crc kubenswrapper[4795]: I1129 09:05:19.899682 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/95552069-4919-43f3-88d5-2c40ff4c0836-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"95552069-4919-43f3-88d5-2c40ff4c0836\") " pod="openstack/tempest-tests-tempest" Nov 29 09:05:19 crc kubenswrapper[4795]: I1129 09:05:19.900170 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/95552069-4919-43f3-88d5-2c40ff4c0836-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"95552069-4919-43f3-88d5-2c40ff4c0836\") " pod="openstack/tempest-tests-tempest" Nov 29 09:05:19 crc kubenswrapper[4795]: I1129 09:05:19.900462 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/95552069-4919-43f3-88d5-2c40ff4c0836-config-data\") pod \"tempest-tests-tempest\" (UID: \"95552069-4919-43f3-88d5-2c40ff4c0836\") " pod="openstack/tempest-tests-tempest" Nov 29 09:05:19 crc kubenswrapper[4795]: I1129 09:05:19.900496 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/95552069-4919-43f3-88d5-2c40ff4c0836-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"95552069-4919-43f3-88d5-2c40ff4c0836\") " pod="openstack/tempest-tests-tempest" Nov 29 09:05:19 crc kubenswrapper[4795]: I1129 09:05:19.909959 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/95552069-4919-43f3-88d5-2c40ff4c0836-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"95552069-4919-43f3-88d5-2c40ff4c0836\") " pod="openstack/tempest-tests-tempest" Nov 29 09:05:20 crc kubenswrapper[4795]: I1129 09:05:20.001895 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/95552069-4919-43f3-88d5-2c40ff4c0836-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"95552069-4919-43f3-88d5-2c40ff4c0836\") " pod="openstack/tempest-tests-tempest" Nov 29 09:05:20 crc kubenswrapper[4795]: I1129 09:05:20.002041 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbc2r\" (UniqueName: \"kubernetes.io/projected/95552069-4919-43f3-88d5-2c40ff4c0836-kube-api-access-wbc2r\") pod \"tempest-tests-tempest\" (UID: \"95552069-4919-43f3-88d5-2c40ff4c0836\") " pod="openstack/tempest-tests-tempest" Nov 29 09:05:20 crc kubenswrapper[4795]: I1129 09:05:20.002070 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/95552069-4919-43f3-88d5-2c40ff4c0836-ca-certs\") pod 
\"tempest-tests-tempest\" (UID: \"95552069-4919-43f3-88d5-2c40ff4c0836\") " pod="openstack/tempest-tests-tempest" Nov 29 09:05:20 crc kubenswrapper[4795]: I1129 09:05:20.002093 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/95552069-4919-43f3-88d5-2c40ff4c0836-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"95552069-4919-43f3-88d5-2c40ff4c0836\") " pod="openstack/tempest-tests-tempest" Nov 29 09:05:20 crc kubenswrapper[4795]: I1129 09:05:20.002235 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/95552069-4919-43f3-88d5-2c40ff4c0836-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"95552069-4919-43f3-88d5-2c40ff4c0836\") " pod="openstack/tempest-tests-tempest" Nov 29 09:05:20 crc kubenswrapper[4795]: I1129 09:05:20.002421 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"95552069-4919-43f3-88d5-2c40ff4c0836\") " pod="openstack/tempest-tests-tempest" Nov 29 09:05:20 crc kubenswrapper[4795]: I1129 09:05:20.002748 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/95552069-4919-43f3-88d5-2c40ff4c0836-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"95552069-4919-43f3-88d5-2c40ff4c0836\") " pod="openstack/tempest-tests-tempest" Nov 29 09:05:20 crc kubenswrapper[4795]: I1129 09:05:20.002932 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/95552069-4919-43f3-88d5-2c40ff4c0836-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"95552069-4919-43f3-88d5-2c40ff4c0836\") " 
pod="openstack/tempest-tests-tempest" Nov 29 09:05:20 crc kubenswrapper[4795]: I1129 09:05:20.004206 4795 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"95552069-4919-43f3-88d5-2c40ff4c0836\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/tempest-tests-tempest" Nov 29 09:05:20 crc kubenswrapper[4795]: I1129 09:05:20.006700 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/95552069-4919-43f3-88d5-2c40ff4c0836-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"95552069-4919-43f3-88d5-2c40ff4c0836\") " pod="openstack/tempest-tests-tempest" Nov 29 09:05:20 crc kubenswrapper[4795]: I1129 09:05:20.008680 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/95552069-4919-43f3-88d5-2c40ff4c0836-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"95552069-4919-43f3-88d5-2c40ff4c0836\") " pod="openstack/tempest-tests-tempest" Nov 29 09:05:20 crc kubenswrapper[4795]: I1129 09:05:20.038259 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbc2r\" (UniqueName: \"kubernetes.io/projected/95552069-4919-43f3-88d5-2c40ff4c0836-kube-api-access-wbc2r\") pod \"tempest-tests-tempest\" (UID: \"95552069-4919-43f3-88d5-2c40ff4c0836\") " pod="openstack/tempest-tests-tempest" Nov 29 09:05:20 crc kubenswrapper[4795]: I1129 09:05:20.058502 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"95552069-4919-43f3-88d5-2c40ff4c0836\") " pod="openstack/tempest-tests-tempest" Nov 29 09:05:20 crc kubenswrapper[4795]: I1129 09:05:20.361421 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 29 09:05:20 crc kubenswrapper[4795]: I1129 09:05:20.980704 4795 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 09:05:20 crc kubenswrapper[4795]: I1129 09:05:20.984207 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 29 09:05:21 crc kubenswrapper[4795]: I1129 09:05:21.753081 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"95552069-4919-43f3-88d5-2c40ff4c0836","Type":"ContainerStarted","Data":"804c0f98417f267aeda32ae3ee3fc5ae8afa0cf36735633e62d8e888830e5425"} Nov 29 09:06:05 crc kubenswrapper[4795]: E1129 09:06:05.525111 4795 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Nov 29 09:06:05 crc kubenswrapper[4795]: E1129 09:06:05.526498 4795 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wbc2r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:n
il,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(95552069-4919-43f3-88d5-2c40ff4c0836): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 09:06:05 crc kubenswrapper[4795]: E1129 09:06:05.527664 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="95552069-4919-43f3-88d5-2c40ff4c0836" Nov 29 09:06:06 crc kubenswrapper[4795]: E1129 09:06:06.360158 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="95552069-4919-43f3-88d5-2c40ff4c0836" Nov 29 09:06:22 crc 
kubenswrapper[4795]: I1129 09:06:22.370135 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 29 09:06:24 crc kubenswrapper[4795]: I1129 09:06:24.559305 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"95552069-4919-43f3-88d5-2c40ff4c0836","Type":"ContainerStarted","Data":"a01c12be7747d9f5daf6a2f081591e4950b15dbf1e36e3d23246657455eb56bf"} Nov 29 09:06:24 crc kubenswrapper[4795]: I1129 09:06:24.589165 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=5.202695756 podStartE2EDuration="1m6.589146287s" podCreationTimestamp="2025-11-29 09:05:18 +0000 UTC" firstStartedPulling="2025-11-29 09:05:20.980464951 +0000 UTC m=+5166.956040741" lastFinishedPulling="2025-11-29 09:06:22.366915482 +0000 UTC m=+5228.342491272" observedRunningTime="2025-11-29 09:06:24.575740587 +0000 UTC m=+5230.551316407" watchObservedRunningTime="2025-11-29 09:06:24.589146287 +0000 UTC m=+5230.564722077" Nov 29 09:06:25 crc kubenswrapper[4795]: I1129 09:06:25.186220 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pgb8r"] Nov 29 09:06:25 crc kubenswrapper[4795]: I1129 09:06:25.197462 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pgb8r" Nov 29 09:06:25 crc kubenswrapper[4795]: I1129 09:06:25.211683 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pgb8r"] Nov 29 09:06:25 crc kubenswrapper[4795]: I1129 09:06:25.332991 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4df5240-881e-4e6d-87bb-7e42c0ab26b5-utilities\") pod \"redhat-operators-pgb8r\" (UID: \"d4df5240-881e-4e6d-87bb-7e42c0ab26b5\") " pod="openshift-marketplace/redhat-operators-pgb8r" Nov 29 09:06:25 crc kubenswrapper[4795]: I1129 09:06:25.333298 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s52dr\" (UniqueName: \"kubernetes.io/projected/d4df5240-881e-4e6d-87bb-7e42c0ab26b5-kube-api-access-s52dr\") pod \"redhat-operators-pgb8r\" (UID: \"d4df5240-881e-4e6d-87bb-7e42c0ab26b5\") " pod="openshift-marketplace/redhat-operators-pgb8r" Nov 29 09:06:25 crc kubenswrapper[4795]: I1129 09:06:25.333444 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4df5240-881e-4e6d-87bb-7e42c0ab26b5-catalog-content\") pod \"redhat-operators-pgb8r\" (UID: \"d4df5240-881e-4e6d-87bb-7e42c0ab26b5\") " pod="openshift-marketplace/redhat-operators-pgb8r" Nov 29 09:06:25 crc kubenswrapper[4795]: I1129 09:06:25.436135 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4df5240-881e-4e6d-87bb-7e42c0ab26b5-utilities\") pod \"redhat-operators-pgb8r\" (UID: \"d4df5240-881e-4e6d-87bb-7e42c0ab26b5\") " pod="openshift-marketplace/redhat-operators-pgb8r" Nov 29 09:06:25 crc kubenswrapper[4795]: I1129 09:06:25.436248 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-s52dr\" (UniqueName: \"kubernetes.io/projected/d4df5240-881e-4e6d-87bb-7e42c0ab26b5-kube-api-access-s52dr\") pod \"redhat-operators-pgb8r\" (UID: \"d4df5240-881e-4e6d-87bb-7e42c0ab26b5\") " pod="openshift-marketplace/redhat-operators-pgb8r" Nov 29 09:06:25 crc kubenswrapper[4795]: I1129 09:06:25.436344 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4df5240-881e-4e6d-87bb-7e42c0ab26b5-catalog-content\") pod \"redhat-operators-pgb8r\" (UID: \"d4df5240-881e-4e6d-87bb-7e42c0ab26b5\") " pod="openshift-marketplace/redhat-operators-pgb8r" Nov 29 09:06:25 crc kubenswrapper[4795]: I1129 09:06:25.437901 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4df5240-881e-4e6d-87bb-7e42c0ab26b5-utilities\") pod \"redhat-operators-pgb8r\" (UID: \"d4df5240-881e-4e6d-87bb-7e42c0ab26b5\") " pod="openshift-marketplace/redhat-operators-pgb8r" Nov 29 09:06:25 crc kubenswrapper[4795]: I1129 09:06:25.438639 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4df5240-881e-4e6d-87bb-7e42c0ab26b5-catalog-content\") pod \"redhat-operators-pgb8r\" (UID: \"d4df5240-881e-4e6d-87bb-7e42c0ab26b5\") " pod="openshift-marketplace/redhat-operators-pgb8r" Nov 29 09:06:25 crc kubenswrapper[4795]: I1129 09:06:25.471117 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s52dr\" (UniqueName: \"kubernetes.io/projected/d4df5240-881e-4e6d-87bb-7e42c0ab26b5-kube-api-access-s52dr\") pod \"redhat-operators-pgb8r\" (UID: \"d4df5240-881e-4e6d-87bb-7e42c0ab26b5\") " pod="openshift-marketplace/redhat-operators-pgb8r" Nov 29 09:06:25 crc kubenswrapper[4795]: I1129 09:06:25.517272 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pgb8r" Nov 29 09:06:26 crc kubenswrapper[4795]: I1129 09:06:26.059716 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pgb8r"] Nov 29 09:06:26 crc kubenswrapper[4795]: W1129 09:06:26.069837 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4df5240_881e_4e6d_87bb_7e42c0ab26b5.slice/crio-a14157c7b96ec429dc46f80f0f1c7156d0f774500d5578cec6ed45e573d7e325 WatchSource:0}: Error finding container a14157c7b96ec429dc46f80f0f1c7156d0f774500d5578cec6ed45e573d7e325: Status 404 returned error can't find the container with id a14157c7b96ec429dc46f80f0f1c7156d0f774500d5578cec6ed45e573d7e325 Nov 29 09:06:26 crc kubenswrapper[4795]: I1129 09:06:26.592553 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pgb8r" event={"ID":"d4df5240-881e-4e6d-87bb-7e42c0ab26b5","Type":"ContainerStarted","Data":"a14157c7b96ec429dc46f80f0f1c7156d0f774500d5578cec6ed45e573d7e325"} Nov 29 09:06:26 crc kubenswrapper[4795]: E1129 09:06:26.899209 4795 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4df5240_881e_4e6d_87bb_7e42c0ab26b5.slice/crio-170a28cff3a92aa7f382a886228a829a0f4efd0d2f37f2a91097dea5621b7aac.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4df5240_881e_4e6d_87bb_7e42c0ab26b5.slice/crio-conmon-170a28cff3a92aa7f382a886228a829a0f4efd0d2f37f2a91097dea5621b7aac.scope\": RecentStats: unable to find data in memory cache]" Nov 29 09:06:27 crc kubenswrapper[4795]: I1129 09:06:27.606388 4795 generic.go:334] "Generic (PLEG): container finished" podID="d4df5240-881e-4e6d-87bb-7e42c0ab26b5" 
containerID="170a28cff3a92aa7f382a886228a829a0f4efd0d2f37f2a91097dea5621b7aac" exitCode=0 Nov 29 09:06:27 crc kubenswrapper[4795]: I1129 09:06:27.606509 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pgb8r" event={"ID":"d4df5240-881e-4e6d-87bb-7e42c0ab26b5","Type":"ContainerDied","Data":"170a28cff3a92aa7f382a886228a829a0f4efd0d2f37f2a91097dea5621b7aac"} Nov 29 09:06:29 crc kubenswrapper[4795]: I1129 09:06:29.640790 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pgb8r" event={"ID":"d4df5240-881e-4e6d-87bb-7e42c0ab26b5","Type":"ContainerStarted","Data":"42804d5f709f519a598649f273d6c8557604067acb8b9f503fa104bf5ba11fe9"} Nov 29 09:06:34 crc kubenswrapper[4795]: I1129 09:06:34.698550 4795 generic.go:334] "Generic (PLEG): container finished" podID="d4df5240-881e-4e6d-87bb-7e42c0ab26b5" containerID="42804d5f709f519a598649f273d6c8557604067acb8b9f503fa104bf5ba11fe9" exitCode=0 Nov 29 09:06:34 crc kubenswrapper[4795]: I1129 09:06:34.698628 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pgb8r" event={"ID":"d4df5240-881e-4e6d-87bb-7e42c0ab26b5","Type":"ContainerDied","Data":"42804d5f709f519a598649f273d6c8557604067acb8b9f503fa104bf5ba11fe9"} Nov 29 09:06:35 crc kubenswrapper[4795]: I1129 09:06:35.711028 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pgb8r" event={"ID":"d4df5240-881e-4e6d-87bb-7e42c0ab26b5","Type":"ContainerStarted","Data":"cbed8b1c7f403ec20ba5ad483a8cb5253e170cbd778a0e54a6b5869a9a4261e7"} Nov 29 09:06:35 crc kubenswrapper[4795]: I1129 09:06:35.738225 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pgb8r" podStartSLOduration=3.165001583 podStartE2EDuration="10.738206411s" podCreationTimestamp="2025-11-29 09:06:25 +0000 UTC" firstStartedPulling="2025-11-29 09:06:27.609186662 +0000 UTC 
m=+5233.584762452" lastFinishedPulling="2025-11-29 09:06:35.18239149 +0000 UTC m=+5241.157967280" observedRunningTime="2025-11-29 09:06:35.731027668 +0000 UTC m=+5241.706603458" watchObservedRunningTime="2025-11-29 09:06:35.738206411 +0000 UTC m=+5241.713782201" Nov 29 09:06:45 crc kubenswrapper[4795]: I1129 09:06:45.518071 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pgb8r" Nov 29 09:06:45 crc kubenswrapper[4795]: I1129 09:06:45.518727 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pgb8r" Nov 29 09:06:46 crc kubenswrapper[4795]: I1129 09:06:46.567290 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pgb8r" podUID="d4df5240-881e-4e6d-87bb-7e42c0ab26b5" containerName="registry-server" probeResult="failure" output=< Nov 29 09:06:46 crc kubenswrapper[4795]: timeout: failed to connect service ":50051" within 1s Nov 29 09:06:46 crc kubenswrapper[4795]: > Nov 29 09:06:55 crc kubenswrapper[4795]: I1129 09:06:55.572771 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pgb8r" Nov 29 09:06:55 crc kubenswrapper[4795]: I1129 09:06:55.681105 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pgb8r" Nov 29 09:06:56 crc kubenswrapper[4795]: I1129 09:06:56.395792 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pgb8r"] Nov 29 09:06:57 crc kubenswrapper[4795]: I1129 09:06:57.011718 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pgb8r" podUID="d4df5240-881e-4e6d-87bb-7e42c0ab26b5" containerName="registry-server" containerID="cri-o://cbed8b1c7f403ec20ba5ad483a8cb5253e170cbd778a0e54a6b5869a9a4261e7" gracePeriod=2 Nov 29 09:06:57 crc 
kubenswrapper[4795]: E1129 09:06:57.584072 4795 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4df5240_881e_4e6d_87bb_7e42c0ab26b5.slice/crio-conmon-cbed8b1c7f403ec20ba5ad483a8cb5253e170cbd778a0e54a6b5869a9a4261e7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4df5240_881e_4e6d_87bb_7e42c0ab26b5.slice/crio-cbed8b1c7f403ec20ba5ad483a8cb5253e170cbd778a0e54a6b5869a9a4261e7.scope\": RecentStats: unable to find data in memory cache]" Nov 29 09:06:57 crc kubenswrapper[4795]: I1129 09:06:57.811785 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pgb8r" Nov 29 09:06:57 crc kubenswrapper[4795]: I1129 09:06:57.857202 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s52dr\" (UniqueName: \"kubernetes.io/projected/d4df5240-881e-4e6d-87bb-7e42c0ab26b5-kube-api-access-s52dr\") pod \"d4df5240-881e-4e6d-87bb-7e42c0ab26b5\" (UID: \"d4df5240-881e-4e6d-87bb-7e42c0ab26b5\") " Nov 29 09:06:57 crc kubenswrapper[4795]: I1129 09:06:57.857483 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4df5240-881e-4e6d-87bb-7e42c0ab26b5-catalog-content\") pod \"d4df5240-881e-4e6d-87bb-7e42c0ab26b5\" (UID: \"d4df5240-881e-4e6d-87bb-7e42c0ab26b5\") " Nov 29 09:06:57 crc kubenswrapper[4795]: I1129 09:06:57.857529 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4df5240-881e-4e6d-87bb-7e42c0ab26b5-utilities\") pod \"d4df5240-881e-4e6d-87bb-7e42c0ab26b5\" (UID: \"d4df5240-881e-4e6d-87bb-7e42c0ab26b5\") " Nov 29 09:06:57 crc kubenswrapper[4795]: I1129 09:06:57.859060 4795 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4df5240-881e-4e6d-87bb-7e42c0ab26b5-utilities" (OuterVolumeSpecName: "utilities") pod "d4df5240-881e-4e6d-87bb-7e42c0ab26b5" (UID: "d4df5240-881e-4e6d-87bb-7e42c0ab26b5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 09:06:57 crc kubenswrapper[4795]: I1129 09:06:57.868684 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4df5240-881e-4e6d-87bb-7e42c0ab26b5-kube-api-access-s52dr" (OuterVolumeSpecName: "kube-api-access-s52dr") pod "d4df5240-881e-4e6d-87bb-7e42c0ab26b5" (UID: "d4df5240-881e-4e6d-87bb-7e42c0ab26b5"). InnerVolumeSpecName "kube-api-access-s52dr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 09:06:57 crc kubenswrapper[4795]: I1129 09:06:57.960001 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s52dr\" (UniqueName: \"kubernetes.io/projected/d4df5240-881e-4e6d-87bb-7e42c0ab26b5-kube-api-access-s52dr\") on node \"crc\" DevicePath \"\"" Nov 29 09:06:57 crc kubenswrapper[4795]: I1129 09:06:57.960050 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4df5240-881e-4e6d-87bb-7e42c0ab26b5-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 09:06:58 crc kubenswrapper[4795]: I1129 09:06:58.000164 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4df5240-881e-4e6d-87bb-7e42c0ab26b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4df5240-881e-4e6d-87bb-7e42c0ab26b5" (UID: "d4df5240-881e-4e6d-87bb-7e42c0ab26b5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 09:06:58 crc kubenswrapper[4795]: I1129 09:06:58.025574 4795 generic.go:334] "Generic (PLEG): container finished" podID="d4df5240-881e-4e6d-87bb-7e42c0ab26b5" containerID="cbed8b1c7f403ec20ba5ad483a8cb5253e170cbd778a0e54a6b5869a9a4261e7" exitCode=0 Nov 29 09:06:58 crc kubenswrapper[4795]: I1129 09:06:58.025635 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pgb8r" event={"ID":"d4df5240-881e-4e6d-87bb-7e42c0ab26b5","Type":"ContainerDied","Data":"cbed8b1c7f403ec20ba5ad483a8cb5253e170cbd778a0e54a6b5869a9a4261e7"} Nov 29 09:06:58 crc kubenswrapper[4795]: I1129 09:06:58.025669 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pgb8r" event={"ID":"d4df5240-881e-4e6d-87bb-7e42c0ab26b5","Type":"ContainerDied","Data":"a14157c7b96ec429dc46f80f0f1c7156d0f774500d5578cec6ed45e573d7e325"} Nov 29 09:06:58 crc kubenswrapper[4795]: I1129 09:06:58.025689 4795 scope.go:117] "RemoveContainer" containerID="cbed8b1c7f403ec20ba5ad483a8cb5253e170cbd778a0e54a6b5869a9a4261e7" Nov 29 09:06:58 crc kubenswrapper[4795]: I1129 09:06:58.025828 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pgb8r" Nov 29 09:06:58 crc kubenswrapper[4795]: I1129 09:06:58.062296 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4df5240-881e-4e6d-87bb-7e42c0ab26b5-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 09:06:58 crc kubenswrapper[4795]: I1129 09:06:58.066464 4795 scope.go:117] "RemoveContainer" containerID="42804d5f709f519a598649f273d6c8557604067acb8b9f503fa104bf5ba11fe9" Nov 29 09:06:58 crc kubenswrapper[4795]: I1129 09:06:58.069712 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pgb8r"] Nov 29 09:06:58 crc kubenswrapper[4795]: I1129 09:06:58.081772 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pgb8r"] Nov 29 09:06:58 crc kubenswrapper[4795]: I1129 09:06:58.094449 4795 scope.go:117] "RemoveContainer" containerID="170a28cff3a92aa7f382a886228a829a0f4efd0d2f37f2a91097dea5621b7aac" Nov 29 09:06:58 crc kubenswrapper[4795]: I1129 09:06:58.167870 4795 scope.go:117] "RemoveContainer" containerID="cbed8b1c7f403ec20ba5ad483a8cb5253e170cbd778a0e54a6b5869a9a4261e7" Nov 29 09:06:58 crc kubenswrapper[4795]: E1129 09:06:58.168894 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbed8b1c7f403ec20ba5ad483a8cb5253e170cbd778a0e54a6b5869a9a4261e7\": container with ID starting with cbed8b1c7f403ec20ba5ad483a8cb5253e170cbd778a0e54a6b5869a9a4261e7 not found: ID does not exist" containerID="cbed8b1c7f403ec20ba5ad483a8cb5253e170cbd778a0e54a6b5869a9a4261e7" Nov 29 09:06:58 crc kubenswrapper[4795]: I1129 09:06:58.169019 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbed8b1c7f403ec20ba5ad483a8cb5253e170cbd778a0e54a6b5869a9a4261e7"} err="failed to get container status 
\"cbed8b1c7f403ec20ba5ad483a8cb5253e170cbd778a0e54a6b5869a9a4261e7\": rpc error: code = NotFound desc = could not find container \"cbed8b1c7f403ec20ba5ad483a8cb5253e170cbd778a0e54a6b5869a9a4261e7\": container with ID starting with cbed8b1c7f403ec20ba5ad483a8cb5253e170cbd778a0e54a6b5869a9a4261e7 not found: ID does not exist" Nov 29 09:06:58 crc kubenswrapper[4795]: I1129 09:06:58.169053 4795 scope.go:117] "RemoveContainer" containerID="42804d5f709f519a598649f273d6c8557604067acb8b9f503fa104bf5ba11fe9" Nov 29 09:06:58 crc kubenswrapper[4795]: E1129 09:06:58.169472 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42804d5f709f519a598649f273d6c8557604067acb8b9f503fa104bf5ba11fe9\": container with ID starting with 42804d5f709f519a598649f273d6c8557604067acb8b9f503fa104bf5ba11fe9 not found: ID does not exist" containerID="42804d5f709f519a598649f273d6c8557604067acb8b9f503fa104bf5ba11fe9" Nov 29 09:06:58 crc kubenswrapper[4795]: I1129 09:06:58.169511 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42804d5f709f519a598649f273d6c8557604067acb8b9f503fa104bf5ba11fe9"} err="failed to get container status \"42804d5f709f519a598649f273d6c8557604067acb8b9f503fa104bf5ba11fe9\": rpc error: code = NotFound desc = could not find container \"42804d5f709f519a598649f273d6c8557604067acb8b9f503fa104bf5ba11fe9\": container with ID starting with 42804d5f709f519a598649f273d6c8557604067acb8b9f503fa104bf5ba11fe9 not found: ID does not exist" Nov 29 09:06:58 crc kubenswrapper[4795]: I1129 09:06:58.169538 4795 scope.go:117] "RemoveContainer" containerID="170a28cff3a92aa7f382a886228a829a0f4efd0d2f37f2a91097dea5621b7aac" Nov 29 09:06:58 crc kubenswrapper[4795]: E1129 09:06:58.169849 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"170a28cff3a92aa7f382a886228a829a0f4efd0d2f37f2a91097dea5621b7aac\": container with ID starting with 170a28cff3a92aa7f382a886228a829a0f4efd0d2f37f2a91097dea5621b7aac not found: ID does not exist" containerID="170a28cff3a92aa7f382a886228a829a0f4efd0d2f37f2a91097dea5621b7aac" Nov 29 09:06:58 crc kubenswrapper[4795]: I1129 09:06:58.169884 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"170a28cff3a92aa7f382a886228a829a0f4efd0d2f37f2a91097dea5621b7aac"} err="failed to get container status \"170a28cff3a92aa7f382a886228a829a0f4efd0d2f37f2a91097dea5621b7aac\": rpc error: code = NotFound desc = could not find container \"170a28cff3a92aa7f382a886228a829a0f4efd0d2f37f2a91097dea5621b7aac\": container with ID starting with 170a28cff3a92aa7f382a886228a829a0f4efd0d2f37f2a91097dea5621b7aac not found: ID does not exist" Nov 29 09:06:58 crc kubenswrapper[4795]: I1129 09:06:58.289381 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4df5240-881e-4e6d-87bb-7e42c0ab26b5" path="/var/lib/kubelet/pods/d4df5240-881e-4e6d-87bb-7e42c0ab26b5/volumes" Nov 29 09:07:41 crc kubenswrapper[4795]: I1129 09:07:41.941392 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 09:07:41 crc kubenswrapper[4795]: I1129 09:07:41.942437 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 09:07:58 crc kubenswrapper[4795]: I1129 09:07:58.755016 4795 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-zfgcp"] Nov 29 09:07:58 crc kubenswrapper[4795]: E1129 09:07:58.758172 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4df5240-881e-4e6d-87bb-7e42c0ab26b5" containerName="extract-utilities" Nov 29 09:07:58 crc kubenswrapper[4795]: I1129 09:07:58.758216 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4df5240-881e-4e6d-87bb-7e42c0ab26b5" containerName="extract-utilities" Nov 29 09:07:58 crc kubenswrapper[4795]: E1129 09:07:58.758645 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4df5240-881e-4e6d-87bb-7e42c0ab26b5" containerName="extract-content" Nov 29 09:07:58 crc kubenswrapper[4795]: I1129 09:07:58.758668 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4df5240-881e-4e6d-87bb-7e42c0ab26b5" containerName="extract-content" Nov 29 09:07:58 crc kubenswrapper[4795]: E1129 09:07:58.758685 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4df5240-881e-4e6d-87bb-7e42c0ab26b5" containerName="registry-server" Nov 29 09:07:58 crc kubenswrapper[4795]: I1129 09:07:58.758693 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4df5240-881e-4e6d-87bb-7e42c0ab26b5" containerName="registry-server" Nov 29 09:07:58 crc kubenswrapper[4795]: I1129 09:07:58.759654 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4df5240-881e-4e6d-87bb-7e42c0ab26b5" containerName="registry-server" Nov 29 09:07:58 crc kubenswrapper[4795]: I1129 09:07:58.763489 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zfgcp" Nov 29 09:07:58 crc kubenswrapper[4795]: I1129 09:07:58.817809 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2c97b1d-c171-4a73-b0ff-b96774327bf5-utilities\") pod \"certified-operators-zfgcp\" (UID: \"d2c97b1d-c171-4a73-b0ff-b96774327bf5\") " pod="openshift-marketplace/certified-operators-zfgcp" Nov 29 09:07:58 crc kubenswrapper[4795]: I1129 09:07:58.818008 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2c97b1d-c171-4a73-b0ff-b96774327bf5-catalog-content\") pod \"certified-operators-zfgcp\" (UID: \"d2c97b1d-c171-4a73-b0ff-b96774327bf5\") " pod="openshift-marketplace/certified-operators-zfgcp" Nov 29 09:07:58 crc kubenswrapper[4795]: I1129 09:07:58.818032 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9lr4\" (UniqueName: \"kubernetes.io/projected/d2c97b1d-c171-4a73-b0ff-b96774327bf5-kube-api-access-z9lr4\") pod \"certified-operators-zfgcp\" (UID: \"d2c97b1d-c171-4a73-b0ff-b96774327bf5\") " pod="openshift-marketplace/certified-operators-zfgcp" Nov 29 09:07:58 crc kubenswrapper[4795]: I1129 09:07:58.887842 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zfgcp"] Nov 29 09:07:58 crc kubenswrapper[4795]: I1129 09:07:58.941966 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2c97b1d-c171-4a73-b0ff-b96774327bf5-catalog-content\") pod \"certified-operators-zfgcp\" (UID: \"d2c97b1d-c171-4a73-b0ff-b96774327bf5\") " pod="openshift-marketplace/certified-operators-zfgcp" Nov 29 09:07:58 crc kubenswrapper[4795]: I1129 09:07:58.942039 4795 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-z9lr4\" (UniqueName: \"kubernetes.io/projected/d2c97b1d-c171-4a73-b0ff-b96774327bf5-kube-api-access-z9lr4\") pod \"certified-operators-zfgcp\" (UID: \"d2c97b1d-c171-4a73-b0ff-b96774327bf5\") " pod="openshift-marketplace/certified-operators-zfgcp" Nov 29 09:07:58 crc kubenswrapper[4795]: I1129 09:07:58.942328 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2c97b1d-c171-4a73-b0ff-b96774327bf5-utilities\") pod \"certified-operators-zfgcp\" (UID: \"d2c97b1d-c171-4a73-b0ff-b96774327bf5\") " pod="openshift-marketplace/certified-operators-zfgcp" Nov 29 09:07:58 crc kubenswrapper[4795]: I1129 09:07:58.945056 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2c97b1d-c171-4a73-b0ff-b96774327bf5-catalog-content\") pod \"certified-operators-zfgcp\" (UID: \"d2c97b1d-c171-4a73-b0ff-b96774327bf5\") " pod="openshift-marketplace/certified-operators-zfgcp" Nov 29 09:07:58 crc kubenswrapper[4795]: I1129 09:07:58.945213 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2c97b1d-c171-4a73-b0ff-b96774327bf5-utilities\") pod \"certified-operators-zfgcp\" (UID: \"d2c97b1d-c171-4a73-b0ff-b96774327bf5\") " pod="openshift-marketplace/certified-operators-zfgcp" Nov 29 09:07:58 crc kubenswrapper[4795]: I1129 09:07:58.985898 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9lr4\" (UniqueName: \"kubernetes.io/projected/d2c97b1d-c171-4a73-b0ff-b96774327bf5-kube-api-access-z9lr4\") pod \"certified-operators-zfgcp\" (UID: \"d2c97b1d-c171-4a73-b0ff-b96774327bf5\") " pod="openshift-marketplace/certified-operators-zfgcp" Nov 29 09:07:59 crc kubenswrapper[4795]: I1129 09:07:59.100118 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zfgcp" Nov 29 09:08:00 crc kubenswrapper[4795]: I1129 09:08:00.299733 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zfgcp"] Nov 29 09:08:00 crc kubenswrapper[4795]: I1129 09:08:00.828938 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zfgcp" event={"ID":"d2c97b1d-c171-4a73-b0ff-b96774327bf5","Type":"ContainerDied","Data":"c74ccafaffcc4482371b680292d6446359df1a21c7c161d4e1346adc6d7ebdb0"} Nov 29 09:08:00 crc kubenswrapper[4795]: I1129 09:08:00.829368 4795 generic.go:334] "Generic (PLEG): container finished" podID="d2c97b1d-c171-4a73-b0ff-b96774327bf5" containerID="c74ccafaffcc4482371b680292d6446359df1a21c7c161d4e1346adc6d7ebdb0" exitCode=0 Nov 29 09:08:00 crc kubenswrapper[4795]: I1129 09:08:00.829528 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zfgcp" event={"ID":"d2c97b1d-c171-4a73-b0ff-b96774327bf5","Type":"ContainerStarted","Data":"9e81c004755bf321f1dfb764147cb02a0a7bd231948da80575898d39fd23d9e3"} Nov 29 09:08:01 crc kubenswrapper[4795]: I1129 09:08:01.076440 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8p2s6"] Nov 29 09:08:01 crc kubenswrapper[4795]: I1129 09:08:01.081557 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8p2s6" Nov 29 09:08:01 crc kubenswrapper[4795]: I1129 09:08:01.092933 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8p2s6"] Nov 29 09:08:01 crc kubenswrapper[4795]: I1129 09:08:01.102241 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a4935c9-c4ea-491c-8b9a-a20074780fac-catalog-content\") pod \"community-operators-8p2s6\" (UID: \"4a4935c9-c4ea-491c-8b9a-a20074780fac\") " pod="openshift-marketplace/community-operators-8p2s6" Nov 29 09:08:01 crc kubenswrapper[4795]: I1129 09:08:01.102424 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a4935c9-c4ea-491c-8b9a-a20074780fac-utilities\") pod \"community-operators-8p2s6\" (UID: \"4a4935c9-c4ea-491c-8b9a-a20074780fac\") " pod="openshift-marketplace/community-operators-8p2s6" Nov 29 09:08:01 crc kubenswrapper[4795]: I1129 09:08:01.102638 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5llsk\" (UniqueName: \"kubernetes.io/projected/4a4935c9-c4ea-491c-8b9a-a20074780fac-kube-api-access-5llsk\") pod \"community-operators-8p2s6\" (UID: \"4a4935c9-c4ea-491c-8b9a-a20074780fac\") " pod="openshift-marketplace/community-operators-8p2s6" Nov 29 09:08:01 crc kubenswrapper[4795]: I1129 09:08:01.204945 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a4935c9-c4ea-491c-8b9a-a20074780fac-catalog-content\") pod \"community-operators-8p2s6\" (UID: \"4a4935c9-c4ea-491c-8b9a-a20074780fac\") " pod="openshift-marketplace/community-operators-8p2s6" Nov 29 09:08:01 crc kubenswrapper[4795]: I1129 09:08:01.205339 4795 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a4935c9-c4ea-491c-8b9a-a20074780fac-utilities\") pod \"community-operators-8p2s6\" (UID: \"4a4935c9-c4ea-491c-8b9a-a20074780fac\") " pod="openshift-marketplace/community-operators-8p2s6" Nov 29 09:08:01 crc kubenswrapper[4795]: I1129 09:08:01.205485 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5llsk\" (UniqueName: \"kubernetes.io/projected/4a4935c9-c4ea-491c-8b9a-a20074780fac-kube-api-access-5llsk\") pod \"community-operators-8p2s6\" (UID: \"4a4935c9-c4ea-491c-8b9a-a20074780fac\") " pod="openshift-marketplace/community-operators-8p2s6" Nov 29 09:08:01 crc kubenswrapper[4795]: I1129 09:08:01.455042 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a4935c9-c4ea-491c-8b9a-a20074780fac-utilities\") pod \"community-operators-8p2s6\" (UID: \"4a4935c9-c4ea-491c-8b9a-a20074780fac\") " pod="openshift-marketplace/community-operators-8p2s6" Nov 29 09:08:01 crc kubenswrapper[4795]: I1129 09:08:01.454747 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a4935c9-c4ea-491c-8b9a-a20074780fac-catalog-content\") pod \"community-operators-8p2s6\" (UID: \"4a4935c9-c4ea-491c-8b9a-a20074780fac\") " pod="openshift-marketplace/community-operators-8p2s6" Nov 29 09:08:01 crc kubenswrapper[4795]: I1129 09:08:01.472823 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5llsk\" (UniqueName: \"kubernetes.io/projected/4a4935c9-c4ea-491c-8b9a-a20074780fac-kube-api-access-5llsk\") pod \"community-operators-8p2s6\" (UID: \"4a4935c9-c4ea-491c-8b9a-a20074780fac\") " pod="openshift-marketplace/community-operators-8p2s6" Nov 29 09:08:01 crc kubenswrapper[4795]: I1129 09:08:01.755289 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8p2s6" Nov 29 09:08:02 crc kubenswrapper[4795]: I1129 09:08:02.298428 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8p2s6"] Nov 29 09:08:02 crc kubenswrapper[4795]: W1129 09:08:02.307280 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a4935c9_c4ea_491c_8b9a_a20074780fac.slice/crio-b1f1c68142259329bf5d0b1670c38aacf3d5afdc4d6b918532f24a7484afc733 WatchSource:0}: Error finding container b1f1c68142259329bf5d0b1670c38aacf3d5afdc4d6b918532f24a7484afc733: Status 404 returned error can't find the container with id b1f1c68142259329bf5d0b1670c38aacf3d5afdc4d6b918532f24a7484afc733 Nov 29 09:08:02 crc kubenswrapper[4795]: I1129 09:08:02.865195 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zfgcp" event={"ID":"d2c97b1d-c171-4a73-b0ff-b96774327bf5","Type":"ContainerStarted","Data":"87e2cb09c0a89414a684c2fd115065ab655fd9f58b24445d04db123e129c2af5"} Nov 29 09:08:02 crc kubenswrapper[4795]: I1129 09:08:02.873327 4795 generic.go:334] "Generic (PLEG): container finished" podID="4a4935c9-c4ea-491c-8b9a-a20074780fac" containerID="d3849b66ed1237cf8007c613018c6b5a86a4502e711a906c8b48faa4a8c620e0" exitCode=0 Nov 29 09:08:02 crc kubenswrapper[4795]: I1129 09:08:02.873415 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8p2s6" event={"ID":"4a4935c9-c4ea-491c-8b9a-a20074780fac","Type":"ContainerDied","Data":"d3849b66ed1237cf8007c613018c6b5a86a4502e711a906c8b48faa4a8c620e0"} Nov 29 09:08:02 crc kubenswrapper[4795]: I1129 09:08:02.873452 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8p2s6" 
event={"ID":"4a4935c9-c4ea-491c-8b9a-a20074780fac","Type":"ContainerStarted","Data":"b1f1c68142259329bf5d0b1670c38aacf3d5afdc4d6b918532f24a7484afc733"} Nov 29 09:08:03 crc kubenswrapper[4795]: I1129 09:08:03.900561 4795 generic.go:334] "Generic (PLEG): container finished" podID="d2c97b1d-c171-4a73-b0ff-b96774327bf5" containerID="87e2cb09c0a89414a684c2fd115065ab655fd9f58b24445d04db123e129c2af5" exitCode=0 Nov 29 09:08:03 crc kubenswrapper[4795]: I1129 09:08:03.901135 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zfgcp" event={"ID":"d2c97b1d-c171-4a73-b0ff-b96774327bf5","Type":"ContainerDied","Data":"87e2cb09c0a89414a684c2fd115065ab655fd9f58b24445d04db123e129c2af5"} Nov 29 09:08:04 crc kubenswrapper[4795]: I1129 09:08:04.912999 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8p2s6" event={"ID":"4a4935c9-c4ea-491c-8b9a-a20074780fac","Type":"ContainerStarted","Data":"6181413b93a5ee13be899145898442dfac7290671857c7aae82a321078f98845"} Nov 29 09:08:05 crc kubenswrapper[4795]: I1129 09:08:05.926148 4795 generic.go:334] "Generic (PLEG): container finished" podID="4a4935c9-c4ea-491c-8b9a-a20074780fac" containerID="6181413b93a5ee13be899145898442dfac7290671857c7aae82a321078f98845" exitCode=0 Nov 29 09:08:05 crc kubenswrapper[4795]: I1129 09:08:05.926278 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8p2s6" event={"ID":"4a4935c9-c4ea-491c-8b9a-a20074780fac","Type":"ContainerDied","Data":"6181413b93a5ee13be899145898442dfac7290671857c7aae82a321078f98845"} Nov 29 09:08:05 crc kubenswrapper[4795]: I1129 09:08:05.930686 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zfgcp" event={"ID":"d2c97b1d-c171-4a73-b0ff-b96774327bf5","Type":"ContainerStarted","Data":"e8ced314daef7c738fd344cfa0d4dfa0459505cca15796b1e6b7eb7ef1995ca6"} Nov 29 09:08:06 crc kubenswrapper[4795]: 
I1129 09:08:06.008900 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zfgcp" podStartSLOduration=4.1357103 podStartE2EDuration="8.008339587s" podCreationTimestamp="2025-11-29 09:07:58 +0000 UTC" firstStartedPulling="2025-11-29 09:08:00.830509202 +0000 UTC m=+5326.806084992" lastFinishedPulling="2025-11-29 09:08:04.703138489 +0000 UTC m=+5330.678714279" observedRunningTime="2025-11-29 09:08:05.994243118 +0000 UTC m=+5331.969818918" watchObservedRunningTime="2025-11-29 09:08:06.008339587 +0000 UTC m=+5331.983915377" Nov 29 09:08:06 crc kubenswrapper[4795]: I1129 09:08:06.947284 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8p2s6" event={"ID":"4a4935c9-c4ea-491c-8b9a-a20074780fac","Type":"ContainerStarted","Data":"23273933eaf04fbbc13e4ba96629d6e285acf2461c81bfe3caab1143eed0aad1"} Nov 29 09:08:06 crc kubenswrapper[4795]: I1129 09:08:06.979130 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8p2s6" podStartSLOduration=2.513219827 podStartE2EDuration="5.979107027s" podCreationTimestamp="2025-11-29 09:08:01 +0000 UTC" firstStartedPulling="2025-11-29 09:08:02.887055452 +0000 UTC m=+5328.862631242" lastFinishedPulling="2025-11-29 09:08:06.352942652 +0000 UTC m=+5332.328518442" observedRunningTime="2025-11-29 09:08:06.969507035 +0000 UTC m=+5332.945082825" watchObservedRunningTime="2025-11-29 09:08:06.979107027 +0000 UTC m=+5332.954682817" Nov 29 09:08:09 crc kubenswrapper[4795]: I1129 09:08:09.100919 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zfgcp" Nov 29 09:08:09 crc kubenswrapper[4795]: I1129 09:08:09.101603 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zfgcp" Nov 29 09:08:10 crc kubenswrapper[4795]: I1129 09:08:10.154544 4795 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-zfgcp" podUID="d2c97b1d-c171-4a73-b0ff-b96774327bf5" containerName="registry-server" probeResult="failure" output=< Nov 29 09:08:10 crc kubenswrapper[4795]: timeout: failed to connect service ":50051" within 1s Nov 29 09:08:10 crc kubenswrapper[4795]: > Nov 29 09:08:11 crc kubenswrapper[4795]: I1129 09:08:11.755832 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8p2s6" Nov 29 09:08:11 crc kubenswrapper[4795]: I1129 09:08:11.756164 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8p2s6" Nov 29 09:08:11 crc kubenswrapper[4795]: I1129 09:08:11.941913 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 09:08:11 crc kubenswrapper[4795]: I1129 09:08:11.941991 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 09:08:13 crc kubenswrapper[4795]: I1129 09:08:13.212576 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-8p2s6" podUID="4a4935c9-c4ea-491c-8b9a-a20074780fac" containerName="registry-server" probeResult="failure" output=< Nov 29 09:08:13 crc kubenswrapper[4795]: timeout: failed to connect service ":50051" within 1s Nov 29 09:08:13 crc kubenswrapper[4795]: > Nov 29 09:08:19 crc kubenswrapper[4795]: I1129 09:08:19.155086 4795 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/certified-operators-zfgcp" Nov 29 09:08:19 crc kubenswrapper[4795]: I1129 09:08:19.215051 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zfgcp" Nov 29 09:08:19 crc kubenswrapper[4795]: I1129 09:08:19.712000 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zfgcp"] Nov 29 09:08:21 crc kubenswrapper[4795]: I1129 09:08:21.088835 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zfgcp" podUID="d2c97b1d-c171-4a73-b0ff-b96774327bf5" containerName="registry-server" containerID="cri-o://e8ced314daef7c738fd344cfa0d4dfa0459505cca15796b1e6b7eb7ef1995ca6" gracePeriod=2 Nov 29 09:08:22 crc kubenswrapper[4795]: I1129 09:08:22.166077 4795 generic.go:334] "Generic (PLEG): container finished" podID="d2c97b1d-c171-4a73-b0ff-b96774327bf5" containerID="e8ced314daef7c738fd344cfa0d4dfa0459505cca15796b1e6b7eb7ef1995ca6" exitCode=0 Nov 29 09:08:22 crc kubenswrapper[4795]: I1129 09:08:22.166237 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zfgcp" event={"ID":"d2c97b1d-c171-4a73-b0ff-b96774327bf5","Type":"ContainerDied","Data":"e8ced314daef7c738fd344cfa0d4dfa0459505cca15796b1e6b7eb7ef1995ca6"} Nov 29 09:08:22 crc kubenswrapper[4795]: I1129 09:08:22.168360 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8p2s6" Nov 29 09:08:22 crc kubenswrapper[4795]: I1129 09:08:22.267961 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8p2s6" Nov 29 09:08:22 crc kubenswrapper[4795]: I1129 09:08:22.286517 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zfgcp" Nov 29 09:08:22 crc kubenswrapper[4795]: I1129 09:08:22.333773 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8p2s6"] Nov 29 09:08:22 crc kubenswrapper[4795]: I1129 09:08:22.345998 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2c97b1d-c171-4a73-b0ff-b96774327bf5-utilities\") pod \"d2c97b1d-c171-4a73-b0ff-b96774327bf5\" (UID: \"d2c97b1d-c171-4a73-b0ff-b96774327bf5\") " Nov 29 09:08:22 crc kubenswrapper[4795]: I1129 09:08:22.346058 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2c97b1d-c171-4a73-b0ff-b96774327bf5-catalog-content\") pod \"d2c97b1d-c171-4a73-b0ff-b96774327bf5\" (UID: \"d2c97b1d-c171-4a73-b0ff-b96774327bf5\") " Nov 29 09:08:22 crc kubenswrapper[4795]: I1129 09:08:22.346184 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9lr4\" (UniqueName: \"kubernetes.io/projected/d2c97b1d-c171-4a73-b0ff-b96774327bf5-kube-api-access-z9lr4\") pod \"d2c97b1d-c171-4a73-b0ff-b96774327bf5\" (UID: \"d2c97b1d-c171-4a73-b0ff-b96774327bf5\") " Nov 29 09:08:22 crc kubenswrapper[4795]: I1129 09:08:22.348414 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2c97b1d-c171-4a73-b0ff-b96774327bf5-utilities" (OuterVolumeSpecName: "utilities") pod "d2c97b1d-c171-4a73-b0ff-b96774327bf5" (UID: "d2c97b1d-c171-4a73-b0ff-b96774327bf5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 09:08:22 crc kubenswrapper[4795]: I1129 09:08:22.906395 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2c97b1d-c171-4a73-b0ff-b96774327bf5-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 09:08:22 crc kubenswrapper[4795]: I1129 09:08:22.924358 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2c97b1d-c171-4a73-b0ff-b96774327bf5-kube-api-access-z9lr4" (OuterVolumeSpecName: "kube-api-access-z9lr4") pod "d2c97b1d-c171-4a73-b0ff-b96774327bf5" (UID: "d2c97b1d-c171-4a73-b0ff-b96774327bf5"). InnerVolumeSpecName "kube-api-access-z9lr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 09:08:22 crc kubenswrapper[4795]: I1129 09:08:22.962419 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2c97b1d-c171-4a73-b0ff-b96774327bf5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d2c97b1d-c171-4a73-b0ff-b96774327bf5" (UID: "d2c97b1d-c171-4a73-b0ff-b96774327bf5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 09:08:23 crc kubenswrapper[4795]: I1129 09:08:23.009114 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9lr4\" (UniqueName: \"kubernetes.io/projected/d2c97b1d-c171-4a73-b0ff-b96774327bf5-kube-api-access-z9lr4\") on node \"crc\" DevicePath \"\"" Nov 29 09:08:23 crc kubenswrapper[4795]: I1129 09:08:23.009148 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2c97b1d-c171-4a73-b0ff-b96774327bf5-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 09:08:23 crc kubenswrapper[4795]: I1129 09:08:23.181419 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zfgcp" event={"ID":"d2c97b1d-c171-4a73-b0ff-b96774327bf5","Type":"ContainerDied","Data":"9e81c004755bf321f1dfb764147cb02a0a7bd231948da80575898d39fd23d9e3"} Nov 29 09:08:23 crc kubenswrapper[4795]: I1129 09:08:23.181456 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zfgcp" Nov 29 09:08:23 crc kubenswrapper[4795]: I1129 09:08:23.182358 4795 scope.go:117] "RemoveContainer" containerID="e8ced314daef7c738fd344cfa0d4dfa0459505cca15796b1e6b7eb7ef1995ca6" Nov 29 09:08:23 crc kubenswrapper[4795]: I1129 09:08:23.239243 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zfgcp"] Nov 29 09:08:23 crc kubenswrapper[4795]: I1129 09:08:23.245775 4795 scope.go:117] "RemoveContainer" containerID="87e2cb09c0a89414a684c2fd115065ab655fd9f58b24445d04db123e129c2af5" Nov 29 09:08:23 crc kubenswrapper[4795]: I1129 09:08:23.268606 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zfgcp"] Nov 29 09:08:23 crc kubenswrapper[4795]: I1129 09:08:23.313085 4795 scope.go:117] "RemoveContainer" containerID="c74ccafaffcc4482371b680292d6446359df1a21c7c161d4e1346adc6d7ebdb0" Nov 29 09:08:25 crc kubenswrapper[4795]: I1129 09:08:25.615917 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2c97b1d-c171-4a73-b0ff-b96774327bf5" path="/var/lib/kubelet/pods/d2c97b1d-c171-4a73-b0ff-b96774327bf5/volumes" Nov 29 09:08:25 crc kubenswrapper[4795]: E1129 09:08:25.622389 4795 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.347s" Nov 29 09:08:25 crc kubenswrapper[4795]: I1129 09:08:25.625456 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8p2s6" podUID="4a4935c9-c4ea-491c-8b9a-a20074780fac" containerName="registry-server" containerID="cri-o://23273933eaf04fbbc13e4ba96629d6e285acf2461c81bfe3caab1143eed0aad1" gracePeriod=2 Nov 29 09:08:26 crc kubenswrapper[4795]: I1129 09:08:26.904401 4795 generic.go:334] "Generic (PLEG): container finished" podID="4a4935c9-c4ea-491c-8b9a-a20074780fac" 
containerID="23273933eaf04fbbc13e4ba96629d6e285acf2461c81bfe3caab1143eed0aad1" exitCode=0 Nov 29 09:08:26 crc kubenswrapper[4795]: I1129 09:08:26.904433 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8p2s6" event={"ID":"4a4935c9-c4ea-491c-8b9a-a20074780fac","Type":"ContainerDied","Data":"23273933eaf04fbbc13e4ba96629d6e285acf2461c81bfe3caab1143eed0aad1"} Nov 29 09:08:27 crc kubenswrapper[4795]: I1129 09:08:27.246932 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8p2s6" Nov 29 09:08:27 crc kubenswrapper[4795]: I1129 09:08:27.441987 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a4935c9-c4ea-491c-8b9a-a20074780fac-utilities\") pod \"4a4935c9-c4ea-491c-8b9a-a20074780fac\" (UID: \"4a4935c9-c4ea-491c-8b9a-a20074780fac\") " Nov 29 09:08:27 crc kubenswrapper[4795]: I1129 09:08:27.442171 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5llsk\" (UniqueName: \"kubernetes.io/projected/4a4935c9-c4ea-491c-8b9a-a20074780fac-kube-api-access-5llsk\") pod \"4a4935c9-c4ea-491c-8b9a-a20074780fac\" (UID: \"4a4935c9-c4ea-491c-8b9a-a20074780fac\") " Nov 29 09:08:27 crc kubenswrapper[4795]: I1129 09:08:27.442201 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a4935c9-c4ea-491c-8b9a-a20074780fac-catalog-content\") pod \"4a4935c9-c4ea-491c-8b9a-a20074780fac\" (UID: \"4a4935c9-c4ea-491c-8b9a-a20074780fac\") " Nov 29 09:08:27 crc kubenswrapper[4795]: I1129 09:08:27.443376 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a4935c9-c4ea-491c-8b9a-a20074780fac-utilities" (OuterVolumeSpecName: "utilities") pod "4a4935c9-c4ea-491c-8b9a-a20074780fac" (UID: 
"4a4935c9-c4ea-491c-8b9a-a20074780fac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 09:08:27 crc kubenswrapper[4795]: I1129 09:08:27.452563 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a4935c9-c4ea-491c-8b9a-a20074780fac-kube-api-access-5llsk" (OuterVolumeSpecName: "kube-api-access-5llsk") pod "4a4935c9-c4ea-491c-8b9a-a20074780fac" (UID: "4a4935c9-c4ea-491c-8b9a-a20074780fac"). InnerVolumeSpecName "kube-api-access-5llsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 09:08:27 crc kubenswrapper[4795]: I1129 09:08:27.518439 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a4935c9-c4ea-491c-8b9a-a20074780fac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4a4935c9-c4ea-491c-8b9a-a20074780fac" (UID: "4a4935c9-c4ea-491c-8b9a-a20074780fac"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 09:08:27 crc kubenswrapper[4795]: I1129 09:08:27.545066 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a4935c9-c4ea-491c-8b9a-a20074780fac-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 09:08:27 crc kubenswrapper[4795]: I1129 09:08:27.545102 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5llsk\" (UniqueName: \"kubernetes.io/projected/4a4935c9-c4ea-491c-8b9a-a20074780fac-kube-api-access-5llsk\") on node \"crc\" DevicePath \"\"" Nov 29 09:08:27 crc kubenswrapper[4795]: I1129 09:08:27.545113 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a4935c9-c4ea-491c-8b9a-a20074780fac-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 09:08:27 crc kubenswrapper[4795]: I1129 09:08:27.919432 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-8p2s6" event={"ID":"4a4935c9-c4ea-491c-8b9a-a20074780fac","Type":"ContainerDied","Data":"b1f1c68142259329bf5d0b1670c38aacf3d5afdc4d6b918532f24a7484afc733"} Nov 29 09:08:27 crc kubenswrapper[4795]: I1129 09:08:27.919495 4795 scope.go:117] "RemoveContainer" containerID="23273933eaf04fbbc13e4ba96629d6e285acf2461c81bfe3caab1143eed0aad1" Nov 29 09:08:27 crc kubenswrapper[4795]: I1129 09:08:27.919509 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8p2s6" Nov 29 09:08:27 crc kubenswrapper[4795]: I1129 09:08:27.986849 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8p2s6"] Nov 29 09:08:27 crc kubenswrapper[4795]: I1129 09:08:27.993557 4795 scope.go:117] "RemoveContainer" containerID="6181413b93a5ee13be899145898442dfac7290671857c7aae82a321078f98845" Nov 29 09:08:28 crc kubenswrapper[4795]: I1129 09:08:28.000228 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8p2s6"] Nov 29 09:08:28 crc kubenswrapper[4795]: I1129 09:08:28.290644 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a4935c9-c4ea-491c-8b9a-a20074780fac" path="/var/lib/kubelet/pods/4a4935c9-c4ea-491c-8b9a-a20074780fac/volumes" Nov 29 09:08:28 crc kubenswrapper[4795]: I1129 09:08:28.495889 4795 scope.go:117] "RemoveContainer" containerID="d3849b66ed1237cf8007c613018c6b5a86a4502e711a906c8b48faa4a8c620e0" Nov 29 09:08:41 crc kubenswrapper[4795]: I1129 09:08:41.941450 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 09:08:41 crc kubenswrapper[4795]: I1129 09:08:41.942506 4795 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 09:08:41 crc kubenswrapper[4795]: I1129 09:08:41.942612 4795 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" Nov 29 09:08:41 crc kubenswrapper[4795]: I1129 09:08:41.943930 4795 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"218724df802519bbb881bb31211f4f6c81a4822a96523ad04fdb2384ff35b7c7"} pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 09:08:41 crc kubenswrapper[4795]: I1129 09:08:41.944000 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" containerID="cri-o://218724df802519bbb881bb31211f4f6c81a4822a96523ad04fdb2384ff35b7c7" gracePeriod=600 Nov 29 09:08:42 crc kubenswrapper[4795]: I1129 09:08:42.089419 4795 generic.go:334] "Generic (PLEG): container finished" podID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerID="218724df802519bbb881bb31211f4f6c81a4822a96523ad04fdb2384ff35b7c7" exitCode=0 Nov 29 09:08:42 crc kubenswrapper[4795]: I1129 09:08:42.089475 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerDied","Data":"218724df802519bbb881bb31211f4f6c81a4822a96523ad04fdb2384ff35b7c7"} Nov 29 09:08:42 crc kubenswrapper[4795]: I1129 09:08:42.089548 4795 
scope.go:117] "RemoveContainer" containerID="bae7a18bd4142aa219365a49652a2ec926c5c7d711430a0db99f082fa2c943e8"
Nov 29 09:08:42 crc kubenswrapper[4795]: E1129 09:08:42.146052 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1"
Nov 29 09:08:43 crc kubenswrapper[4795]: I1129 09:08:43.102405 4795 scope.go:117] "RemoveContainer" containerID="218724df802519bbb881bb31211f4f6c81a4822a96523ad04fdb2384ff35b7c7"
Nov 29 09:08:43 crc kubenswrapper[4795]: E1129 09:08:43.102733 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1"
Nov 29 09:08:58 crc kubenswrapper[4795]: I1129 09:08:58.277851 4795 scope.go:117] "RemoveContainer" containerID="218724df802519bbb881bb31211f4f6c81a4822a96523ad04fdb2384ff35b7c7"
Nov 29 09:08:58 crc kubenswrapper[4795]: E1129 09:08:58.279306 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1"
Nov 29 09:09:12 crc kubenswrapper[4795]: I1129 09:09:12.277870 4795 scope.go:117] "RemoveContainer" containerID="218724df802519bbb881bb31211f4f6c81a4822a96523ad04fdb2384ff35b7c7"
Nov 29 09:09:12 crc kubenswrapper[4795]: E1129 09:09:12.278772 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1"
Nov 29 09:09:24 crc kubenswrapper[4795]: I1129 09:09:24.286175 4795 scope.go:117] "RemoveContainer" containerID="218724df802519bbb881bb31211f4f6c81a4822a96523ad04fdb2384ff35b7c7"
Nov 29 09:09:24 crc kubenswrapper[4795]: E1129 09:09:24.287743 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1"
Nov 29 09:09:35 crc kubenswrapper[4795]: I1129 09:09:35.277078 4795 scope.go:117] "RemoveContainer" containerID="218724df802519bbb881bb31211f4f6c81a4822a96523ad04fdb2384ff35b7c7"
Nov 29 09:09:35 crc kubenswrapper[4795]: E1129 09:09:35.277937 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1"
Nov 29 09:09:50 crc kubenswrapper[4795]: I1129 09:09:50.279983 4795 scope.go:117] "RemoveContainer" containerID="218724df802519bbb881bb31211f4f6c81a4822a96523ad04fdb2384ff35b7c7"
Nov 29 09:09:50 crc kubenswrapper[4795]: E1129 09:09:50.280926 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1"
Nov 29 09:09:54 crc kubenswrapper[4795]: I1129 09:09:54.010478 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rxgns"]
Nov 29 09:09:54 crc kubenswrapper[4795]: E1129 09:09:54.013770 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2c97b1d-c171-4a73-b0ff-b96774327bf5" containerName="extract-content"
Nov 29 09:09:54 crc kubenswrapper[4795]: I1129 09:09:54.013803 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2c97b1d-c171-4a73-b0ff-b96774327bf5" containerName="extract-content"
Nov 29 09:09:54 crc kubenswrapper[4795]: E1129 09:09:54.013922 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2c97b1d-c171-4a73-b0ff-b96774327bf5" containerName="registry-server"
Nov 29 09:09:54 crc kubenswrapper[4795]: I1129 09:09:54.013934 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2c97b1d-c171-4a73-b0ff-b96774327bf5" containerName="registry-server"
Nov 29 09:09:54 crc kubenswrapper[4795]: E1129 09:09:54.013950 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a4935c9-c4ea-491c-8b9a-a20074780fac" containerName="extract-utilities"
Nov 29 09:09:54 crc kubenswrapper[4795]: I1129 09:09:54.013957 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a4935c9-c4ea-491c-8b9a-a20074780fac" containerName="extract-utilities"
Nov 29 09:09:54 crc kubenswrapper[4795]: E1129 09:09:54.013972 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a4935c9-c4ea-491c-8b9a-a20074780fac" containerName="registry-server"
Nov 29 09:09:54 crc kubenswrapper[4795]: I1129 09:09:54.013979 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a4935c9-c4ea-491c-8b9a-a20074780fac" containerName="registry-server"
Nov 29 09:09:54 crc kubenswrapper[4795]: E1129 09:09:54.013993 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2c97b1d-c171-4a73-b0ff-b96774327bf5" containerName="extract-utilities"
Nov 29 09:09:54 crc kubenswrapper[4795]: I1129 09:09:54.013998 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2c97b1d-c171-4a73-b0ff-b96774327bf5" containerName="extract-utilities"
Nov 29 09:09:54 crc kubenswrapper[4795]: E1129 09:09:54.014028 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a4935c9-c4ea-491c-8b9a-a20074780fac" containerName="extract-content"
Nov 29 09:09:54 crc kubenswrapper[4795]: I1129 09:09:54.014034 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a4935c9-c4ea-491c-8b9a-a20074780fac" containerName="extract-content"
Nov 29 09:09:54 crc kubenswrapper[4795]: I1129 09:09:54.014870 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2c97b1d-c171-4a73-b0ff-b96774327bf5" containerName="registry-server"
Nov 29 09:09:54 crc kubenswrapper[4795]: I1129 09:09:54.014898 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a4935c9-c4ea-491c-8b9a-a20074780fac" containerName="registry-server"
Nov 29 09:09:54 crc kubenswrapper[4795]: I1129 09:09:54.017854 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rxgns"
Nov 29 09:09:54 crc kubenswrapper[4795]: I1129 09:09:54.538079 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a231989-acaf-4ed8-90a6-60dc47de7750-utilities\") pod \"redhat-marketplace-rxgns\" (UID: \"3a231989-acaf-4ed8-90a6-60dc47de7750\") " pod="openshift-marketplace/redhat-marketplace-rxgns"
Nov 29 09:09:54 crc kubenswrapper[4795]: I1129 09:09:54.541515 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a231989-acaf-4ed8-90a6-60dc47de7750-catalog-content\") pod \"redhat-marketplace-rxgns\" (UID: \"3a231989-acaf-4ed8-90a6-60dc47de7750\") " pod="openshift-marketplace/redhat-marketplace-rxgns"
Nov 29 09:09:54 crc kubenswrapper[4795]: I1129 09:09:54.541626 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25v74\" (UniqueName: \"kubernetes.io/projected/3a231989-acaf-4ed8-90a6-60dc47de7750-kube-api-access-25v74\") pod \"redhat-marketplace-rxgns\" (UID: \"3a231989-acaf-4ed8-90a6-60dc47de7750\") " pod="openshift-marketplace/redhat-marketplace-rxgns"
Nov 29 09:09:54 crc kubenswrapper[4795]: I1129 09:09:54.655277 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a231989-acaf-4ed8-90a6-60dc47de7750-utilities\") pod \"redhat-marketplace-rxgns\" (UID: \"3a231989-acaf-4ed8-90a6-60dc47de7750\") " pod="openshift-marketplace/redhat-marketplace-rxgns"
Nov 29 09:09:54 crc kubenswrapper[4795]: I1129 09:09:54.655819 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a231989-acaf-4ed8-90a6-60dc47de7750-catalog-content\") pod \"redhat-marketplace-rxgns\" (UID: \"3a231989-acaf-4ed8-90a6-60dc47de7750\") " pod="openshift-marketplace/redhat-marketplace-rxgns"
Nov 29 09:09:54 crc kubenswrapper[4795]: I1129 09:09:54.655980 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25v74\" (UniqueName: \"kubernetes.io/projected/3a231989-acaf-4ed8-90a6-60dc47de7750-kube-api-access-25v74\") pod \"redhat-marketplace-rxgns\" (UID: \"3a231989-acaf-4ed8-90a6-60dc47de7750\") " pod="openshift-marketplace/redhat-marketplace-rxgns"
Nov 29 09:09:54 crc kubenswrapper[4795]: I1129 09:09:54.658721 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a231989-acaf-4ed8-90a6-60dc47de7750-utilities\") pod \"redhat-marketplace-rxgns\" (UID: \"3a231989-acaf-4ed8-90a6-60dc47de7750\") " pod="openshift-marketplace/redhat-marketplace-rxgns"
Nov 29 09:09:54 crc kubenswrapper[4795]: I1129 09:09:54.658770 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a231989-acaf-4ed8-90a6-60dc47de7750-catalog-content\") pod \"redhat-marketplace-rxgns\" (UID: \"3a231989-acaf-4ed8-90a6-60dc47de7750\") " pod="openshift-marketplace/redhat-marketplace-rxgns"
Nov 29 09:09:54 crc kubenswrapper[4795]: I1129 09:09:54.718895 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25v74\" (UniqueName: \"kubernetes.io/projected/3a231989-acaf-4ed8-90a6-60dc47de7750-kube-api-access-25v74\") pod \"redhat-marketplace-rxgns\" (UID: \"3a231989-acaf-4ed8-90a6-60dc47de7750\") " pod="openshift-marketplace/redhat-marketplace-rxgns"
Nov 29 09:09:54 crc kubenswrapper[4795]: I1129 09:09:54.805927 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rxgns"]
Nov 29 09:09:54 crc kubenswrapper[4795]: I1129 09:09:54.862215 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rxgns"
Nov 29 09:09:55 crc kubenswrapper[4795]: I1129 09:09:55.593033 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rxgns"]
Nov 29 09:09:55 crc kubenswrapper[4795]: I1129 09:09:55.956288 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxgns" event={"ID":"3a231989-acaf-4ed8-90a6-60dc47de7750","Type":"ContainerStarted","Data":"1abf5cbf9f6603e7b3c5390745101f8aa7d80a1762f8fca04f7e053aeb830229"}
Nov 29 09:09:56 crc kubenswrapper[4795]: I1129 09:09:56.971256 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxgns" event={"ID":"3a231989-acaf-4ed8-90a6-60dc47de7750","Type":"ContainerDied","Data":"60310f4c2c50c4730c75006309ed22c575d3efe139033963db8c635f782b4113"}
Nov 29 09:09:56 crc kubenswrapper[4795]: I1129 09:09:56.971110 4795 generic.go:334] "Generic (PLEG): container finished" podID="3a231989-acaf-4ed8-90a6-60dc47de7750" containerID="60310f4c2c50c4730c75006309ed22c575d3efe139033963db8c635f782b4113" exitCode=0
Nov 29 09:09:58 crc kubenswrapper[4795]: I1129 09:09:58.997810 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxgns" event={"ID":"3a231989-acaf-4ed8-90a6-60dc47de7750","Type":"ContainerStarted","Data":"72ee36eb444dc46c2649eb5dd5e42bf480f419a1f9132df6072ae07c09a7dc00"}
Nov 29 09:10:00 crc kubenswrapper[4795]: I1129 09:10:00.011403 4795 generic.go:334] "Generic (PLEG): container finished" podID="3a231989-acaf-4ed8-90a6-60dc47de7750" containerID="72ee36eb444dc46c2649eb5dd5e42bf480f419a1f9132df6072ae07c09a7dc00" exitCode=0
Nov 29 09:10:00 crc kubenswrapper[4795]: I1129 09:10:00.011791 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxgns" event={"ID":"3a231989-acaf-4ed8-90a6-60dc47de7750","Type":"ContainerDied","Data":"72ee36eb444dc46c2649eb5dd5e42bf480f419a1f9132df6072ae07c09a7dc00"}
Nov 29 09:10:01 crc kubenswrapper[4795]: I1129 09:10:01.043211 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxgns" event={"ID":"3a231989-acaf-4ed8-90a6-60dc47de7750","Type":"ContainerStarted","Data":"a37b835b13ba1ad395a48b9bd514b18abe286b98ff3d161cbb0caf6f1b401aca"}
Nov 29 09:10:01 crc kubenswrapper[4795]: I1129 09:10:01.065754 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rxgns" podStartSLOduration=4.61118452 podStartE2EDuration="8.06544448s" podCreationTimestamp="2025-11-29 09:09:53 +0000 UTC" firstStartedPulling="2025-11-29 09:09:56.975344702 +0000 UTC m=+5442.950920512" lastFinishedPulling="2025-11-29 09:10:00.429604682 +0000 UTC m=+5446.405180472" observedRunningTime="2025-11-29 09:10:01.062396944 +0000 UTC m=+5447.037972754" watchObservedRunningTime="2025-11-29 09:10:01.06544448 +0000 UTC m=+5447.041020290"
Nov 29 09:10:02 crc kubenswrapper[4795]: I1129 09:10:02.276800 4795 scope.go:117] "RemoveContainer" containerID="218724df802519bbb881bb31211f4f6c81a4822a96523ad04fdb2384ff35b7c7"
Nov 29 09:10:02 crc kubenswrapper[4795]: E1129 09:10:02.277153 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1"
Nov 29 09:10:04 crc kubenswrapper[4795]: I1129 09:10:04.863413 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rxgns"
Nov 29 09:10:04 crc kubenswrapper[4795]: I1129 09:10:04.865298 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rxgns"
Nov 29 09:10:04 crc kubenswrapper[4795]: I1129 09:10:04.932074 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rxgns"
Nov 29 09:10:05 crc kubenswrapper[4795]: I1129 09:10:05.185051 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rxgns"
Nov 29 09:10:05 crc kubenswrapper[4795]: I1129 09:10:05.250401 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rxgns"]
Nov 29 09:10:07 crc kubenswrapper[4795]: I1129 09:10:07.139510 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rxgns" podUID="3a231989-acaf-4ed8-90a6-60dc47de7750" containerName="registry-server" containerID="cri-o://a37b835b13ba1ad395a48b9bd514b18abe286b98ff3d161cbb0caf6f1b401aca" gracePeriod=2
Nov 29 09:10:07 crc kubenswrapper[4795]: I1129 09:10:07.965161 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rxgns"
Nov 29 09:10:08 crc kubenswrapper[4795]: I1129 09:10:08.047887 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25v74\" (UniqueName: \"kubernetes.io/projected/3a231989-acaf-4ed8-90a6-60dc47de7750-kube-api-access-25v74\") pod \"3a231989-acaf-4ed8-90a6-60dc47de7750\" (UID: \"3a231989-acaf-4ed8-90a6-60dc47de7750\") "
Nov 29 09:10:08 crc kubenswrapper[4795]: I1129 09:10:08.048200 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a231989-acaf-4ed8-90a6-60dc47de7750-utilities\") pod \"3a231989-acaf-4ed8-90a6-60dc47de7750\" (UID: \"3a231989-acaf-4ed8-90a6-60dc47de7750\") "
Nov 29 09:10:08 crc kubenswrapper[4795]: I1129 09:10:08.048459 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a231989-acaf-4ed8-90a6-60dc47de7750-catalog-content\") pod \"3a231989-acaf-4ed8-90a6-60dc47de7750\" (UID: \"3a231989-acaf-4ed8-90a6-60dc47de7750\") "
Nov 29 09:10:08 crc kubenswrapper[4795]: I1129 09:10:08.050193 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a231989-acaf-4ed8-90a6-60dc47de7750-utilities" (OuterVolumeSpecName: "utilities") pod "3a231989-acaf-4ed8-90a6-60dc47de7750" (UID: "3a231989-acaf-4ed8-90a6-60dc47de7750"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 09:10:08 crc kubenswrapper[4795]: I1129 09:10:08.072412 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a231989-acaf-4ed8-90a6-60dc47de7750-kube-api-access-25v74" (OuterVolumeSpecName: "kube-api-access-25v74") pod "3a231989-acaf-4ed8-90a6-60dc47de7750" (UID: "3a231989-acaf-4ed8-90a6-60dc47de7750"). InnerVolumeSpecName "kube-api-access-25v74". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 09:10:08 crc kubenswrapper[4795]: I1129 09:10:08.114301 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a231989-acaf-4ed8-90a6-60dc47de7750-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a231989-acaf-4ed8-90a6-60dc47de7750" (UID: "3a231989-acaf-4ed8-90a6-60dc47de7750"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 09:10:08 crc kubenswrapper[4795]: I1129 09:10:08.154941 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a231989-acaf-4ed8-90a6-60dc47de7750-utilities\") on node \"crc\" DevicePath \"\""
Nov 29 09:10:08 crc kubenswrapper[4795]: I1129 09:10:08.154984 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a231989-acaf-4ed8-90a6-60dc47de7750-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 29 09:10:08 crc kubenswrapper[4795]: I1129 09:10:08.154997 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25v74\" (UniqueName: \"kubernetes.io/projected/3a231989-acaf-4ed8-90a6-60dc47de7750-kube-api-access-25v74\") on node \"crc\" DevicePath \"\""
Nov 29 09:10:08 crc kubenswrapper[4795]: I1129 09:10:08.162833 4795 generic.go:334] "Generic (PLEG): container finished" podID="3a231989-acaf-4ed8-90a6-60dc47de7750" containerID="a37b835b13ba1ad395a48b9bd514b18abe286b98ff3d161cbb0caf6f1b401aca" exitCode=0
Nov 29 09:10:08 crc kubenswrapper[4795]: I1129 09:10:08.162960 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxgns" event={"ID":"3a231989-acaf-4ed8-90a6-60dc47de7750","Type":"ContainerDied","Data":"a37b835b13ba1ad395a48b9bd514b18abe286b98ff3d161cbb0caf6f1b401aca"}
Nov 29 09:10:08 crc kubenswrapper[4795]: I1129 09:10:08.163087 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxgns" event={"ID":"3a231989-acaf-4ed8-90a6-60dc47de7750","Type":"ContainerDied","Data":"1abf5cbf9f6603e7b3c5390745101f8aa7d80a1762f8fca04f7e053aeb830229"}
Nov 29 09:10:08 crc kubenswrapper[4795]: I1129 09:10:08.163010 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rxgns"
Nov 29 09:10:08 crc kubenswrapper[4795]: I1129 09:10:08.163134 4795 scope.go:117] "RemoveContainer" containerID="a37b835b13ba1ad395a48b9bd514b18abe286b98ff3d161cbb0caf6f1b401aca"
Nov 29 09:10:08 crc kubenswrapper[4795]: I1129 09:10:08.188788 4795 scope.go:117] "RemoveContainer" containerID="72ee36eb444dc46c2649eb5dd5e42bf480f419a1f9132df6072ae07c09a7dc00"
Nov 29 09:10:08 crc kubenswrapper[4795]: I1129 09:10:08.243930 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rxgns"]
Nov 29 09:10:08 crc kubenswrapper[4795]: I1129 09:10:08.292074 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rxgns"]
Nov 29 09:10:08 crc kubenswrapper[4795]: I1129 09:10:08.300257 4795 scope.go:117] "RemoveContainer" containerID="60310f4c2c50c4730c75006309ed22c575d3efe139033963db8c635f782b4113"
Nov 29 09:10:08 crc kubenswrapper[4795]: I1129 09:10:08.349406 4795 scope.go:117] "RemoveContainer" containerID="a37b835b13ba1ad395a48b9bd514b18abe286b98ff3d161cbb0caf6f1b401aca"
Nov 29 09:10:08 crc kubenswrapper[4795]: E1129 09:10:08.350435 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a37b835b13ba1ad395a48b9bd514b18abe286b98ff3d161cbb0caf6f1b401aca\": container with ID starting with a37b835b13ba1ad395a48b9bd514b18abe286b98ff3d161cbb0caf6f1b401aca not found: ID does not exist" containerID="a37b835b13ba1ad395a48b9bd514b18abe286b98ff3d161cbb0caf6f1b401aca"
Nov 29 09:10:08 crc kubenswrapper[4795]: I1129 09:10:08.350695 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a37b835b13ba1ad395a48b9bd514b18abe286b98ff3d161cbb0caf6f1b401aca"} err="failed to get container status \"a37b835b13ba1ad395a48b9bd514b18abe286b98ff3d161cbb0caf6f1b401aca\": rpc error: code = NotFound desc = could not find container \"a37b835b13ba1ad395a48b9bd514b18abe286b98ff3d161cbb0caf6f1b401aca\": container with ID starting with a37b835b13ba1ad395a48b9bd514b18abe286b98ff3d161cbb0caf6f1b401aca not found: ID does not exist"
Nov 29 09:10:08 crc kubenswrapper[4795]: I1129 09:10:08.350731 4795 scope.go:117] "RemoveContainer" containerID="72ee36eb444dc46c2649eb5dd5e42bf480f419a1f9132df6072ae07c09a7dc00"
Nov 29 09:10:08 crc kubenswrapper[4795]: E1129 09:10:08.351193 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72ee36eb444dc46c2649eb5dd5e42bf480f419a1f9132df6072ae07c09a7dc00\": container with ID starting with 72ee36eb444dc46c2649eb5dd5e42bf480f419a1f9132df6072ae07c09a7dc00 not found: ID does not exist" containerID="72ee36eb444dc46c2649eb5dd5e42bf480f419a1f9132df6072ae07c09a7dc00"
Nov 29 09:10:08 crc kubenswrapper[4795]: I1129 09:10:08.351241 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72ee36eb444dc46c2649eb5dd5e42bf480f419a1f9132df6072ae07c09a7dc00"} err="failed to get container status \"72ee36eb444dc46c2649eb5dd5e42bf480f419a1f9132df6072ae07c09a7dc00\": rpc error: code = NotFound desc = could not find container \"72ee36eb444dc46c2649eb5dd5e42bf480f419a1f9132df6072ae07c09a7dc00\": container with ID starting with 72ee36eb444dc46c2649eb5dd5e42bf480f419a1f9132df6072ae07c09a7dc00 not found: ID does not exist"
Nov 29 09:10:08 crc kubenswrapper[4795]: I1129 09:10:08.351287 4795 scope.go:117] "RemoveContainer" containerID="60310f4c2c50c4730c75006309ed22c575d3efe139033963db8c635f782b4113"
Nov 29 09:10:08 crc kubenswrapper[4795]: E1129 09:10:08.351581 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60310f4c2c50c4730c75006309ed22c575d3efe139033963db8c635f782b4113\": container with ID starting with 60310f4c2c50c4730c75006309ed22c575d3efe139033963db8c635f782b4113 not found: ID does not exist" containerID="60310f4c2c50c4730c75006309ed22c575d3efe139033963db8c635f782b4113"
Nov 29 09:10:08 crc kubenswrapper[4795]: I1129 09:10:08.351618 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60310f4c2c50c4730c75006309ed22c575d3efe139033963db8c635f782b4113"} err="failed to get container status \"60310f4c2c50c4730c75006309ed22c575d3efe139033963db8c635f782b4113\": rpc error: code = NotFound desc = could not find container \"60310f4c2c50c4730c75006309ed22c575d3efe139033963db8c635f782b4113\": container with ID starting with 60310f4c2c50c4730c75006309ed22c575d3efe139033963db8c635f782b4113 not found: ID does not exist"
Nov 29 09:10:10 crc kubenswrapper[4795]: I1129 09:10:10.301166 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a231989-acaf-4ed8-90a6-60dc47de7750" path="/var/lib/kubelet/pods/3a231989-acaf-4ed8-90a6-60dc47de7750/volumes"
Nov 29 09:10:14 crc kubenswrapper[4795]: I1129 09:10:14.288316 4795 scope.go:117] "RemoveContainer" containerID="218724df802519bbb881bb31211f4f6c81a4822a96523ad04fdb2384ff35b7c7"
Nov 29 09:10:14 crc kubenswrapper[4795]: E1129 09:10:14.289276 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1"
Nov 29 09:10:26 crc kubenswrapper[4795]: I1129 09:10:26.275881 4795 scope.go:117] "RemoveContainer" containerID="218724df802519bbb881bb31211f4f6c81a4822a96523ad04fdb2384ff35b7c7"
Nov 29 09:10:26 crc kubenswrapper[4795]: E1129 09:10:26.277154 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1"
Nov 29 09:10:39 crc kubenswrapper[4795]: I1129 09:10:39.276032 4795 scope.go:117] "RemoveContainer" containerID="218724df802519bbb881bb31211f4f6c81a4822a96523ad04fdb2384ff35b7c7"
Nov 29 09:10:39 crc kubenswrapper[4795]: E1129 09:10:39.278379 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1"
Nov 29 09:10:52 crc kubenswrapper[4795]: I1129 09:10:52.277865 4795 scope.go:117] "RemoveContainer" containerID="218724df802519bbb881bb31211f4f6c81a4822a96523ad04fdb2384ff35b7c7"
Nov 29 09:10:52 crc kubenswrapper[4795]: E1129 09:10:52.279035 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1"
Nov 29 09:11:04 crc kubenswrapper[4795]: I1129 09:11:04.286171 4795 scope.go:117] "RemoveContainer" containerID="218724df802519bbb881bb31211f4f6c81a4822a96523ad04fdb2384ff35b7c7"
Nov 29 09:11:04 crc kubenswrapper[4795]: E1129 09:11:04.287131 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1"
Nov 29 09:11:16 crc kubenswrapper[4795]: I1129 09:11:16.278786 4795 scope.go:117] "RemoveContainer" containerID="218724df802519bbb881bb31211f4f6c81a4822a96523ad04fdb2384ff35b7c7"
Nov 29 09:11:16 crc kubenswrapper[4795]: E1129 09:11:16.279699 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1"
Nov 29 09:11:31 crc kubenswrapper[4795]: I1129 09:11:31.276798 4795 scope.go:117] "RemoveContainer" containerID="218724df802519bbb881bb31211f4f6c81a4822a96523ad04fdb2384ff35b7c7"
Nov 29 09:11:31 crc kubenswrapper[4795]: E1129 09:11:31.277708 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1"
Nov 29 09:11:46 crc kubenswrapper[4795]: I1129 09:11:46.276624 4795 scope.go:117] "RemoveContainer" containerID="218724df802519bbb881bb31211f4f6c81a4822a96523ad04fdb2384ff35b7c7"
Nov 29 09:11:46 crc kubenswrapper[4795]: E1129 09:11:46.277357 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1"
Nov 29 09:11:58 crc kubenswrapper[4795]: I1129 09:11:58.276323 4795 scope.go:117] "RemoveContainer" containerID="218724df802519bbb881bb31211f4f6c81a4822a96523ad04fdb2384ff35b7c7"
Nov 29 09:11:58 crc kubenswrapper[4795]: E1129 09:11:58.276940 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1"
Nov 29 09:12:11 crc kubenswrapper[4795]: I1129 09:12:11.275951 4795 scope.go:117] "RemoveContainer" containerID="218724df802519bbb881bb31211f4f6c81a4822a96523ad04fdb2384ff35b7c7"
Nov 29 09:12:11 crc kubenswrapper[4795]: E1129 09:12:11.277007 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1"
Nov 29 09:12:25 crc kubenswrapper[4795]: I1129 09:12:25.276638 4795 scope.go:117] "RemoveContainer" containerID="218724df802519bbb881bb31211f4f6c81a4822a96523ad04fdb2384ff35b7c7"
Nov 29 09:12:25 crc kubenswrapper[4795]: E1129 09:12:25.277757 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1"
Nov 29 09:12:40 crc kubenswrapper[4795]: I1129 09:12:40.281269 4795 scope.go:117] "RemoveContainer" containerID="218724df802519bbb881bb31211f4f6c81a4822a96523ad04fdb2384ff35b7c7"
Nov 29 09:12:40 crc kubenswrapper[4795]: E1129 09:12:40.282778 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1"
Nov 29 09:12:51 crc kubenswrapper[4795]: I1129 09:12:51.277353 4795 scope.go:117] "RemoveContainer" containerID="218724df802519bbb881bb31211f4f6c81a4822a96523ad04fdb2384ff35b7c7"
Nov 29 09:12:51 crc kubenswrapper[4795]: E1129 09:12:51.278325 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1"
Nov 29 09:13:06 crc kubenswrapper[4795]: I1129 09:13:06.275822 4795 scope.go:117] "RemoveContainer" containerID="218724df802519bbb881bb31211f4f6c81a4822a96523ad04fdb2384ff35b7c7"
Nov 29 09:13:06 crc kubenswrapper[4795]: E1129 09:13:06.276676 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1"
Nov 29 09:13:20 crc kubenswrapper[4795]: I1129 09:13:20.277146 4795 scope.go:117] "RemoveContainer" containerID="218724df802519bbb881bb31211f4f6c81a4822a96523ad04fdb2384ff35b7c7"
Nov 29 09:13:20 crc kubenswrapper[4795]: E1129 09:13:20.278626 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1"
Nov 29 09:13:31 crc kubenswrapper[4795]: I1129 09:13:31.275760 4795 scope.go:117] "RemoveContainer" containerID="218724df802519bbb881bb31211f4f6c81a4822a96523ad04fdb2384ff35b7c7"
Nov 29 09:13:31 crc kubenswrapper[4795]: E1129 09:13:31.276537 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1"
Nov 29 09:13:45 crc kubenswrapper[4795]: I1129 09:13:45.276224 4795 scope.go:117] "RemoveContainer" containerID="218724df802519bbb881bb31211f4f6c81a4822a96523ad04fdb2384ff35b7c7"
Nov 29 09:13:45 crc kubenswrapper[4795]: I1129 09:13:45.794571 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerStarted","Data":"6c628579721cc1552166fc799b21fa600a7e10daf7d5387acee4ce9b0073e7fd"}
Nov 29 09:15:00 crc kubenswrapper[4795]: I1129 09:15:00.183023 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406795-7jq6s"]
Nov 29 09:15:00 crc kubenswrapper[4795]: E1129 09:15:00.184766 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a231989-acaf-4ed8-90a6-60dc47de7750" containerName="registry-server"
Nov 29 09:15:00 crc kubenswrapper[4795]: I1129 09:15:00.184789 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a231989-acaf-4ed8-90a6-60dc47de7750" containerName="registry-server"
Nov 29 09:15:00 crc kubenswrapper[4795]: E1129 09:15:00.184897 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a231989-acaf-4ed8-90a6-60dc47de7750" containerName="extract-content"
Nov 29 09:15:00 crc kubenswrapper[4795]: I1129 09:15:00.184904 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a231989-acaf-4ed8-90a6-60dc47de7750" containerName="extract-content"
Nov 29 09:15:00 crc kubenswrapper[4795]: E1129 09:15:00.184949 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a231989-acaf-4ed8-90a6-60dc47de7750" containerName="extract-utilities"
Nov 29 09:15:00 crc kubenswrapper[4795]: I1129 09:15:00.184958 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a231989-acaf-4ed8-90a6-60dc47de7750" containerName="extract-utilities"
Nov 29 09:15:00 crc kubenswrapper[4795]: I1129 09:15:00.185235 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a231989-acaf-4ed8-90a6-60dc47de7750" containerName="registry-server"
Nov 29 09:15:00 crc kubenswrapper[4795]: I1129 09:15:00.187327 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406795-7jq6s"
Nov 29 09:15:00 crc kubenswrapper[4795]: I1129 09:15:00.190666 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 29 09:15:00 crc kubenswrapper[4795]: I1129 09:15:00.190668 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 29 09:15:00 crc kubenswrapper[4795]: I1129 09:15:00.204240 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406795-7jq6s"]
Nov 29 09:15:00 crc kubenswrapper[4795]: I1129 09:15:00.285953 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6d0c717-3720-4198-9aeb-414fce2bc93f-config-volume\") pod \"collect-profiles-29406795-7jq6s\" (UID: \"a6d0c717-3720-4198-9aeb-414fce2bc93f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406795-7jq6s"
Nov 29 09:15:00 crc kubenswrapper[4795]: I1129 09:15:00.286051 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg9fq\" (UniqueName: \"kubernetes.io/projected/a6d0c717-3720-4198-9aeb-414fce2bc93f-kube-api-access-jg9fq\") pod \"collect-profiles-29406795-7jq6s\" (UID: \"a6d0c717-3720-4198-9aeb-414fce2bc93f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406795-7jq6s"
Nov 29 09:15:00 crc kubenswrapper[4795]: I1129
09:15:00.286236 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6d0c717-3720-4198-9aeb-414fce2bc93f-secret-volume\") pod \"collect-profiles-29406795-7jq6s\" (UID: \"a6d0c717-3720-4198-9aeb-414fce2bc93f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406795-7jq6s" Nov 29 09:15:00 crc kubenswrapper[4795]: I1129 09:15:00.388175 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6d0c717-3720-4198-9aeb-414fce2bc93f-secret-volume\") pod \"collect-profiles-29406795-7jq6s\" (UID: \"a6d0c717-3720-4198-9aeb-414fce2bc93f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406795-7jq6s" Nov 29 09:15:00 crc kubenswrapper[4795]: I1129 09:15:00.388282 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6d0c717-3720-4198-9aeb-414fce2bc93f-config-volume\") pod \"collect-profiles-29406795-7jq6s\" (UID: \"a6d0c717-3720-4198-9aeb-414fce2bc93f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406795-7jq6s" Nov 29 09:15:00 crc kubenswrapper[4795]: I1129 09:15:00.388338 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg9fq\" (UniqueName: \"kubernetes.io/projected/a6d0c717-3720-4198-9aeb-414fce2bc93f-kube-api-access-jg9fq\") pod \"collect-profiles-29406795-7jq6s\" (UID: \"a6d0c717-3720-4198-9aeb-414fce2bc93f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406795-7jq6s" Nov 29 09:15:00 crc kubenswrapper[4795]: I1129 09:15:00.389830 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6d0c717-3720-4198-9aeb-414fce2bc93f-config-volume\") pod \"collect-profiles-29406795-7jq6s\" (UID: \"a6d0c717-3720-4198-9aeb-414fce2bc93f\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29406795-7jq6s" Nov 29 09:15:00 crc kubenswrapper[4795]: I1129 09:15:00.865533 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6d0c717-3720-4198-9aeb-414fce2bc93f-secret-volume\") pod \"collect-profiles-29406795-7jq6s\" (UID: \"a6d0c717-3720-4198-9aeb-414fce2bc93f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406795-7jq6s" Nov 29 09:15:00 crc kubenswrapper[4795]: I1129 09:15:00.865574 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg9fq\" (UniqueName: \"kubernetes.io/projected/a6d0c717-3720-4198-9aeb-414fce2bc93f-kube-api-access-jg9fq\") pod \"collect-profiles-29406795-7jq6s\" (UID: \"a6d0c717-3720-4198-9aeb-414fce2bc93f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406795-7jq6s" Nov 29 09:15:01 crc kubenswrapper[4795]: I1129 09:15:01.108436 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406795-7jq6s" Nov 29 09:15:01 crc kubenswrapper[4795]: I1129 09:15:01.890746 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406795-7jq6s"] Nov 29 09:15:02 crc kubenswrapper[4795]: I1129 09:15:02.737015 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406795-7jq6s" event={"ID":"a6d0c717-3720-4198-9aeb-414fce2bc93f","Type":"ContainerDied","Data":"2bd54561bea630ab98d8e03a059988ad5e7e36f56d4515f8b59521e6b84f864b"} Nov 29 09:15:02 crc kubenswrapper[4795]: I1129 09:15:02.737105 4795 generic.go:334] "Generic (PLEG): container finished" podID="a6d0c717-3720-4198-9aeb-414fce2bc93f" containerID="2bd54561bea630ab98d8e03a059988ad5e7e36f56d4515f8b59521e6b84f864b" exitCode=0 Nov 29 09:15:02 crc kubenswrapper[4795]: I1129 09:15:02.737805 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406795-7jq6s" event={"ID":"a6d0c717-3720-4198-9aeb-414fce2bc93f","Type":"ContainerStarted","Data":"8047cac973814e9b3cb95720e6bfbe5fc7c19627af4eae9864ba461ddea37f6c"} Nov 29 09:15:04 crc kubenswrapper[4795]: I1129 09:15:04.211040 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406795-7jq6s" Nov 29 09:15:04 crc kubenswrapper[4795]: I1129 09:15:04.315881 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6d0c717-3720-4198-9aeb-414fce2bc93f-config-volume\") pod \"a6d0c717-3720-4198-9aeb-414fce2bc93f\" (UID: \"a6d0c717-3720-4198-9aeb-414fce2bc93f\") " Nov 29 09:15:04 crc kubenswrapper[4795]: I1129 09:15:04.317890 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jg9fq\" (UniqueName: \"kubernetes.io/projected/a6d0c717-3720-4198-9aeb-414fce2bc93f-kube-api-access-jg9fq\") pod \"a6d0c717-3720-4198-9aeb-414fce2bc93f\" (UID: \"a6d0c717-3720-4198-9aeb-414fce2bc93f\") " Nov 29 09:15:04 crc kubenswrapper[4795]: I1129 09:15:04.318646 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6d0c717-3720-4198-9aeb-414fce2bc93f-secret-volume\") pod \"a6d0c717-3720-4198-9aeb-414fce2bc93f\" (UID: \"a6d0c717-3720-4198-9aeb-414fce2bc93f\") " Nov 29 09:15:04 crc kubenswrapper[4795]: I1129 09:15:04.318114 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6d0c717-3720-4198-9aeb-414fce2bc93f-config-volume" (OuterVolumeSpecName: "config-volume") pod "a6d0c717-3720-4198-9aeb-414fce2bc93f" (UID: "a6d0c717-3720-4198-9aeb-414fce2bc93f"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 09:15:04 crc kubenswrapper[4795]: I1129 09:15:04.327128 4795 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6d0c717-3720-4198-9aeb-414fce2bc93f-config-volume\") on node \"crc\" DevicePath \"\"" Nov 29 09:15:04 crc kubenswrapper[4795]: I1129 09:15:04.327328 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6d0c717-3720-4198-9aeb-414fce2bc93f-kube-api-access-jg9fq" (OuterVolumeSpecName: "kube-api-access-jg9fq") pod "a6d0c717-3720-4198-9aeb-414fce2bc93f" (UID: "a6d0c717-3720-4198-9aeb-414fce2bc93f"). InnerVolumeSpecName "kube-api-access-jg9fq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 09:15:04 crc kubenswrapper[4795]: I1129 09:15:04.337570 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6d0c717-3720-4198-9aeb-414fce2bc93f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a6d0c717-3720-4198-9aeb-414fce2bc93f" (UID: "a6d0c717-3720-4198-9aeb-414fce2bc93f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 09:15:04 crc kubenswrapper[4795]: I1129 09:15:04.429232 4795 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6d0c717-3720-4198-9aeb-414fce2bc93f-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 29 09:15:04 crc kubenswrapper[4795]: I1129 09:15:04.429282 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jg9fq\" (UniqueName: \"kubernetes.io/projected/a6d0c717-3720-4198-9aeb-414fce2bc93f-kube-api-access-jg9fq\") on node \"crc\" DevicePath \"\"" Nov 29 09:15:04 crc kubenswrapper[4795]: I1129 09:15:04.763571 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406795-7jq6s" event={"ID":"a6d0c717-3720-4198-9aeb-414fce2bc93f","Type":"ContainerDied","Data":"8047cac973814e9b3cb95720e6bfbe5fc7c19627af4eae9864ba461ddea37f6c"} Nov 29 09:15:04 crc kubenswrapper[4795]: I1129 09:15:04.763687 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406795-7jq6s" Nov 29 09:15:04 crc kubenswrapper[4795]: I1129 09:15:04.763978 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8047cac973814e9b3cb95720e6bfbe5fc7c19627af4eae9864ba461ddea37f6c" Nov 29 09:15:05 crc kubenswrapper[4795]: I1129 09:15:05.322722 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406750-rpcjn"] Nov 29 09:15:05 crc kubenswrapper[4795]: I1129 09:15:05.333144 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406750-rpcjn"] Nov 29 09:15:06 crc kubenswrapper[4795]: I1129 09:15:06.291811 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc8556ef-f832-4c85-b401-b0732fd13b58" path="/var/lib/kubelet/pods/dc8556ef-f832-4c85-b401-b0732fd13b58/volumes" Nov 29 09:15:34 crc kubenswrapper[4795]: I1129 09:15:34.880915 4795 scope.go:117] "RemoveContainer" containerID="7ad30826ff744f33020528cf832ceaab0bf6578f0a0bbdc700d7f6983dac08b6" Nov 29 09:16:11 crc kubenswrapper[4795]: I1129 09:16:11.940777 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 09:16:11 crc kubenswrapper[4795]: I1129 09:16:11.941558 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 09:16:41 crc kubenswrapper[4795]: I1129 09:16:41.941512 4795 patch_prober.go:28] interesting 
pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 09:16:41 crc kubenswrapper[4795]: I1129 09:16:41.942185 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 09:17:11 crc kubenswrapper[4795]: I1129 09:17:11.941640 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 09:17:11 crc kubenswrapper[4795]: I1129 09:17:11.942359 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 09:17:11 crc kubenswrapper[4795]: I1129 09:17:11.942441 4795 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" Nov 29 09:17:11 crc kubenswrapper[4795]: I1129 09:17:11.944474 4795 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6c628579721cc1552166fc799b21fa600a7e10daf7d5387acee4ce9b0073e7fd"} pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" containerMessage="Container machine-config-daemon failed 
liveness probe, will be restarted" Nov 29 09:17:11 crc kubenswrapper[4795]: I1129 09:17:11.944826 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" containerID="cri-o://6c628579721cc1552166fc799b21fa600a7e10daf7d5387acee4ce9b0073e7fd" gracePeriod=600 Nov 29 09:17:12 crc kubenswrapper[4795]: I1129 09:17:12.198929 4795 generic.go:334] "Generic (PLEG): container finished" podID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerID="6c628579721cc1552166fc799b21fa600a7e10daf7d5387acee4ce9b0073e7fd" exitCode=0 Nov 29 09:17:12 crc kubenswrapper[4795]: I1129 09:17:12.199007 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerDied","Data":"6c628579721cc1552166fc799b21fa600a7e10daf7d5387acee4ce9b0073e7fd"} Nov 29 09:17:12 crc kubenswrapper[4795]: I1129 09:17:12.199306 4795 scope.go:117] "RemoveContainer" containerID="218724df802519bbb881bb31211f4f6c81a4822a96523ad04fdb2384ff35b7c7" Nov 29 09:17:13 crc kubenswrapper[4795]: I1129 09:17:13.210541 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerStarted","Data":"e3c02c44d457b8bc56d4e6c57834916c5a24411b16830e7c64fb79753ef4fe61"} Nov 29 09:17:30 crc kubenswrapper[4795]: I1129 09:17:30.440692 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-htpgz"] Nov 29 09:17:30 crc kubenswrapper[4795]: E1129 09:17:30.441750 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6d0c717-3720-4198-9aeb-414fce2bc93f" containerName="collect-profiles" Nov 29 09:17:30 crc kubenswrapper[4795]: I1129 09:17:30.441765 4795 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="a6d0c717-3720-4198-9aeb-414fce2bc93f" containerName="collect-profiles" Nov 29 09:17:30 crc kubenswrapper[4795]: I1129 09:17:30.442032 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6d0c717-3720-4198-9aeb-414fce2bc93f" containerName="collect-profiles" Nov 29 09:17:30 crc kubenswrapper[4795]: I1129 09:17:30.444432 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-htpgz" Nov 29 09:17:30 crc kubenswrapper[4795]: I1129 09:17:30.459297 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-htpgz"] Nov 29 09:17:30 crc kubenswrapper[4795]: I1129 09:17:30.596388 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/402109ed-59ba-42a0-9015-283082e1021f-utilities\") pod \"redhat-operators-htpgz\" (UID: \"402109ed-59ba-42a0-9015-283082e1021f\") " pod="openshift-marketplace/redhat-operators-htpgz" Nov 29 09:17:30 crc kubenswrapper[4795]: I1129 09:17:30.596428 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzvwr\" (UniqueName: \"kubernetes.io/projected/402109ed-59ba-42a0-9015-283082e1021f-kube-api-access-dzvwr\") pod \"redhat-operators-htpgz\" (UID: \"402109ed-59ba-42a0-9015-283082e1021f\") " pod="openshift-marketplace/redhat-operators-htpgz" Nov 29 09:17:30 crc kubenswrapper[4795]: I1129 09:17:30.596449 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/402109ed-59ba-42a0-9015-283082e1021f-catalog-content\") pod \"redhat-operators-htpgz\" (UID: \"402109ed-59ba-42a0-9015-283082e1021f\") " pod="openshift-marketplace/redhat-operators-htpgz" Nov 29 09:17:30 crc kubenswrapper[4795]: I1129 09:17:30.699289 4795 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/402109ed-59ba-42a0-9015-283082e1021f-utilities\") pod \"redhat-operators-htpgz\" (UID: \"402109ed-59ba-42a0-9015-283082e1021f\") " pod="openshift-marketplace/redhat-operators-htpgz" Nov 29 09:17:30 crc kubenswrapper[4795]: I1129 09:17:30.699353 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzvwr\" (UniqueName: \"kubernetes.io/projected/402109ed-59ba-42a0-9015-283082e1021f-kube-api-access-dzvwr\") pod \"redhat-operators-htpgz\" (UID: \"402109ed-59ba-42a0-9015-283082e1021f\") " pod="openshift-marketplace/redhat-operators-htpgz" Nov 29 09:17:30 crc kubenswrapper[4795]: I1129 09:17:30.699392 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/402109ed-59ba-42a0-9015-283082e1021f-catalog-content\") pod \"redhat-operators-htpgz\" (UID: \"402109ed-59ba-42a0-9015-283082e1021f\") " pod="openshift-marketplace/redhat-operators-htpgz" Nov 29 09:17:30 crc kubenswrapper[4795]: I1129 09:17:30.700588 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/402109ed-59ba-42a0-9015-283082e1021f-utilities\") pod \"redhat-operators-htpgz\" (UID: \"402109ed-59ba-42a0-9015-283082e1021f\") " pod="openshift-marketplace/redhat-operators-htpgz" Nov 29 09:17:30 crc kubenswrapper[4795]: I1129 09:17:30.700683 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/402109ed-59ba-42a0-9015-283082e1021f-catalog-content\") pod \"redhat-operators-htpgz\" (UID: \"402109ed-59ba-42a0-9015-283082e1021f\") " pod="openshift-marketplace/redhat-operators-htpgz" Nov 29 09:17:30 crc kubenswrapper[4795]: I1129 09:17:30.721444 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzvwr\" 
(UniqueName: \"kubernetes.io/projected/402109ed-59ba-42a0-9015-283082e1021f-kube-api-access-dzvwr\") pod \"redhat-operators-htpgz\" (UID: \"402109ed-59ba-42a0-9015-283082e1021f\") " pod="openshift-marketplace/redhat-operators-htpgz" Nov 29 09:17:30 crc kubenswrapper[4795]: I1129 09:17:30.769626 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-htpgz" Nov 29 09:17:31 crc kubenswrapper[4795]: I1129 09:17:31.289261 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-htpgz"] Nov 29 09:17:31 crc kubenswrapper[4795]: I1129 09:17:31.423509 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-htpgz" event={"ID":"402109ed-59ba-42a0-9015-283082e1021f","Type":"ContainerStarted","Data":"26d6b46ca5df5f70a630bb23102ac77d0b07eb169f6f69abb7a1c981e2123a6c"} Nov 29 09:17:32 crc kubenswrapper[4795]: I1129 09:17:32.437408 4795 generic.go:334] "Generic (PLEG): container finished" podID="402109ed-59ba-42a0-9015-283082e1021f" containerID="785fc51404183ac0656fa7017dcf3d1e9dc2fec8018e938966f1e2ee513ad3ed" exitCode=0 Nov 29 09:17:32 crc kubenswrapper[4795]: I1129 09:17:32.437508 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-htpgz" event={"ID":"402109ed-59ba-42a0-9015-283082e1021f","Type":"ContainerDied","Data":"785fc51404183ac0656fa7017dcf3d1e9dc2fec8018e938966f1e2ee513ad3ed"} Nov 29 09:17:32 crc kubenswrapper[4795]: I1129 09:17:32.442393 4795 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 09:17:34 crc kubenswrapper[4795]: I1129 09:17:34.461473 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-htpgz" event={"ID":"402109ed-59ba-42a0-9015-283082e1021f","Type":"ContainerStarted","Data":"199ef2e2f54244cbc05ffc145b3e46ccf10f342bbfe3c0c643beb85b4ba38106"} Nov 29 09:17:37 crc 
kubenswrapper[4795]: I1129 09:17:37.573303 4795 generic.go:334] "Generic (PLEG): container finished" podID="402109ed-59ba-42a0-9015-283082e1021f" containerID="199ef2e2f54244cbc05ffc145b3e46ccf10f342bbfe3c0c643beb85b4ba38106" exitCode=0 Nov 29 09:17:37 crc kubenswrapper[4795]: I1129 09:17:37.573413 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-htpgz" event={"ID":"402109ed-59ba-42a0-9015-283082e1021f","Type":"ContainerDied","Data":"199ef2e2f54244cbc05ffc145b3e46ccf10f342bbfe3c0c643beb85b4ba38106"} Nov 29 09:17:38 crc kubenswrapper[4795]: I1129 09:17:38.586714 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-htpgz" event={"ID":"402109ed-59ba-42a0-9015-283082e1021f","Type":"ContainerStarted","Data":"34a9ff886cff04eb34f3861a1bb37e7aac9bc2af1268d3caa3ba7d7d352e8555"} Nov 29 09:17:38 crc kubenswrapper[4795]: I1129 09:17:38.614343 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-htpgz" podStartSLOduration=2.924566143 podStartE2EDuration="8.614047038s" podCreationTimestamp="2025-11-29 09:17:30 +0000 UTC" firstStartedPulling="2025-11-29 09:17:32.440639969 +0000 UTC m=+5898.416215769" lastFinishedPulling="2025-11-29 09:17:38.130120874 +0000 UTC m=+5904.105696664" observedRunningTime="2025-11-29 09:17:38.604526779 +0000 UTC m=+5904.580102569" watchObservedRunningTime="2025-11-29 09:17:38.614047038 +0000 UTC m=+5904.589622828" Nov 29 09:17:40 crc kubenswrapper[4795]: I1129 09:17:40.770622 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-htpgz" Nov 29 09:17:40 crc kubenswrapper[4795]: I1129 09:17:40.771397 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-htpgz" Nov 29 09:17:41 crc kubenswrapper[4795]: I1129 09:17:41.827892 4795 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-htpgz" podUID="402109ed-59ba-42a0-9015-283082e1021f" containerName="registry-server" probeResult="failure" output=< Nov 29 09:17:41 crc kubenswrapper[4795]: timeout: failed to connect service ":50051" within 1s Nov 29 09:17:41 crc kubenswrapper[4795]: > Nov 29 09:17:51 crc kubenswrapper[4795]: I1129 09:17:51.835507 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-htpgz" podUID="402109ed-59ba-42a0-9015-283082e1021f" containerName="registry-server" probeResult="failure" output=< Nov 29 09:17:51 crc kubenswrapper[4795]: timeout: failed to connect service ":50051" within 1s Nov 29 09:17:51 crc kubenswrapper[4795]: > Nov 29 09:18:00 crc kubenswrapper[4795]: I1129 09:18:00.846018 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-htpgz" Nov 29 09:18:00 crc kubenswrapper[4795]: I1129 09:18:00.926670 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-htpgz" Nov 29 09:18:01 crc kubenswrapper[4795]: I1129 09:18:01.647078 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-htpgz"] Nov 29 09:18:01 crc kubenswrapper[4795]: I1129 09:18:01.910460 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-htpgz" podUID="402109ed-59ba-42a0-9015-283082e1021f" containerName="registry-server" containerID="cri-o://34a9ff886cff04eb34f3861a1bb37e7aac9bc2af1268d3caa3ba7d7d352e8555" gracePeriod=2 Nov 29 09:18:02 crc kubenswrapper[4795]: I1129 09:18:02.505998 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-htpgz" Nov 29 09:18:02 crc kubenswrapper[4795]: I1129 09:18:02.624768 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/402109ed-59ba-42a0-9015-283082e1021f-utilities\") pod \"402109ed-59ba-42a0-9015-283082e1021f\" (UID: \"402109ed-59ba-42a0-9015-283082e1021f\") " Nov 29 09:18:02 crc kubenswrapper[4795]: I1129 09:18:02.624832 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzvwr\" (UniqueName: \"kubernetes.io/projected/402109ed-59ba-42a0-9015-283082e1021f-kube-api-access-dzvwr\") pod \"402109ed-59ba-42a0-9015-283082e1021f\" (UID: \"402109ed-59ba-42a0-9015-283082e1021f\") " Nov 29 09:18:02 crc kubenswrapper[4795]: I1129 09:18:02.625167 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/402109ed-59ba-42a0-9015-283082e1021f-catalog-content\") pod \"402109ed-59ba-42a0-9015-283082e1021f\" (UID: \"402109ed-59ba-42a0-9015-283082e1021f\") " Nov 29 09:18:02 crc kubenswrapper[4795]: I1129 09:18:02.625946 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/402109ed-59ba-42a0-9015-283082e1021f-utilities" (OuterVolumeSpecName: "utilities") pod "402109ed-59ba-42a0-9015-283082e1021f" (UID: "402109ed-59ba-42a0-9015-283082e1021f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 09:18:02 crc kubenswrapper[4795]: I1129 09:18:02.632136 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/402109ed-59ba-42a0-9015-283082e1021f-kube-api-access-dzvwr" (OuterVolumeSpecName: "kube-api-access-dzvwr") pod "402109ed-59ba-42a0-9015-283082e1021f" (UID: "402109ed-59ba-42a0-9015-283082e1021f"). InnerVolumeSpecName "kube-api-access-dzvwr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 09:18:02 crc kubenswrapper[4795]: I1129 09:18:02.733620 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/402109ed-59ba-42a0-9015-283082e1021f-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 09:18:02 crc kubenswrapper[4795]: I1129 09:18:02.733650 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzvwr\" (UniqueName: \"kubernetes.io/projected/402109ed-59ba-42a0-9015-283082e1021f-kube-api-access-dzvwr\") on node \"crc\" DevicePath \"\"" Nov 29 09:18:02 crc kubenswrapper[4795]: I1129 09:18:02.824678 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/402109ed-59ba-42a0-9015-283082e1021f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "402109ed-59ba-42a0-9015-283082e1021f" (UID: "402109ed-59ba-42a0-9015-283082e1021f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 09:18:02 crc kubenswrapper[4795]: I1129 09:18:02.835462 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/402109ed-59ba-42a0-9015-283082e1021f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 09:18:02 crc kubenswrapper[4795]: I1129 09:18:02.922060 4795 generic.go:334] "Generic (PLEG): container finished" podID="402109ed-59ba-42a0-9015-283082e1021f" containerID="34a9ff886cff04eb34f3861a1bb37e7aac9bc2af1268d3caa3ba7d7d352e8555" exitCode=0 Nov 29 09:18:02 crc kubenswrapper[4795]: I1129 09:18:02.922109 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-htpgz" event={"ID":"402109ed-59ba-42a0-9015-283082e1021f","Type":"ContainerDied","Data":"34a9ff886cff04eb34f3861a1bb37e7aac9bc2af1268d3caa3ba7d7d352e8555"} Nov 29 09:18:02 crc kubenswrapper[4795]: I1129 09:18:02.922150 4795 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-htpgz" event={"ID":"402109ed-59ba-42a0-9015-283082e1021f","Type":"ContainerDied","Data":"26d6b46ca5df5f70a630bb23102ac77d0b07eb169f6f69abb7a1c981e2123a6c"} Nov 29 09:18:02 crc kubenswrapper[4795]: I1129 09:18:02.922155 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-htpgz" Nov 29 09:18:02 crc kubenswrapper[4795]: I1129 09:18:02.922171 4795 scope.go:117] "RemoveContainer" containerID="34a9ff886cff04eb34f3861a1bb37e7aac9bc2af1268d3caa3ba7d7d352e8555" Nov 29 09:18:02 crc kubenswrapper[4795]: I1129 09:18:02.952344 4795 scope.go:117] "RemoveContainer" containerID="199ef2e2f54244cbc05ffc145b3e46ccf10f342bbfe3c0c643beb85b4ba38106" Nov 29 09:18:02 crc kubenswrapper[4795]: I1129 09:18:02.975301 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-htpgz"] Nov 29 09:18:02 crc kubenswrapper[4795]: I1129 09:18:02.987534 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-htpgz"] Nov 29 09:18:03 crc kubenswrapper[4795]: I1129 09:18:03.019848 4795 scope.go:117] "RemoveContainer" containerID="785fc51404183ac0656fa7017dcf3d1e9dc2fec8018e938966f1e2ee513ad3ed" Nov 29 09:18:03 crc kubenswrapper[4795]: I1129 09:18:03.059886 4795 scope.go:117] "RemoveContainer" containerID="34a9ff886cff04eb34f3861a1bb37e7aac9bc2af1268d3caa3ba7d7d352e8555" Nov 29 09:18:03 crc kubenswrapper[4795]: E1129 09:18:03.060789 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34a9ff886cff04eb34f3861a1bb37e7aac9bc2af1268d3caa3ba7d7d352e8555\": container with ID starting with 34a9ff886cff04eb34f3861a1bb37e7aac9bc2af1268d3caa3ba7d7d352e8555 not found: ID does not exist" containerID="34a9ff886cff04eb34f3861a1bb37e7aac9bc2af1268d3caa3ba7d7d352e8555" Nov 29 09:18:03 crc kubenswrapper[4795]: I1129 09:18:03.060838 4795 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34a9ff886cff04eb34f3861a1bb37e7aac9bc2af1268d3caa3ba7d7d352e8555"} err="failed to get container status \"34a9ff886cff04eb34f3861a1bb37e7aac9bc2af1268d3caa3ba7d7d352e8555\": rpc error: code = NotFound desc = could not find container \"34a9ff886cff04eb34f3861a1bb37e7aac9bc2af1268d3caa3ba7d7d352e8555\": container with ID starting with 34a9ff886cff04eb34f3861a1bb37e7aac9bc2af1268d3caa3ba7d7d352e8555 not found: ID does not exist" Nov 29 09:18:03 crc kubenswrapper[4795]: I1129 09:18:03.060870 4795 scope.go:117] "RemoveContainer" containerID="199ef2e2f54244cbc05ffc145b3e46ccf10f342bbfe3c0c643beb85b4ba38106" Nov 29 09:18:03 crc kubenswrapper[4795]: E1129 09:18:03.061368 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"199ef2e2f54244cbc05ffc145b3e46ccf10f342bbfe3c0c643beb85b4ba38106\": container with ID starting with 199ef2e2f54244cbc05ffc145b3e46ccf10f342bbfe3c0c643beb85b4ba38106 not found: ID does not exist" containerID="199ef2e2f54244cbc05ffc145b3e46ccf10f342bbfe3c0c643beb85b4ba38106" Nov 29 09:18:03 crc kubenswrapper[4795]: I1129 09:18:03.061395 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"199ef2e2f54244cbc05ffc145b3e46ccf10f342bbfe3c0c643beb85b4ba38106"} err="failed to get container status \"199ef2e2f54244cbc05ffc145b3e46ccf10f342bbfe3c0c643beb85b4ba38106\": rpc error: code = NotFound desc = could not find container \"199ef2e2f54244cbc05ffc145b3e46ccf10f342bbfe3c0c643beb85b4ba38106\": container with ID starting with 199ef2e2f54244cbc05ffc145b3e46ccf10f342bbfe3c0c643beb85b4ba38106 not found: ID does not exist" Nov 29 09:18:03 crc kubenswrapper[4795]: I1129 09:18:03.061412 4795 scope.go:117] "RemoveContainer" containerID="785fc51404183ac0656fa7017dcf3d1e9dc2fec8018e938966f1e2ee513ad3ed" Nov 29 09:18:03 crc kubenswrapper[4795]: E1129 
09:18:03.061706 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"785fc51404183ac0656fa7017dcf3d1e9dc2fec8018e938966f1e2ee513ad3ed\": container with ID starting with 785fc51404183ac0656fa7017dcf3d1e9dc2fec8018e938966f1e2ee513ad3ed not found: ID does not exist" containerID="785fc51404183ac0656fa7017dcf3d1e9dc2fec8018e938966f1e2ee513ad3ed" Nov 29 09:18:03 crc kubenswrapper[4795]: I1129 09:18:03.061753 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"785fc51404183ac0656fa7017dcf3d1e9dc2fec8018e938966f1e2ee513ad3ed"} err="failed to get container status \"785fc51404183ac0656fa7017dcf3d1e9dc2fec8018e938966f1e2ee513ad3ed\": rpc error: code = NotFound desc = could not find container \"785fc51404183ac0656fa7017dcf3d1e9dc2fec8018e938966f1e2ee513ad3ed\": container with ID starting with 785fc51404183ac0656fa7017dcf3d1e9dc2fec8018e938966f1e2ee513ad3ed not found: ID does not exist" Nov 29 09:18:04 crc kubenswrapper[4795]: I1129 09:18:04.288717 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="402109ed-59ba-42a0-9015-283082e1021f" path="/var/lib/kubelet/pods/402109ed-59ba-42a0-9015-283082e1021f/volumes" Nov 29 09:18:21 crc kubenswrapper[4795]: I1129 09:18:21.727023 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jrcc2"] Nov 29 09:18:21 crc kubenswrapper[4795]: E1129 09:18:21.728234 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="402109ed-59ba-42a0-9015-283082e1021f" containerName="registry-server" Nov 29 09:18:21 crc kubenswrapper[4795]: I1129 09:18:21.728253 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="402109ed-59ba-42a0-9015-283082e1021f" containerName="registry-server" Nov 29 09:18:21 crc kubenswrapper[4795]: E1129 09:18:21.728267 4795 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="402109ed-59ba-42a0-9015-283082e1021f" containerName="extract-content" Nov 29 09:18:21 crc kubenswrapper[4795]: I1129 09:18:21.728276 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="402109ed-59ba-42a0-9015-283082e1021f" containerName="extract-content" Nov 29 09:18:21 crc kubenswrapper[4795]: E1129 09:18:21.728294 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="402109ed-59ba-42a0-9015-283082e1021f" containerName="extract-utilities" Nov 29 09:18:21 crc kubenswrapper[4795]: I1129 09:18:21.728302 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="402109ed-59ba-42a0-9015-283082e1021f" containerName="extract-utilities" Nov 29 09:18:21 crc kubenswrapper[4795]: I1129 09:18:21.728729 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="402109ed-59ba-42a0-9015-283082e1021f" containerName="registry-server" Nov 29 09:18:21 crc kubenswrapper[4795]: I1129 09:18:21.741803 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jrcc2"] Nov 29 09:18:21 crc kubenswrapper[4795]: I1129 09:18:21.741944 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jrcc2" Nov 29 09:18:21 crc kubenswrapper[4795]: I1129 09:18:21.762918 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbxx9\" (UniqueName: \"kubernetes.io/projected/56da70da-3e03-4b79-a6ac-c651ce3e731c-kube-api-access-zbxx9\") pod \"certified-operators-jrcc2\" (UID: \"56da70da-3e03-4b79-a6ac-c651ce3e731c\") " pod="openshift-marketplace/certified-operators-jrcc2" Nov 29 09:18:21 crc kubenswrapper[4795]: I1129 09:18:21.763088 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56da70da-3e03-4b79-a6ac-c651ce3e731c-catalog-content\") pod \"certified-operators-jrcc2\" (UID: \"56da70da-3e03-4b79-a6ac-c651ce3e731c\") " pod="openshift-marketplace/certified-operators-jrcc2" Nov 29 09:18:21 crc kubenswrapper[4795]: I1129 09:18:21.763162 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56da70da-3e03-4b79-a6ac-c651ce3e731c-utilities\") pod \"certified-operators-jrcc2\" (UID: \"56da70da-3e03-4b79-a6ac-c651ce3e731c\") " pod="openshift-marketplace/certified-operators-jrcc2" Nov 29 09:18:21 crc kubenswrapper[4795]: I1129 09:18:21.864766 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56da70da-3e03-4b79-a6ac-c651ce3e731c-catalog-content\") pod \"certified-operators-jrcc2\" (UID: \"56da70da-3e03-4b79-a6ac-c651ce3e731c\") " pod="openshift-marketplace/certified-operators-jrcc2" Nov 29 09:18:21 crc kubenswrapper[4795]: I1129 09:18:21.864851 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56da70da-3e03-4b79-a6ac-c651ce3e731c-utilities\") pod 
\"certified-operators-jrcc2\" (UID: \"56da70da-3e03-4b79-a6ac-c651ce3e731c\") " pod="openshift-marketplace/certified-operators-jrcc2" Nov 29 09:18:21 crc kubenswrapper[4795]: I1129 09:18:21.865011 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbxx9\" (UniqueName: \"kubernetes.io/projected/56da70da-3e03-4b79-a6ac-c651ce3e731c-kube-api-access-zbxx9\") pod \"certified-operators-jrcc2\" (UID: \"56da70da-3e03-4b79-a6ac-c651ce3e731c\") " pod="openshift-marketplace/certified-operators-jrcc2" Nov 29 09:18:21 crc kubenswrapper[4795]: I1129 09:18:21.865238 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56da70da-3e03-4b79-a6ac-c651ce3e731c-catalog-content\") pod \"certified-operators-jrcc2\" (UID: \"56da70da-3e03-4b79-a6ac-c651ce3e731c\") " pod="openshift-marketplace/certified-operators-jrcc2" Nov 29 09:18:21 crc kubenswrapper[4795]: I1129 09:18:21.865785 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56da70da-3e03-4b79-a6ac-c651ce3e731c-utilities\") pod \"certified-operators-jrcc2\" (UID: \"56da70da-3e03-4b79-a6ac-c651ce3e731c\") " pod="openshift-marketplace/certified-operators-jrcc2" Nov 29 09:18:21 crc kubenswrapper[4795]: I1129 09:18:21.892769 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbxx9\" (UniqueName: \"kubernetes.io/projected/56da70da-3e03-4b79-a6ac-c651ce3e731c-kube-api-access-zbxx9\") pod \"certified-operators-jrcc2\" (UID: \"56da70da-3e03-4b79-a6ac-c651ce3e731c\") " pod="openshift-marketplace/certified-operators-jrcc2" Nov 29 09:18:22 crc kubenswrapper[4795]: I1129 09:18:22.067005 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jrcc2" Nov 29 09:18:22 crc kubenswrapper[4795]: W1129 09:18:22.553837 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56da70da_3e03_4b79_a6ac_c651ce3e731c.slice/crio-44295a6b0283e4368aa0e526231f50eb50cadf35f0eab33c8078a599efa7d019 WatchSource:0}: Error finding container 44295a6b0283e4368aa0e526231f50eb50cadf35f0eab33c8078a599efa7d019: Status 404 returned error can't find the container with id 44295a6b0283e4368aa0e526231f50eb50cadf35f0eab33c8078a599efa7d019 Nov 29 09:18:22 crc kubenswrapper[4795]: I1129 09:18:22.555678 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jrcc2"] Nov 29 09:18:23 crc kubenswrapper[4795]: I1129 09:18:23.147353 4795 generic.go:334] "Generic (PLEG): container finished" podID="56da70da-3e03-4b79-a6ac-c651ce3e731c" containerID="0c980957cd618a57c69678b5d8205e79d8fe05e1cf7131dd1e2aec116e3116b9" exitCode=0 Nov 29 09:18:23 crc kubenswrapper[4795]: I1129 09:18:23.147460 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jrcc2" event={"ID":"56da70da-3e03-4b79-a6ac-c651ce3e731c","Type":"ContainerDied","Data":"0c980957cd618a57c69678b5d8205e79d8fe05e1cf7131dd1e2aec116e3116b9"} Nov 29 09:18:23 crc kubenswrapper[4795]: I1129 09:18:23.147697 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jrcc2" event={"ID":"56da70da-3e03-4b79-a6ac-c651ce3e731c","Type":"ContainerStarted","Data":"44295a6b0283e4368aa0e526231f50eb50cadf35f0eab33c8078a599efa7d019"} Nov 29 09:18:24 crc kubenswrapper[4795]: I1129 09:18:24.159551 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jrcc2" 
event={"ID":"56da70da-3e03-4b79-a6ac-c651ce3e731c","Type":"ContainerStarted","Data":"4991c87cb0a2c1107b5ab3ceec22c4768f13e8b1ec829fb6204a1582e20c8e7b"} Nov 29 09:18:25 crc kubenswrapper[4795]: I1129 09:18:25.173442 4795 generic.go:334] "Generic (PLEG): container finished" podID="56da70da-3e03-4b79-a6ac-c651ce3e731c" containerID="4991c87cb0a2c1107b5ab3ceec22c4768f13e8b1ec829fb6204a1582e20c8e7b" exitCode=0 Nov 29 09:18:25 crc kubenswrapper[4795]: I1129 09:18:25.173527 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jrcc2" event={"ID":"56da70da-3e03-4b79-a6ac-c651ce3e731c","Type":"ContainerDied","Data":"4991c87cb0a2c1107b5ab3ceec22c4768f13e8b1ec829fb6204a1582e20c8e7b"} Nov 29 09:18:26 crc kubenswrapper[4795]: I1129 09:18:26.186996 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jrcc2" event={"ID":"56da70da-3e03-4b79-a6ac-c651ce3e731c","Type":"ContainerStarted","Data":"8cd74f387f279b4a6c277ec83e2e27fdceb137a05de4e54cf9660c090707e738"} Nov 29 09:18:26 crc kubenswrapper[4795]: I1129 09:18:26.207797 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jrcc2" podStartSLOduration=2.574508172 podStartE2EDuration="5.207777996s" podCreationTimestamp="2025-11-29 09:18:21 +0000 UTC" firstStartedPulling="2025-11-29 09:18:23.149445436 +0000 UTC m=+5949.125021246" lastFinishedPulling="2025-11-29 09:18:25.78271526 +0000 UTC m=+5951.758291070" observedRunningTime="2025-11-29 09:18:26.202435515 +0000 UTC m=+5952.178011325" watchObservedRunningTime="2025-11-29 09:18:26.207777996 +0000 UTC m=+5952.183353796" Nov 29 09:18:32 crc kubenswrapper[4795]: I1129 09:18:32.067183 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jrcc2" Nov 29 09:18:32 crc kubenswrapper[4795]: I1129 09:18:32.067819 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-jrcc2" Nov 29 09:18:32 crc kubenswrapper[4795]: I1129 09:18:32.135387 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jrcc2" Nov 29 09:18:32 crc kubenswrapper[4795]: I1129 09:18:32.326181 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jrcc2" Nov 29 09:18:32 crc kubenswrapper[4795]: I1129 09:18:32.381165 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jrcc2"] Nov 29 09:18:34 crc kubenswrapper[4795]: I1129 09:18:34.272270 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jrcc2" podUID="56da70da-3e03-4b79-a6ac-c651ce3e731c" containerName="registry-server" containerID="cri-o://8cd74f387f279b4a6c277ec83e2e27fdceb137a05de4e54cf9660c090707e738" gracePeriod=2 Nov 29 09:18:34 crc kubenswrapper[4795]: I1129 09:18:34.826401 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jrcc2" Nov 29 09:18:34 crc kubenswrapper[4795]: I1129 09:18:34.900719 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbxx9\" (UniqueName: \"kubernetes.io/projected/56da70da-3e03-4b79-a6ac-c651ce3e731c-kube-api-access-zbxx9\") pod \"56da70da-3e03-4b79-a6ac-c651ce3e731c\" (UID: \"56da70da-3e03-4b79-a6ac-c651ce3e731c\") " Nov 29 09:18:34 crc kubenswrapper[4795]: I1129 09:18:34.901116 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56da70da-3e03-4b79-a6ac-c651ce3e731c-catalog-content\") pod \"56da70da-3e03-4b79-a6ac-c651ce3e731c\" (UID: \"56da70da-3e03-4b79-a6ac-c651ce3e731c\") " Nov 29 09:18:34 crc kubenswrapper[4795]: I1129 09:18:34.901174 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56da70da-3e03-4b79-a6ac-c651ce3e731c-utilities\") pod \"56da70da-3e03-4b79-a6ac-c651ce3e731c\" (UID: \"56da70da-3e03-4b79-a6ac-c651ce3e731c\") " Nov 29 09:18:34 crc kubenswrapper[4795]: I1129 09:18:34.902007 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56da70da-3e03-4b79-a6ac-c651ce3e731c-utilities" (OuterVolumeSpecName: "utilities") pod "56da70da-3e03-4b79-a6ac-c651ce3e731c" (UID: "56da70da-3e03-4b79-a6ac-c651ce3e731c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 09:18:34 crc kubenswrapper[4795]: I1129 09:18:34.909879 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56da70da-3e03-4b79-a6ac-c651ce3e731c-kube-api-access-zbxx9" (OuterVolumeSpecName: "kube-api-access-zbxx9") pod "56da70da-3e03-4b79-a6ac-c651ce3e731c" (UID: "56da70da-3e03-4b79-a6ac-c651ce3e731c"). InnerVolumeSpecName "kube-api-access-zbxx9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 09:18:34 crc kubenswrapper[4795]: I1129 09:18:34.952181 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56da70da-3e03-4b79-a6ac-c651ce3e731c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "56da70da-3e03-4b79-a6ac-c651ce3e731c" (UID: "56da70da-3e03-4b79-a6ac-c651ce3e731c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 09:18:35 crc kubenswrapper[4795]: I1129 09:18:35.002737 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56da70da-3e03-4b79-a6ac-c651ce3e731c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 09:18:35 crc kubenswrapper[4795]: I1129 09:18:35.002768 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56da70da-3e03-4b79-a6ac-c651ce3e731c-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 09:18:35 crc kubenswrapper[4795]: I1129 09:18:35.002778 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbxx9\" (UniqueName: \"kubernetes.io/projected/56da70da-3e03-4b79-a6ac-c651ce3e731c-kube-api-access-zbxx9\") on node \"crc\" DevicePath \"\"" Nov 29 09:18:35 crc kubenswrapper[4795]: I1129 09:18:35.284088 4795 generic.go:334] "Generic (PLEG): container finished" podID="56da70da-3e03-4b79-a6ac-c651ce3e731c" containerID="8cd74f387f279b4a6c277ec83e2e27fdceb137a05de4e54cf9660c090707e738" exitCode=0 Nov 29 09:18:35 crc kubenswrapper[4795]: I1129 09:18:35.284140 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jrcc2" event={"ID":"56da70da-3e03-4b79-a6ac-c651ce3e731c","Type":"ContainerDied","Data":"8cd74f387f279b4a6c277ec83e2e27fdceb137a05de4e54cf9660c090707e738"} Nov 29 09:18:35 crc kubenswrapper[4795]: I1129 09:18:35.284166 4795 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-jrcc2" event={"ID":"56da70da-3e03-4b79-a6ac-c651ce3e731c","Type":"ContainerDied","Data":"44295a6b0283e4368aa0e526231f50eb50cadf35f0eab33c8078a599efa7d019"} Nov 29 09:18:35 crc kubenswrapper[4795]: I1129 09:18:35.284187 4795 scope.go:117] "RemoveContainer" containerID="8cd74f387f279b4a6c277ec83e2e27fdceb137a05de4e54cf9660c090707e738" Nov 29 09:18:35 crc kubenswrapper[4795]: I1129 09:18:35.284352 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jrcc2" Nov 29 09:18:35 crc kubenswrapper[4795]: I1129 09:18:35.330996 4795 scope.go:117] "RemoveContainer" containerID="4991c87cb0a2c1107b5ab3ceec22c4768f13e8b1ec829fb6204a1582e20c8e7b" Nov 29 09:18:35 crc kubenswrapper[4795]: I1129 09:18:35.331861 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jrcc2"] Nov 29 09:18:35 crc kubenswrapper[4795]: I1129 09:18:35.344883 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jrcc2"] Nov 29 09:18:35 crc kubenswrapper[4795]: I1129 09:18:35.377360 4795 scope.go:117] "RemoveContainer" containerID="0c980957cd618a57c69678b5d8205e79d8fe05e1cf7131dd1e2aec116e3116b9" Nov 29 09:18:35 crc kubenswrapper[4795]: I1129 09:18:35.428211 4795 scope.go:117] "RemoveContainer" containerID="8cd74f387f279b4a6c277ec83e2e27fdceb137a05de4e54cf9660c090707e738" Nov 29 09:18:35 crc kubenswrapper[4795]: E1129 09:18:35.429081 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cd74f387f279b4a6c277ec83e2e27fdceb137a05de4e54cf9660c090707e738\": container with ID starting with 8cd74f387f279b4a6c277ec83e2e27fdceb137a05de4e54cf9660c090707e738 not found: ID does not exist" containerID="8cd74f387f279b4a6c277ec83e2e27fdceb137a05de4e54cf9660c090707e738" Nov 29 09:18:35 crc kubenswrapper[4795]: I1129 
09:18:35.429147 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cd74f387f279b4a6c277ec83e2e27fdceb137a05de4e54cf9660c090707e738"} err="failed to get container status \"8cd74f387f279b4a6c277ec83e2e27fdceb137a05de4e54cf9660c090707e738\": rpc error: code = NotFound desc = could not find container \"8cd74f387f279b4a6c277ec83e2e27fdceb137a05de4e54cf9660c090707e738\": container with ID starting with 8cd74f387f279b4a6c277ec83e2e27fdceb137a05de4e54cf9660c090707e738 not found: ID does not exist" Nov 29 09:18:35 crc kubenswrapper[4795]: I1129 09:18:35.429184 4795 scope.go:117] "RemoveContainer" containerID="4991c87cb0a2c1107b5ab3ceec22c4768f13e8b1ec829fb6204a1582e20c8e7b" Nov 29 09:18:35 crc kubenswrapper[4795]: E1129 09:18:35.429731 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4991c87cb0a2c1107b5ab3ceec22c4768f13e8b1ec829fb6204a1582e20c8e7b\": container with ID starting with 4991c87cb0a2c1107b5ab3ceec22c4768f13e8b1ec829fb6204a1582e20c8e7b not found: ID does not exist" containerID="4991c87cb0a2c1107b5ab3ceec22c4768f13e8b1ec829fb6204a1582e20c8e7b" Nov 29 09:18:35 crc kubenswrapper[4795]: I1129 09:18:35.429781 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4991c87cb0a2c1107b5ab3ceec22c4768f13e8b1ec829fb6204a1582e20c8e7b"} err="failed to get container status \"4991c87cb0a2c1107b5ab3ceec22c4768f13e8b1ec829fb6204a1582e20c8e7b\": rpc error: code = NotFound desc = could not find container \"4991c87cb0a2c1107b5ab3ceec22c4768f13e8b1ec829fb6204a1582e20c8e7b\": container with ID starting with 4991c87cb0a2c1107b5ab3ceec22c4768f13e8b1ec829fb6204a1582e20c8e7b not found: ID does not exist" Nov 29 09:18:35 crc kubenswrapper[4795]: I1129 09:18:35.429813 4795 scope.go:117] "RemoveContainer" containerID="0c980957cd618a57c69678b5d8205e79d8fe05e1cf7131dd1e2aec116e3116b9" Nov 29 09:18:35 crc 
kubenswrapper[4795]: E1129 09:18:35.430186 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c980957cd618a57c69678b5d8205e79d8fe05e1cf7131dd1e2aec116e3116b9\": container with ID starting with 0c980957cd618a57c69678b5d8205e79d8fe05e1cf7131dd1e2aec116e3116b9 not found: ID does not exist" containerID="0c980957cd618a57c69678b5d8205e79d8fe05e1cf7131dd1e2aec116e3116b9" Nov 29 09:18:35 crc kubenswrapper[4795]: I1129 09:18:35.430205 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c980957cd618a57c69678b5d8205e79d8fe05e1cf7131dd1e2aec116e3116b9"} err="failed to get container status \"0c980957cd618a57c69678b5d8205e79d8fe05e1cf7131dd1e2aec116e3116b9\": rpc error: code = NotFound desc = could not find container \"0c980957cd618a57c69678b5d8205e79d8fe05e1cf7131dd1e2aec116e3116b9\": container with ID starting with 0c980957cd618a57c69678b5d8205e79d8fe05e1cf7131dd1e2aec116e3116b9 not found: ID does not exist" Nov 29 09:18:36 crc kubenswrapper[4795]: I1129 09:18:36.290206 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56da70da-3e03-4b79-a6ac-c651ce3e731c" path="/var/lib/kubelet/pods/56da70da-3e03-4b79-a6ac-c651ce3e731c/volumes" Nov 29 09:19:20 crc kubenswrapper[4795]: I1129 09:19:20.288745 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vmgp6"] Nov 29 09:19:20 crc kubenswrapper[4795]: E1129 09:19:20.290542 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56da70da-3e03-4b79-a6ac-c651ce3e731c" containerName="registry-server" Nov 29 09:19:20 crc kubenswrapper[4795]: I1129 09:19:20.290647 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="56da70da-3e03-4b79-a6ac-c651ce3e731c" containerName="registry-server" Nov 29 09:19:20 crc kubenswrapper[4795]: E1129 09:19:20.290742 4795 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="56da70da-3e03-4b79-a6ac-c651ce3e731c" containerName="extract-utilities" Nov 29 09:19:20 crc kubenswrapper[4795]: I1129 09:19:20.290811 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="56da70da-3e03-4b79-a6ac-c651ce3e731c" containerName="extract-utilities" Nov 29 09:19:20 crc kubenswrapper[4795]: E1129 09:19:20.290897 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56da70da-3e03-4b79-a6ac-c651ce3e731c" containerName="extract-content" Nov 29 09:19:20 crc kubenswrapper[4795]: I1129 09:19:20.290958 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="56da70da-3e03-4b79-a6ac-c651ce3e731c" containerName="extract-content" Nov 29 09:19:20 crc kubenswrapper[4795]: I1129 09:19:20.291255 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="56da70da-3e03-4b79-a6ac-c651ce3e731c" containerName="registry-server" Nov 29 09:19:20 crc kubenswrapper[4795]: I1129 09:19:20.293070 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vmgp6" Nov 29 09:19:20 crc kubenswrapper[4795]: I1129 09:19:20.328466 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vmgp6"] Nov 29 09:19:20 crc kubenswrapper[4795]: I1129 09:19:20.340141 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4823d43c-f19c-4aa6-b85b-486c83206cfd-catalog-content\") pod \"community-operators-vmgp6\" (UID: \"4823d43c-f19c-4aa6-b85b-486c83206cfd\") " pod="openshift-marketplace/community-operators-vmgp6" Nov 29 09:19:20 crc kubenswrapper[4795]: I1129 09:19:20.340711 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4823d43c-f19c-4aa6-b85b-486c83206cfd-utilities\") pod \"community-operators-vmgp6\" (UID: 
\"4823d43c-f19c-4aa6-b85b-486c83206cfd\") " pod="openshift-marketplace/community-operators-vmgp6" Nov 29 09:19:20 crc kubenswrapper[4795]: I1129 09:19:20.341195 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9jjr\" (UniqueName: \"kubernetes.io/projected/4823d43c-f19c-4aa6-b85b-486c83206cfd-kube-api-access-b9jjr\") pod \"community-operators-vmgp6\" (UID: \"4823d43c-f19c-4aa6-b85b-486c83206cfd\") " pod="openshift-marketplace/community-operators-vmgp6" Nov 29 09:19:20 crc kubenswrapper[4795]: I1129 09:19:20.443426 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4823d43c-f19c-4aa6-b85b-486c83206cfd-catalog-content\") pod \"community-operators-vmgp6\" (UID: \"4823d43c-f19c-4aa6-b85b-486c83206cfd\") " pod="openshift-marketplace/community-operators-vmgp6" Nov 29 09:19:20 crc kubenswrapper[4795]: I1129 09:19:20.443535 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4823d43c-f19c-4aa6-b85b-486c83206cfd-utilities\") pod \"community-operators-vmgp6\" (UID: \"4823d43c-f19c-4aa6-b85b-486c83206cfd\") " pod="openshift-marketplace/community-operators-vmgp6" Nov 29 09:19:20 crc kubenswrapper[4795]: I1129 09:19:20.443665 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9jjr\" (UniqueName: \"kubernetes.io/projected/4823d43c-f19c-4aa6-b85b-486c83206cfd-kube-api-access-b9jjr\") pod \"community-operators-vmgp6\" (UID: \"4823d43c-f19c-4aa6-b85b-486c83206cfd\") " pod="openshift-marketplace/community-operators-vmgp6" Nov 29 09:19:20 crc kubenswrapper[4795]: I1129 09:19:20.444052 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4823d43c-f19c-4aa6-b85b-486c83206cfd-catalog-content\") pod 
\"community-operators-vmgp6\" (UID: \"4823d43c-f19c-4aa6-b85b-486c83206cfd\") " pod="openshift-marketplace/community-operators-vmgp6" Nov 29 09:19:20 crc kubenswrapper[4795]: I1129 09:19:20.444241 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4823d43c-f19c-4aa6-b85b-486c83206cfd-utilities\") pod \"community-operators-vmgp6\" (UID: \"4823d43c-f19c-4aa6-b85b-486c83206cfd\") " pod="openshift-marketplace/community-operators-vmgp6" Nov 29 09:19:20 crc kubenswrapper[4795]: I1129 09:19:20.464880 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9jjr\" (UniqueName: \"kubernetes.io/projected/4823d43c-f19c-4aa6-b85b-486c83206cfd-kube-api-access-b9jjr\") pod \"community-operators-vmgp6\" (UID: \"4823d43c-f19c-4aa6-b85b-486c83206cfd\") " pod="openshift-marketplace/community-operators-vmgp6" Nov 29 09:19:20 crc kubenswrapper[4795]: I1129 09:19:20.620580 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vmgp6" Nov 29 09:19:21 crc kubenswrapper[4795]: I1129 09:19:21.176837 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vmgp6"] Nov 29 09:19:21 crc kubenswrapper[4795]: I1129 09:19:21.845854 4795 generic.go:334] "Generic (PLEG): container finished" podID="4823d43c-f19c-4aa6-b85b-486c83206cfd" containerID="5cac4c7ac1616544fc77e7e117b19e126f5bc004a63e7bc7a235781dedf94808" exitCode=0 Nov 29 09:19:21 crc kubenswrapper[4795]: I1129 09:19:21.845935 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmgp6" event={"ID":"4823d43c-f19c-4aa6-b85b-486c83206cfd","Type":"ContainerDied","Data":"5cac4c7ac1616544fc77e7e117b19e126f5bc004a63e7bc7a235781dedf94808"} Nov 29 09:19:21 crc kubenswrapper[4795]: I1129 09:19:21.846178 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmgp6" event={"ID":"4823d43c-f19c-4aa6-b85b-486c83206cfd","Type":"ContainerStarted","Data":"43fd8058162837f4652e80f4bff435418da14e9aee40fca6e530a342be49fb1c"} Nov 29 09:19:23 crc kubenswrapper[4795]: I1129 09:19:23.869867 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmgp6" event={"ID":"4823d43c-f19c-4aa6-b85b-486c83206cfd","Type":"ContainerStarted","Data":"d595a18f6c0b2e4ebb93288001d368203c652e54f0c2b3de1ec7c8a17b1bc539"} Nov 29 09:19:24 crc kubenswrapper[4795]: I1129 09:19:24.880889 4795 generic.go:334] "Generic (PLEG): container finished" podID="4823d43c-f19c-4aa6-b85b-486c83206cfd" containerID="d595a18f6c0b2e4ebb93288001d368203c652e54f0c2b3de1ec7c8a17b1bc539" exitCode=0 Nov 29 09:19:24 crc kubenswrapper[4795]: I1129 09:19:24.880944 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmgp6" 
event={"ID":"4823d43c-f19c-4aa6-b85b-486c83206cfd","Type":"ContainerDied","Data":"d595a18f6c0b2e4ebb93288001d368203c652e54f0c2b3de1ec7c8a17b1bc539"} Nov 29 09:19:26 crc kubenswrapper[4795]: I1129 09:19:26.923931 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmgp6" event={"ID":"4823d43c-f19c-4aa6-b85b-486c83206cfd","Type":"ContainerStarted","Data":"4d8f89f7bb8d11dc8a3b954533c6f83c5d0f9cb60cba3799db5c1d582514f54a"} Nov 29 09:19:26 crc kubenswrapper[4795]: I1129 09:19:26.948913 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vmgp6" podStartSLOduration=3.087596771 podStartE2EDuration="6.948893065s" podCreationTimestamp="2025-11-29 09:19:20 +0000 UTC" firstStartedPulling="2025-11-29 09:19:21.849794636 +0000 UTC m=+6007.825370426" lastFinishedPulling="2025-11-29 09:19:25.71109093 +0000 UTC m=+6011.686666720" observedRunningTime="2025-11-29 09:19:26.945975513 +0000 UTC m=+6012.921551343" watchObservedRunningTime="2025-11-29 09:19:26.948893065 +0000 UTC m=+6012.924468855" Nov 29 09:19:30 crc kubenswrapper[4795]: I1129 09:19:30.621742 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vmgp6" Nov 29 09:19:30 crc kubenswrapper[4795]: I1129 09:19:30.622360 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vmgp6" Nov 29 09:19:30 crc kubenswrapper[4795]: I1129 09:19:30.684524 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vmgp6" Nov 29 09:19:31 crc kubenswrapper[4795]: I1129 09:19:31.012883 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vmgp6" Nov 29 09:19:31 crc kubenswrapper[4795]: I1129 09:19:31.073091 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-vmgp6"] Nov 29 09:19:32 crc kubenswrapper[4795]: I1129 09:19:32.985086 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vmgp6" podUID="4823d43c-f19c-4aa6-b85b-486c83206cfd" containerName="registry-server" containerID="cri-o://4d8f89f7bb8d11dc8a3b954533c6f83c5d0f9cb60cba3799db5c1d582514f54a" gracePeriod=2 Nov 29 09:19:33 crc kubenswrapper[4795]: I1129 09:19:33.536114 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vmgp6" Nov 29 09:19:33 crc kubenswrapper[4795]: I1129 09:19:33.632212 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4823d43c-f19c-4aa6-b85b-486c83206cfd-utilities\") pod \"4823d43c-f19c-4aa6-b85b-486c83206cfd\" (UID: \"4823d43c-f19c-4aa6-b85b-486c83206cfd\") " Nov 29 09:19:33 crc kubenswrapper[4795]: I1129 09:19:33.632282 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9jjr\" (UniqueName: \"kubernetes.io/projected/4823d43c-f19c-4aa6-b85b-486c83206cfd-kube-api-access-b9jjr\") pod \"4823d43c-f19c-4aa6-b85b-486c83206cfd\" (UID: \"4823d43c-f19c-4aa6-b85b-486c83206cfd\") " Nov 29 09:19:33 crc kubenswrapper[4795]: I1129 09:19:33.632517 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4823d43c-f19c-4aa6-b85b-486c83206cfd-catalog-content\") pod \"4823d43c-f19c-4aa6-b85b-486c83206cfd\" (UID: \"4823d43c-f19c-4aa6-b85b-486c83206cfd\") " Nov 29 09:19:33 crc kubenswrapper[4795]: I1129 09:19:33.634399 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4823d43c-f19c-4aa6-b85b-486c83206cfd-utilities" (OuterVolumeSpecName: "utilities") pod "4823d43c-f19c-4aa6-b85b-486c83206cfd" (UID: 
"4823d43c-f19c-4aa6-b85b-486c83206cfd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 09:19:33 crc kubenswrapper[4795]: I1129 09:19:33.640266 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4823d43c-f19c-4aa6-b85b-486c83206cfd-kube-api-access-b9jjr" (OuterVolumeSpecName: "kube-api-access-b9jjr") pod "4823d43c-f19c-4aa6-b85b-486c83206cfd" (UID: "4823d43c-f19c-4aa6-b85b-486c83206cfd"). InnerVolumeSpecName "kube-api-access-b9jjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 09:19:33 crc kubenswrapper[4795]: I1129 09:19:33.681420 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4823d43c-f19c-4aa6-b85b-486c83206cfd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4823d43c-f19c-4aa6-b85b-486c83206cfd" (UID: "4823d43c-f19c-4aa6-b85b-486c83206cfd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 09:19:33 crc kubenswrapper[4795]: I1129 09:19:33.736237 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4823d43c-f19c-4aa6-b85b-486c83206cfd-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 09:19:33 crc kubenswrapper[4795]: I1129 09:19:33.736739 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9jjr\" (UniqueName: \"kubernetes.io/projected/4823d43c-f19c-4aa6-b85b-486c83206cfd-kube-api-access-b9jjr\") on node \"crc\" DevicePath \"\"" Nov 29 09:19:33 crc kubenswrapper[4795]: I1129 09:19:33.736833 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4823d43c-f19c-4aa6-b85b-486c83206cfd-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 09:19:33 crc kubenswrapper[4795]: I1129 09:19:33.997149 4795 generic.go:334] "Generic (PLEG): container finished" 
podID="4823d43c-f19c-4aa6-b85b-486c83206cfd" containerID="4d8f89f7bb8d11dc8a3b954533c6f83c5d0f9cb60cba3799db5c1d582514f54a" exitCode=0 Nov 29 09:19:33 crc kubenswrapper[4795]: I1129 09:19:33.997195 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmgp6" event={"ID":"4823d43c-f19c-4aa6-b85b-486c83206cfd","Type":"ContainerDied","Data":"4d8f89f7bb8d11dc8a3b954533c6f83c5d0f9cb60cba3799db5c1d582514f54a"} Nov 29 09:19:33 crc kubenswrapper[4795]: I1129 09:19:33.997223 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmgp6" event={"ID":"4823d43c-f19c-4aa6-b85b-486c83206cfd","Type":"ContainerDied","Data":"43fd8058162837f4652e80f4bff435418da14e9aee40fca6e530a342be49fb1c"} Nov 29 09:19:33 crc kubenswrapper[4795]: I1129 09:19:33.997240 4795 scope.go:117] "RemoveContainer" containerID="4d8f89f7bb8d11dc8a3b954533c6f83c5d0f9cb60cba3799db5c1d582514f54a" Nov 29 09:19:33 crc kubenswrapper[4795]: I1129 09:19:33.997282 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vmgp6" Nov 29 09:19:34 crc kubenswrapper[4795]: I1129 09:19:34.030733 4795 scope.go:117] "RemoveContainer" containerID="d595a18f6c0b2e4ebb93288001d368203c652e54f0c2b3de1ec7c8a17b1bc539" Nov 29 09:19:34 crc kubenswrapper[4795]: I1129 09:19:34.036290 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vmgp6"] Nov 29 09:19:34 crc kubenswrapper[4795]: I1129 09:19:34.049260 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vmgp6"] Nov 29 09:19:34 crc kubenswrapper[4795]: I1129 09:19:34.055071 4795 scope.go:117] "RemoveContainer" containerID="5cac4c7ac1616544fc77e7e117b19e126f5bc004a63e7bc7a235781dedf94808" Nov 29 09:19:34 crc kubenswrapper[4795]: I1129 09:19:34.102705 4795 scope.go:117] "RemoveContainer" containerID="4d8f89f7bb8d11dc8a3b954533c6f83c5d0f9cb60cba3799db5c1d582514f54a" Nov 29 09:19:34 crc kubenswrapper[4795]: E1129 09:19:34.103298 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d8f89f7bb8d11dc8a3b954533c6f83c5d0f9cb60cba3799db5c1d582514f54a\": container with ID starting with 4d8f89f7bb8d11dc8a3b954533c6f83c5d0f9cb60cba3799db5c1d582514f54a not found: ID does not exist" containerID="4d8f89f7bb8d11dc8a3b954533c6f83c5d0f9cb60cba3799db5c1d582514f54a" Nov 29 09:19:34 crc kubenswrapper[4795]: I1129 09:19:34.103347 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d8f89f7bb8d11dc8a3b954533c6f83c5d0f9cb60cba3799db5c1d582514f54a"} err="failed to get container status \"4d8f89f7bb8d11dc8a3b954533c6f83c5d0f9cb60cba3799db5c1d582514f54a\": rpc error: code = NotFound desc = could not find container \"4d8f89f7bb8d11dc8a3b954533c6f83c5d0f9cb60cba3799db5c1d582514f54a\": container with ID starting with 4d8f89f7bb8d11dc8a3b954533c6f83c5d0f9cb60cba3799db5c1d582514f54a not 
found: ID does not exist" Nov 29 09:19:34 crc kubenswrapper[4795]: I1129 09:19:34.103375 4795 scope.go:117] "RemoveContainer" containerID="d595a18f6c0b2e4ebb93288001d368203c652e54f0c2b3de1ec7c8a17b1bc539" Nov 29 09:19:34 crc kubenswrapper[4795]: E1129 09:19:34.104322 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d595a18f6c0b2e4ebb93288001d368203c652e54f0c2b3de1ec7c8a17b1bc539\": container with ID starting with d595a18f6c0b2e4ebb93288001d368203c652e54f0c2b3de1ec7c8a17b1bc539 not found: ID does not exist" containerID="d595a18f6c0b2e4ebb93288001d368203c652e54f0c2b3de1ec7c8a17b1bc539" Nov 29 09:19:34 crc kubenswrapper[4795]: I1129 09:19:34.104424 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d595a18f6c0b2e4ebb93288001d368203c652e54f0c2b3de1ec7c8a17b1bc539"} err="failed to get container status \"d595a18f6c0b2e4ebb93288001d368203c652e54f0c2b3de1ec7c8a17b1bc539\": rpc error: code = NotFound desc = could not find container \"d595a18f6c0b2e4ebb93288001d368203c652e54f0c2b3de1ec7c8a17b1bc539\": container with ID starting with d595a18f6c0b2e4ebb93288001d368203c652e54f0c2b3de1ec7c8a17b1bc539 not found: ID does not exist" Nov 29 09:19:34 crc kubenswrapper[4795]: I1129 09:19:34.104513 4795 scope.go:117] "RemoveContainer" containerID="5cac4c7ac1616544fc77e7e117b19e126f5bc004a63e7bc7a235781dedf94808" Nov 29 09:19:34 crc kubenswrapper[4795]: E1129 09:19:34.104928 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cac4c7ac1616544fc77e7e117b19e126f5bc004a63e7bc7a235781dedf94808\": container with ID starting with 5cac4c7ac1616544fc77e7e117b19e126f5bc004a63e7bc7a235781dedf94808 not found: ID does not exist" containerID="5cac4c7ac1616544fc77e7e117b19e126f5bc004a63e7bc7a235781dedf94808" Nov 29 09:19:34 crc kubenswrapper[4795]: I1129 09:19:34.105029 4795 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cac4c7ac1616544fc77e7e117b19e126f5bc004a63e7bc7a235781dedf94808"} err="failed to get container status \"5cac4c7ac1616544fc77e7e117b19e126f5bc004a63e7bc7a235781dedf94808\": rpc error: code = NotFound desc = could not find container \"5cac4c7ac1616544fc77e7e117b19e126f5bc004a63e7bc7a235781dedf94808\": container with ID starting with 5cac4c7ac1616544fc77e7e117b19e126f5bc004a63e7bc7a235781dedf94808 not found: ID does not exist" Nov 29 09:19:34 crc kubenswrapper[4795]: I1129 09:19:34.289574 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4823d43c-f19c-4aa6-b85b-486c83206cfd" path="/var/lib/kubelet/pods/4823d43c-f19c-4aa6-b85b-486c83206cfd/volumes" Nov 29 09:19:41 crc kubenswrapper[4795]: I1129 09:19:41.941110 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 09:19:41 crc kubenswrapper[4795]: I1129 09:19:41.941815 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 09:20:11 crc kubenswrapper[4795]: I1129 09:20:11.941206 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 09:20:11 crc kubenswrapper[4795]: I1129 09:20:11.941978 4795 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 09:20:41 crc kubenswrapper[4795]: I1129 09:20:41.941085 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 09:20:41 crc kubenswrapper[4795]: I1129 09:20:41.942893 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 09:20:41 crc kubenswrapper[4795]: I1129 09:20:41.943036 4795 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" Nov 29 09:20:41 crc kubenswrapper[4795]: I1129 09:20:41.944229 4795 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e3c02c44d457b8bc56d4e6c57834916c5a24411b16830e7c64fb79753ef4fe61"} pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 09:20:41 crc kubenswrapper[4795]: I1129 09:20:41.944386 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" 
containerID="cri-o://e3c02c44d457b8bc56d4e6c57834916c5a24411b16830e7c64fb79753ef4fe61" gracePeriod=600 Nov 29 09:20:42 crc kubenswrapper[4795]: E1129 09:20:42.077476 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:20:42 crc kubenswrapper[4795]: I1129 09:20:42.767748 4795 generic.go:334] "Generic (PLEG): container finished" podID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerID="e3c02c44d457b8bc56d4e6c57834916c5a24411b16830e7c64fb79753ef4fe61" exitCode=0 Nov 29 09:20:42 crc kubenswrapper[4795]: I1129 09:20:42.767803 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerDied","Data":"e3c02c44d457b8bc56d4e6c57834916c5a24411b16830e7c64fb79753ef4fe61"} Nov 29 09:20:42 crc kubenswrapper[4795]: I1129 09:20:42.767849 4795 scope.go:117] "RemoveContainer" containerID="6c628579721cc1552166fc799b21fa600a7e10daf7d5387acee4ce9b0073e7fd" Nov 29 09:20:42 crc kubenswrapper[4795]: I1129 09:20:42.768833 4795 scope.go:117] "RemoveContainer" containerID="e3c02c44d457b8bc56d4e6c57834916c5a24411b16830e7c64fb79753ef4fe61" Nov 29 09:20:42 crc kubenswrapper[4795]: E1129 09:20:42.769500 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" 
podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:20:57 crc kubenswrapper[4795]: I1129 09:20:57.276435 4795 scope.go:117] "RemoveContainer" containerID="e3c02c44d457b8bc56d4e6c57834916c5a24411b16830e7c64fb79753ef4fe61" Nov 29 09:20:57 crc kubenswrapper[4795]: E1129 09:20:57.277783 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:21:12 crc kubenswrapper[4795]: I1129 09:21:12.276421 4795 scope.go:117] "RemoveContainer" containerID="e3c02c44d457b8bc56d4e6c57834916c5a24411b16830e7c64fb79753ef4fe61" Nov 29 09:21:12 crc kubenswrapper[4795]: E1129 09:21:12.277366 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:21:25 crc kubenswrapper[4795]: I1129 09:21:25.276610 4795 scope.go:117] "RemoveContainer" containerID="e3c02c44d457b8bc56d4e6c57834916c5a24411b16830e7c64fb79753ef4fe61" Nov 29 09:21:25 crc kubenswrapper[4795]: E1129 09:21:25.277284 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:21:40 crc kubenswrapper[4795]: I1129 09:21:40.276424 4795 scope.go:117] "RemoveContainer" containerID="e3c02c44d457b8bc56d4e6c57834916c5a24411b16830e7c64fb79753ef4fe61" Nov 29 09:21:40 crc kubenswrapper[4795]: E1129 09:21:40.277396 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:21:48 crc kubenswrapper[4795]: I1129 09:21:48.531991 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vld59"] Nov 29 09:21:48 crc kubenswrapper[4795]: E1129 09:21:48.533142 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4823d43c-f19c-4aa6-b85b-486c83206cfd" containerName="extract-utilities" Nov 29 09:21:48 crc kubenswrapper[4795]: I1129 09:21:48.533156 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="4823d43c-f19c-4aa6-b85b-486c83206cfd" containerName="extract-utilities" Nov 29 09:21:48 crc kubenswrapper[4795]: E1129 09:21:48.533178 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4823d43c-f19c-4aa6-b85b-486c83206cfd" containerName="extract-content" Nov 29 09:21:48 crc kubenswrapper[4795]: I1129 09:21:48.533185 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="4823d43c-f19c-4aa6-b85b-486c83206cfd" containerName="extract-content" Nov 29 09:21:48 crc kubenswrapper[4795]: E1129 09:21:48.533212 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4823d43c-f19c-4aa6-b85b-486c83206cfd" containerName="registry-server" Nov 29 09:21:48 crc kubenswrapper[4795]: 
I1129 09:21:48.533218 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="4823d43c-f19c-4aa6-b85b-486c83206cfd" containerName="registry-server" Nov 29 09:21:48 crc kubenswrapper[4795]: I1129 09:21:48.533470 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="4823d43c-f19c-4aa6-b85b-486c83206cfd" containerName="registry-server" Nov 29 09:21:48 crc kubenswrapper[4795]: I1129 09:21:48.535133 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vld59" Nov 29 09:21:48 crc kubenswrapper[4795]: I1129 09:21:48.547227 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vld59"] Nov 29 09:21:48 crc kubenswrapper[4795]: I1129 09:21:48.645651 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqw5q\" (UniqueName: \"kubernetes.io/projected/82a6e1c6-63f6-4b4b-be68-7ec6403e5966-kube-api-access-lqw5q\") pod \"redhat-marketplace-vld59\" (UID: \"82a6e1c6-63f6-4b4b-be68-7ec6403e5966\") " pod="openshift-marketplace/redhat-marketplace-vld59" Nov 29 09:21:48 crc kubenswrapper[4795]: I1129 09:21:48.645702 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82a6e1c6-63f6-4b4b-be68-7ec6403e5966-catalog-content\") pod \"redhat-marketplace-vld59\" (UID: \"82a6e1c6-63f6-4b4b-be68-7ec6403e5966\") " pod="openshift-marketplace/redhat-marketplace-vld59" Nov 29 09:21:48 crc kubenswrapper[4795]: I1129 09:21:48.645898 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82a6e1c6-63f6-4b4b-be68-7ec6403e5966-utilities\") pod \"redhat-marketplace-vld59\" (UID: \"82a6e1c6-63f6-4b4b-be68-7ec6403e5966\") " pod="openshift-marketplace/redhat-marketplace-vld59" Nov 29 09:21:48 crc 
kubenswrapper[4795]: I1129 09:21:48.747519 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82a6e1c6-63f6-4b4b-be68-7ec6403e5966-utilities\") pod \"redhat-marketplace-vld59\" (UID: \"82a6e1c6-63f6-4b4b-be68-7ec6403e5966\") " pod="openshift-marketplace/redhat-marketplace-vld59" Nov 29 09:21:48 crc kubenswrapper[4795]: I1129 09:21:48.747691 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqw5q\" (UniqueName: \"kubernetes.io/projected/82a6e1c6-63f6-4b4b-be68-7ec6403e5966-kube-api-access-lqw5q\") pod \"redhat-marketplace-vld59\" (UID: \"82a6e1c6-63f6-4b4b-be68-7ec6403e5966\") " pod="openshift-marketplace/redhat-marketplace-vld59" Nov 29 09:21:48 crc kubenswrapper[4795]: I1129 09:21:48.747713 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82a6e1c6-63f6-4b4b-be68-7ec6403e5966-catalog-content\") pod \"redhat-marketplace-vld59\" (UID: \"82a6e1c6-63f6-4b4b-be68-7ec6403e5966\") " pod="openshift-marketplace/redhat-marketplace-vld59" Nov 29 09:21:48 crc kubenswrapper[4795]: I1129 09:21:48.748045 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82a6e1c6-63f6-4b4b-be68-7ec6403e5966-utilities\") pod \"redhat-marketplace-vld59\" (UID: \"82a6e1c6-63f6-4b4b-be68-7ec6403e5966\") " pod="openshift-marketplace/redhat-marketplace-vld59" Nov 29 09:21:48 crc kubenswrapper[4795]: I1129 09:21:48.748208 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82a6e1c6-63f6-4b4b-be68-7ec6403e5966-catalog-content\") pod \"redhat-marketplace-vld59\" (UID: \"82a6e1c6-63f6-4b4b-be68-7ec6403e5966\") " pod="openshift-marketplace/redhat-marketplace-vld59" Nov 29 09:21:48 crc kubenswrapper[4795]: I1129 09:21:48.770791 4795 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqw5q\" (UniqueName: \"kubernetes.io/projected/82a6e1c6-63f6-4b4b-be68-7ec6403e5966-kube-api-access-lqw5q\") pod \"redhat-marketplace-vld59\" (UID: \"82a6e1c6-63f6-4b4b-be68-7ec6403e5966\") " pod="openshift-marketplace/redhat-marketplace-vld59" Nov 29 09:21:48 crc kubenswrapper[4795]: I1129 09:21:48.859161 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vld59" Nov 29 09:21:49 crc kubenswrapper[4795]: I1129 09:21:49.360416 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vld59"] Nov 29 09:21:49 crc kubenswrapper[4795]: W1129 09:21:49.366367 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82a6e1c6_63f6_4b4b_be68_7ec6403e5966.slice/crio-53f5c5a1bcab77f42b1b6e65452f190fba22e638cca0e18ba470d13fe94be563 WatchSource:0}: Error finding container 53f5c5a1bcab77f42b1b6e65452f190fba22e638cca0e18ba470d13fe94be563: Status 404 returned error can't find the container with id 53f5c5a1bcab77f42b1b6e65452f190fba22e638cca0e18ba470d13fe94be563 Nov 29 09:21:49 crc kubenswrapper[4795]: I1129 09:21:49.530521 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vld59" event={"ID":"82a6e1c6-63f6-4b4b-be68-7ec6403e5966","Type":"ContainerStarted","Data":"53f5c5a1bcab77f42b1b6e65452f190fba22e638cca0e18ba470d13fe94be563"} Nov 29 09:21:50 crc kubenswrapper[4795]: I1129 09:21:50.547033 4795 generic.go:334] "Generic (PLEG): container finished" podID="82a6e1c6-63f6-4b4b-be68-7ec6403e5966" containerID="57ddb06988e856907ef47c147462b1487999b6073c8997133fd782c3cece7a59" exitCode=0 Nov 29 09:21:50 crc kubenswrapper[4795]: I1129 09:21:50.547156 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vld59" 
event={"ID":"82a6e1c6-63f6-4b4b-be68-7ec6403e5966","Type":"ContainerDied","Data":"57ddb06988e856907ef47c147462b1487999b6073c8997133fd782c3cece7a59"} Nov 29 09:21:51 crc kubenswrapper[4795]: I1129 09:21:51.558935 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vld59" event={"ID":"82a6e1c6-63f6-4b4b-be68-7ec6403e5966","Type":"ContainerStarted","Data":"340314b644b2434ac53cc012185bd7a799dae7eb9b441ff7f6533fdf09211796"} Nov 29 09:21:52 crc kubenswrapper[4795]: I1129 09:21:52.569674 4795 generic.go:334] "Generic (PLEG): container finished" podID="82a6e1c6-63f6-4b4b-be68-7ec6403e5966" containerID="340314b644b2434ac53cc012185bd7a799dae7eb9b441ff7f6533fdf09211796" exitCode=0 Nov 29 09:21:52 crc kubenswrapper[4795]: I1129 09:21:52.569714 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vld59" event={"ID":"82a6e1c6-63f6-4b4b-be68-7ec6403e5966","Type":"ContainerDied","Data":"340314b644b2434ac53cc012185bd7a799dae7eb9b441ff7f6533fdf09211796"} Nov 29 09:21:53 crc kubenswrapper[4795]: I1129 09:21:53.276030 4795 scope.go:117] "RemoveContainer" containerID="e3c02c44d457b8bc56d4e6c57834916c5a24411b16830e7c64fb79753ef4fe61" Nov 29 09:21:53 crc kubenswrapper[4795]: E1129 09:21:53.276619 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:21:53 crc kubenswrapper[4795]: I1129 09:21:53.581139 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vld59" 
event={"ID":"82a6e1c6-63f6-4b4b-be68-7ec6403e5966","Type":"ContainerStarted","Data":"cbfcdfeec240983e3068d196de4c555c7e80df308548314e6f3e0378cee22926"} Nov 29 09:21:53 crc kubenswrapper[4795]: I1129 09:21:53.607192 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vld59" podStartSLOduration=3.09976013 podStartE2EDuration="5.607170293s" podCreationTimestamp="2025-11-29 09:21:48 +0000 UTC" firstStartedPulling="2025-11-29 09:21:50.549849614 +0000 UTC m=+6156.525425424" lastFinishedPulling="2025-11-29 09:21:53.057259797 +0000 UTC m=+6159.032835587" observedRunningTime="2025-11-29 09:21:53.603440067 +0000 UTC m=+6159.579015857" watchObservedRunningTime="2025-11-29 09:21:53.607170293 +0000 UTC m=+6159.582746083" Nov 29 09:21:58 crc kubenswrapper[4795]: I1129 09:21:58.862672 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vld59" Nov 29 09:21:58 crc kubenswrapper[4795]: I1129 09:21:58.863456 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vld59" Nov 29 09:21:58 crc kubenswrapper[4795]: I1129 09:21:58.918313 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vld59" Nov 29 09:21:59 crc kubenswrapper[4795]: I1129 09:21:59.704578 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vld59" Nov 29 09:21:59 crc kubenswrapper[4795]: I1129 09:21:59.756516 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vld59"] Nov 29 09:22:01 crc kubenswrapper[4795]: I1129 09:22:01.674565 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vld59" podUID="82a6e1c6-63f6-4b4b-be68-7ec6403e5966" containerName="registry-server" 
containerID="cri-o://cbfcdfeec240983e3068d196de4c555c7e80df308548314e6f3e0378cee22926" gracePeriod=2 Nov 29 09:22:02 crc kubenswrapper[4795]: I1129 09:22:02.211948 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vld59" Nov 29 09:22:02 crc kubenswrapper[4795]: I1129 09:22:02.397488 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqw5q\" (UniqueName: \"kubernetes.io/projected/82a6e1c6-63f6-4b4b-be68-7ec6403e5966-kube-api-access-lqw5q\") pod \"82a6e1c6-63f6-4b4b-be68-7ec6403e5966\" (UID: \"82a6e1c6-63f6-4b4b-be68-7ec6403e5966\") " Nov 29 09:22:02 crc kubenswrapper[4795]: I1129 09:22:02.397980 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82a6e1c6-63f6-4b4b-be68-7ec6403e5966-utilities\") pod \"82a6e1c6-63f6-4b4b-be68-7ec6403e5966\" (UID: \"82a6e1c6-63f6-4b4b-be68-7ec6403e5966\") " Nov 29 09:22:02 crc kubenswrapper[4795]: I1129 09:22:02.398204 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82a6e1c6-63f6-4b4b-be68-7ec6403e5966-catalog-content\") pod \"82a6e1c6-63f6-4b4b-be68-7ec6403e5966\" (UID: \"82a6e1c6-63f6-4b4b-be68-7ec6403e5966\") " Nov 29 09:22:02 crc kubenswrapper[4795]: I1129 09:22:02.398641 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82a6e1c6-63f6-4b4b-be68-7ec6403e5966-utilities" (OuterVolumeSpecName: "utilities") pod "82a6e1c6-63f6-4b4b-be68-7ec6403e5966" (UID: "82a6e1c6-63f6-4b4b-be68-7ec6403e5966"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 09:22:02 crc kubenswrapper[4795]: I1129 09:22:02.398916 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82a6e1c6-63f6-4b4b-be68-7ec6403e5966-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 09:22:02 crc kubenswrapper[4795]: I1129 09:22:02.405491 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82a6e1c6-63f6-4b4b-be68-7ec6403e5966-kube-api-access-lqw5q" (OuterVolumeSpecName: "kube-api-access-lqw5q") pod "82a6e1c6-63f6-4b4b-be68-7ec6403e5966" (UID: "82a6e1c6-63f6-4b4b-be68-7ec6403e5966"). InnerVolumeSpecName "kube-api-access-lqw5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 09:22:02 crc kubenswrapper[4795]: I1129 09:22:02.429981 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82a6e1c6-63f6-4b4b-be68-7ec6403e5966-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "82a6e1c6-63f6-4b4b-be68-7ec6403e5966" (UID: "82a6e1c6-63f6-4b4b-be68-7ec6403e5966"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 09:22:02 crc kubenswrapper[4795]: I1129 09:22:02.501439 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqw5q\" (UniqueName: \"kubernetes.io/projected/82a6e1c6-63f6-4b4b-be68-7ec6403e5966-kube-api-access-lqw5q\") on node \"crc\" DevicePath \"\"" Nov 29 09:22:02 crc kubenswrapper[4795]: I1129 09:22:02.501490 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82a6e1c6-63f6-4b4b-be68-7ec6403e5966-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 09:22:02 crc kubenswrapper[4795]: I1129 09:22:02.692942 4795 generic.go:334] "Generic (PLEG): container finished" podID="82a6e1c6-63f6-4b4b-be68-7ec6403e5966" containerID="cbfcdfeec240983e3068d196de4c555c7e80df308548314e6f3e0378cee22926" exitCode=0 Nov 29 09:22:02 crc kubenswrapper[4795]: I1129 09:22:02.692987 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vld59" event={"ID":"82a6e1c6-63f6-4b4b-be68-7ec6403e5966","Type":"ContainerDied","Data":"cbfcdfeec240983e3068d196de4c555c7e80df308548314e6f3e0378cee22926"} Nov 29 09:22:02 crc kubenswrapper[4795]: I1129 09:22:02.693013 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vld59" event={"ID":"82a6e1c6-63f6-4b4b-be68-7ec6403e5966","Type":"ContainerDied","Data":"53f5c5a1bcab77f42b1b6e65452f190fba22e638cca0e18ba470d13fe94be563"} Nov 29 09:22:02 crc kubenswrapper[4795]: I1129 09:22:02.693031 4795 scope.go:117] "RemoveContainer" containerID="cbfcdfeec240983e3068d196de4c555c7e80df308548314e6f3e0378cee22926" Nov 29 09:22:02 crc kubenswrapper[4795]: I1129 09:22:02.693176 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vld59" Nov 29 09:22:02 crc kubenswrapper[4795]: I1129 09:22:02.736061 4795 scope.go:117] "RemoveContainer" containerID="340314b644b2434ac53cc012185bd7a799dae7eb9b441ff7f6533fdf09211796" Nov 29 09:22:02 crc kubenswrapper[4795]: I1129 09:22:02.737895 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vld59"] Nov 29 09:22:02 crc kubenswrapper[4795]: I1129 09:22:02.762404 4795 scope.go:117] "RemoveContainer" containerID="57ddb06988e856907ef47c147462b1487999b6073c8997133fd782c3cece7a59" Nov 29 09:22:02 crc kubenswrapper[4795]: I1129 09:22:02.773307 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vld59"] Nov 29 09:22:02 crc kubenswrapper[4795]: I1129 09:22:02.811611 4795 scope.go:117] "RemoveContainer" containerID="cbfcdfeec240983e3068d196de4c555c7e80df308548314e6f3e0378cee22926" Nov 29 09:22:02 crc kubenswrapper[4795]: E1129 09:22:02.812165 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbfcdfeec240983e3068d196de4c555c7e80df308548314e6f3e0378cee22926\": container with ID starting with cbfcdfeec240983e3068d196de4c555c7e80df308548314e6f3e0378cee22926 not found: ID does not exist" containerID="cbfcdfeec240983e3068d196de4c555c7e80df308548314e6f3e0378cee22926" Nov 29 09:22:02 crc kubenswrapper[4795]: I1129 09:22:02.812203 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbfcdfeec240983e3068d196de4c555c7e80df308548314e6f3e0378cee22926"} err="failed to get container status \"cbfcdfeec240983e3068d196de4c555c7e80df308548314e6f3e0378cee22926\": rpc error: code = NotFound desc = could not find container \"cbfcdfeec240983e3068d196de4c555c7e80df308548314e6f3e0378cee22926\": container with ID starting with cbfcdfeec240983e3068d196de4c555c7e80df308548314e6f3e0378cee22926 not found: 
ID does not exist" Nov 29 09:22:02 crc kubenswrapper[4795]: I1129 09:22:02.812229 4795 scope.go:117] "RemoveContainer" containerID="340314b644b2434ac53cc012185bd7a799dae7eb9b441ff7f6533fdf09211796" Nov 29 09:22:02 crc kubenswrapper[4795]: E1129 09:22:02.812515 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"340314b644b2434ac53cc012185bd7a799dae7eb9b441ff7f6533fdf09211796\": container with ID starting with 340314b644b2434ac53cc012185bd7a799dae7eb9b441ff7f6533fdf09211796 not found: ID does not exist" containerID="340314b644b2434ac53cc012185bd7a799dae7eb9b441ff7f6533fdf09211796" Nov 29 09:22:02 crc kubenswrapper[4795]: I1129 09:22:02.812541 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"340314b644b2434ac53cc012185bd7a799dae7eb9b441ff7f6533fdf09211796"} err="failed to get container status \"340314b644b2434ac53cc012185bd7a799dae7eb9b441ff7f6533fdf09211796\": rpc error: code = NotFound desc = could not find container \"340314b644b2434ac53cc012185bd7a799dae7eb9b441ff7f6533fdf09211796\": container with ID starting with 340314b644b2434ac53cc012185bd7a799dae7eb9b441ff7f6533fdf09211796 not found: ID does not exist" Nov 29 09:22:02 crc kubenswrapper[4795]: I1129 09:22:02.812560 4795 scope.go:117] "RemoveContainer" containerID="57ddb06988e856907ef47c147462b1487999b6073c8997133fd782c3cece7a59" Nov 29 09:22:02 crc kubenswrapper[4795]: E1129 09:22:02.813004 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57ddb06988e856907ef47c147462b1487999b6073c8997133fd782c3cece7a59\": container with ID starting with 57ddb06988e856907ef47c147462b1487999b6073c8997133fd782c3cece7a59 not found: ID does not exist" containerID="57ddb06988e856907ef47c147462b1487999b6073c8997133fd782c3cece7a59" Nov 29 09:22:02 crc kubenswrapper[4795]: I1129 09:22:02.813053 4795 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57ddb06988e856907ef47c147462b1487999b6073c8997133fd782c3cece7a59"} err="failed to get container status \"57ddb06988e856907ef47c147462b1487999b6073c8997133fd782c3cece7a59\": rpc error: code = NotFound desc = could not find container \"57ddb06988e856907ef47c147462b1487999b6073c8997133fd782c3cece7a59\": container with ID starting with 57ddb06988e856907ef47c147462b1487999b6073c8997133fd782c3cece7a59 not found: ID does not exist" Nov 29 09:22:04 crc kubenswrapper[4795]: I1129 09:22:04.296301 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82a6e1c6-63f6-4b4b-be68-7ec6403e5966" path="/var/lib/kubelet/pods/82a6e1c6-63f6-4b4b-be68-7ec6403e5966/volumes" Nov 29 09:22:07 crc kubenswrapper[4795]: I1129 09:22:07.276110 4795 scope.go:117] "RemoveContainer" containerID="e3c02c44d457b8bc56d4e6c57834916c5a24411b16830e7c64fb79753ef4fe61" Nov 29 09:22:07 crc kubenswrapper[4795]: E1129 09:22:07.277074 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:22:19 crc kubenswrapper[4795]: I1129 09:22:19.276163 4795 scope.go:117] "RemoveContainer" containerID="e3c02c44d457b8bc56d4e6c57834916c5a24411b16830e7c64fb79753ef4fe61" Nov 29 09:22:19 crc kubenswrapper[4795]: E1129 09:22:19.277009 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:22:33 crc kubenswrapper[4795]: I1129 09:22:33.276891 4795 scope.go:117] "RemoveContainer" containerID="e3c02c44d457b8bc56d4e6c57834916c5a24411b16830e7c64fb79753ef4fe61" Nov 29 09:22:33 crc kubenswrapper[4795]: E1129 09:22:33.279127 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:22:48 crc kubenswrapper[4795]: I1129 09:22:48.275863 4795 scope.go:117] "RemoveContainer" containerID="e3c02c44d457b8bc56d4e6c57834916c5a24411b16830e7c64fb79753ef4fe61" Nov 29 09:22:48 crc kubenswrapper[4795]: E1129 09:22:48.276631 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:23:00 crc kubenswrapper[4795]: I1129 09:23:00.276117 4795 scope.go:117] "RemoveContainer" containerID="e3c02c44d457b8bc56d4e6c57834916c5a24411b16830e7c64fb79753ef4fe61" Nov 29 09:23:00 crc kubenswrapper[4795]: E1129 09:23:00.276863 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:23:13 crc kubenswrapper[4795]: I1129 09:23:13.277575 4795 scope.go:117] "RemoveContainer" containerID="e3c02c44d457b8bc56d4e6c57834916c5a24411b16830e7c64fb79753ef4fe61" Nov 29 09:23:13 crc kubenswrapper[4795]: E1129 09:23:13.278745 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:23:24 crc kubenswrapper[4795]: I1129 09:23:24.289886 4795 scope.go:117] "RemoveContainer" containerID="e3c02c44d457b8bc56d4e6c57834916c5a24411b16830e7c64fb79753ef4fe61" Nov 29 09:23:24 crc kubenswrapper[4795]: E1129 09:23:24.290741 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:23:35 crc kubenswrapper[4795]: I1129 09:23:35.276228 4795 scope.go:117] "RemoveContainer" containerID="e3c02c44d457b8bc56d4e6c57834916c5a24411b16830e7c64fb79753ef4fe61" Nov 29 09:23:35 crc kubenswrapper[4795]: E1129 09:23:35.277316 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:23:48 crc kubenswrapper[4795]: I1129 09:23:48.276007 4795 scope.go:117] "RemoveContainer" containerID="e3c02c44d457b8bc56d4e6c57834916c5a24411b16830e7c64fb79753ef4fe61" Nov 29 09:23:48 crc kubenswrapper[4795]: E1129 09:23:48.276774 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:24:03 crc kubenswrapper[4795]: I1129 09:24:03.275447 4795 scope.go:117] "RemoveContainer" containerID="e3c02c44d457b8bc56d4e6c57834916c5a24411b16830e7c64fb79753ef4fe61" Nov 29 09:24:03 crc kubenswrapper[4795]: E1129 09:24:03.276545 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:24:14 crc kubenswrapper[4795]: I1129 09:24:14.298001 4795 scope.go:117] "RemoveContainer" containerID="e3c02c44d457b8bc56d4e6c57834916c5a24411b16830e7c64fb79753ef4fe61" Nov 29 09:24:14 crc kubenswrapper[4795]: E1129 09:24:14.298960 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:24:29 crc kubenswrapper[4795]: I1129 09:24:29.278022 4795 scope.go:117] "RemoveContainer" containerID="e3c02c44d457b8bc56d4e6c57834916c5a24411b16830e7c64fb79753ef4fe61" Nov 29 09:24:29 crc kubenswrapper[4795]: E1129 09:24:29.279110 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:24:41 crc kubenswrapper[4795]: I1129 09:24:41.276846 4795 scope.go:117] "RemoveContainer" containerID="e3c02c44d457b8bc56d4e6c57834916c5a24411b16830e7c64fb79753ef4fe61" Nov 29 09:24:41 crc kubenswrapper[4795]: E1129 09:24:41.277772 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:24:52 crc kubenswrapper[4795]: I1129 09:24:52.276784 4795 scope.go:117] "RemoveContainer" containerID="e3c02c44d457b8bc56d4e6c57834916c5a24411b16830e7c64fb79753ef4fe61" Nov 29 09:24:52 crc kubenswrapper[4795]: E1129 09:24:52.277719 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:25:05 crc kubenswrapper[4795]: I1129 09:25:05.275737 4795 scope.go:117] "RemoveContainer" containerID="e3c02c44d457b8bc56d4e6c57834916c5a24411b16830e7c64fb79753ef4fe61" Nov 29 09:25:05 crc kubenswrapper[4795]: E1129 09:25:05.276515 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:25:16 crc kubenswrapper[4795]: I1129 09:25:16.279128 4795 scope.go:117] "RemoveContainer" containerID="e3c02c44d457b8bc56d4e6c57834916c5a24411b16830e7c64fb79753ef4fe61" Nov 29 09:25:16 crc kubenswrapper[4795]: E1129 09:25:16.285176 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:25:31 crc kubenswrapper[4795]: I1129 09:25:31.275780 4795 scope.go:117] "RemoveContainer" containerID="e3c02c44d457b8bc56d4e6c57834916c5a24411b16830e7c64fb79753ef4fe61" Nov 29 09:25:31 crc kubenswrapper[4795]: E1129 09:25:31.277882 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:25:42 crc kubenswrapper[4795]: I1129 09:25:42.277641 4795 scope.go:117] "RemoveContainer" containerID="e3c02c44d457b8bc56d4e6c57834916c5a24411b16830e7c64fb79753ef4fe61" Nov 29 09:25:43 crc kubenswrapper[4795]: I1129 09:25:43.371796 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerStarted","Data":"38542a01fd0557a5c242f6a842e56a1fb659b4b79c57d243f879c3de7726e808"} Nov 29 09:25:57 crc kubenswrapper[4795]: I1129 09:25:57.540368 4795 generic.go:334] "Generic (PLEG): container finished" podID="95552069-4919-43f3-88d5-2c40ff4c0836" containerID="a01c12be7747d9f5daf6a2f081591e4950b15dbf1e36e3d23246657455eb56bf" exitCode=0 Nov 29 09:25:57 crc kubenswrapper[4795]: I1129 09:25:57.540452 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"95552069-4919-43f3-88d5-2c40ff4c0836","Type":"ContainerDied","Data":"a01c12be7747d9f5daf6a2f081591e4950b15dbf1e36e3d23246657455eb56bf"} Nov 29 09:25:59 crc kubenswrapper[4795]: I1129 09:25:59.007361 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 29 09:25:59 crc kubenswrapper[4795]: I1129 09:25:59.210529 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbc2r\" (UniqueName: \"kubernetes.io/projected/95552069-4919-43f3-88d5-2c40ff4c0836-kube-api-access-wbc2r\") pod \"95552069-4919-43f3-88d5-2c40ff4c0836\" (UID: \"95552069-4919-43f3-88d5-2c40ff4c0836\") " Nov 29 09:25:59 crc kubenswrapper[4795]: I1129 09:25:59.210613 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/95552069-4919-43f3-88d5-2c40ff4c0836-openstack-config-secret\") pod \"95552069-4919-43f3-88d5-2c40ff4c0836\" (UID: \"95552069-4919-43f3-88d5-2c40ff4c0836\") " Nov 29 09:25:59 crc kubenswrapper[4795]: I1129 09:25:59.210640 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/95552069-4919-43f3-88d5-2c40ff4c0836-ca-certs\") pod \"95552069-4919-43f3-88d5-2c40ff4c0836\" (UID: \"95552069-4919-43f3-88d5-2c40ff4c0836\") " Nov 29 09:25:59 crc kubenswrapper[4795]: I1129 09:25:59.210740 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"95552069-4919-43f3-88d5-2c40ff4c0836\" (UID: \"95552069-4919-43f3-88d5-2c40ff4c0836\") " Nov 29 09:25:59 crc kubenswrapper[4795]: I1129 09:25:59.210764 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/95552069-4919-43f3-88d5-2c40ff4c0836-openstack-config\") pod \"95552069-4919-43f3-88d5-2c40ff4c0836\" (UID: \"95552069-4919-43f3-88d5-2c40ff4c0836\") " Nov 29 09:25:59 crc kubenswrapper[4795]: I1129 09:25:59.210785 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/95552069-4919-43f3-88d5-2c40ff4c0836-test-operator-ephemeral-workdir\") pod \"95552069-4919-43f3-88d5-2c40ff4c0836\" (UID: \"95552069-4919-43f3-88d5-2c40ff4c0836\") " Nov 29 09:25:59 crc kubenswrapper[4795]: I1129 09:25:59.210839 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95552069-4919-43f3-88d5-2c40ff4c0836-config-data\") pod \"95552069-4919-43f3-88d5-2c40ff4c0836\" (UID: \"95552069-4919-43f3-88d5-2c40ff4c0836\") " Nov 29 09:25:59 crc kubenswrapper[4795]: I1129 09:25:59.210887 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/95552069-4919-43f3-88d5-2c40ff4c0836-test-operator-ephemeral-temporary\") pod \"95552069-4919-43f3-88d5-2c40ff4c0836\" (UID: \"95552069-4919-43f3-88d5-2c40ff4c0836\") " Nov 29 09:25:59 crc kubenswrapper[4795]: I1129 09:25:59.211094 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/95552069-4919-43f3-88d5-2c40ff4c0836-ssh-key\") pod \"95552069-4919-43f3-88d5-2c40ff4c0836\" (UID: \"95552069-4919-43f3-88d5-2c40ff4c0836\") " Nov 29 09:25:59 crc kubenswrapper[4795]: I1129 09:25:59.212386 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95552069-4919-43f3-88d5-2c40ff4c0836-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "95552069-4919-43f3-88d5-2c40ff4c0836" (UID: "95552069-4919-43f3-88d5-2c40ff4c0836"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 09:25:59 crc kubenswrapper[4795]: I1129 09:25:59.212535 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95552069-4919-43f3-88d5-2c40ff4c0836-config-data" (OuterVolumeSpecName: "config-data") pod "95552069-4919-43f3-88d5-2c40ff4c0836" (UID: "95552069-4919-43f3-88d5-2c40ff4c0836"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 09:25:59 crc kubenswrapper[4795]: I1129 09:25:59.217768 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "test-operator-logs") pod "95552069-4919-43f3-88d5-2c40ff4c0836" (UID: "95552069-4919-43f3-88d5-2c40ff4c0836"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 29 09:25:59 crc kubenswrapper[4795]: I1129 09:25:59.223558 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95552069-4919-43f3-88d5-2c40ff4c0836-kube-api-access-wbc2r" (OuterVolumeSpecName: "kube-api-access-wbc2r") pod "95552069-4919-43f3-88d5-2c40ff4c0836" (UID: "95552069-4919-43f3-88d5-2c40ff4c0836"). InnerVolumeSpecName "kube-api-access-wbc2r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 09:25:59 crc kubenswrapper[4795]: I1129 09:25:59.229688 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95552069-4919-43f3-88d5-2c40ff4c0836-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "95552069-4919-43f3-88d5-2c40ff4c0836" (UID: "95552069-4919-43f3-88d5-2c40ff4c0836"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 09:25:59 crc kubenswrapper[4795]: I1129 09:25:59.248693 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95552069-4919-43f3-88d5-2c40ff4c0836-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "95552069-4919-43f3-88d5-2c40ff4c0836" (UID: "95552069-4919-43f3-88d5-2c40ff4c0836"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 09:25:59 crc kubenswrapper[4795]: I1129 09:25:59.251973 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95552069-4919-43f3-88d5-2c40ff4c0836-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "95552069-4919-43f3-88d5-2c40ff4c0836" (UID: "95552069-4919-43f3-88d5-2c40ff4c0836"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 09:25:59 crc kubenswrapper[4795]: I1129 09:25:59.252320 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95552069-4919-43f3-88d5-2c40ff4c0836-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "95552069-4919-43f3-88d5-2c40ff4c0836" (UID: "95552069-4919-43f3-88d5-2c40ff4c0836"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 09:25:59 crc kubenswrapper[4795]: I1129 09:25:59.297399 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95552069-4919-43f3-88d5-2c40ff4c0836-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "95552069-4919-43f3-88d5-2c40ff4c0836" (UID: "95552069-4919-43f3-88d5-2c40ff4c0836"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 09:25:59 crc kubenswrapper[4795]: I1129 09:25:59.315013 4795 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 29 09:25:59 crc kubenswrapper[4795]: I1129 09:25:59.315054 4795 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/95552069-4919-43f3-88d5-2c40ff4c0836-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 29 09:25:59 crc kubenswrapper[4795]: I1129 09:25:59.315068 4795 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/95552069-4919-43f3-88d5-2c40ff4c0836-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 29 09:25:59 crc kubenswrapper[4795]: I1129 09:25:59.315079 4795 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95552069-4919-43f3-88d5-2c40ff4c0836-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 09:25:59 crc kubenswrapper[4795]: I1129 09:25:59.315092 4795 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/95552069-4919-43f3-88d5-2c40ff4c0836-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 29 09:25:59 crc kubenswrapper[4795]: I1129 09:25:59.315100 4795 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/95552069-4919-43f3-88d5-2c40ff4c0836-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 09:25:59 crc kubenswrapper[4795]: I1129 09:25:59.315109 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbc2r\" (UniqueName: \"kubernetes.io/projected/95552069-4919-43f3-88d5-2c40ff4c0836-kube-api-access-wbc2r\") on node \"crc\" DevicePath \"\"" Nov 29 09:25:59 
crc kubenswrapper[4795]: I1129 09:25:59.315118 4795 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/95552069-4919-43f3-88d5-2c40ff4c0836-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 29 09:25:59 crc kubenswrapper[4795]: I1129 09:25:59.315128 4795 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/95552069-4919-43f3-88d5-2c40ff4c0836-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 29 09:25:59 crc kubenswrapper[4795]: I1129 09:25:59.340140 4795 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 29 09:25:59 crc kubenswrapper[4795]: I1129 09:25:59.418218 4795 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 29 09:25:59 crc kubenswrapper[4795]: I1129 09:25:59.561528 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"95552069-4919-43f3-88d5-2c40ff4c0836","Type":"ContainerDied","Data":"804c0f98417f267aeda32ae3ee3fc5ae8afa0cf36735633e62d8e888830e5425"} Nov 29 09:25:59 crc kubenswrapper[4795]: I1129 09:25:59.561825 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="804c0f98417f267aeda32ae3ee3fc5ae8afa0cf36735633e62d8e888830e5425" Nov 29 09:25:59 crc kubenswrapper[4795]: I1129 09:25:59.561580 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 29 09:26:09 crc kubenswrapper[4795]: I1129 09:26:09.073166 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 29 09:26:09 crc kubenswrapper[4795]: E1129 09:26:09.074523 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82a6e1c6-63f6-4b4b-be68-7ec6403e5966" containerName="registry-server" Nov 29 09:26:09 crc kubenswrapper[4795]: I1129 09:26:09.074546 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="82a6e1c6-63f6-4b4b-be68-7ec6403e5966" containerName="registry-server" Nov 29 09:26:09 crc kubenswrapper[4795]: E1129 09:26:09.074579 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95552069-4919-43f3-88d5-2c40ff4c0836" containerName="tempest-tests-tempest-tests-runner" Nov 29 09:26:09 crc kubenswrapper[4795]: I1129 09:26:09.074623 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="95552069-4919-43f3-88d5-2c40ff4c0836" containerName="tempest-tests-tempest-tests-runner" Nov 29 09:26:09 crc kubenswrapper[4795]: E1129 09:26:09.074642 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82a6e1c6-63f6-4b4b-be68-7ec6403e5966" containerName="extract-content" Nov 29 09:26:09 crc kubenswrapper[4795]: I1129 09:26:09.074652 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="82a6e1c6-63f6-4b4b-be68-7ec6403e5966" containerName="extract-content" Nov 29 09:26:09 crc kubenswrapper[4795]: E1129 09:26:09.074665 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82a6e1c6-63f6-4b4b-be68-7ec6403e5966" containerName="extract-utilities" Nov 29 09:26:09 crc kubenswrapper[4795]: I1129 09:26:09.074673 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="82a6e1c6-63f6-4b4b-be68-7ec6403e5966" containerName="extract-utilities" Nov 29 09:26:09 crc kubenswrapper[4795]: I1129 09:26:09.074943 4795 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="82a6e1c6-63f6-4b4b-be68-7ec6403e5966" containerName="registry-server" Nov 29 09:26:09 crc kubenswrapper[4795]: I1129 09:26:09.074984 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="95552069-4919-43f3-88d5-2c40ff4c0836" containerName="tempest-tests-tempest-tests-runner" Nov 29 09:26:09 crc kubenswrapper[4795]: I1129 09:26:09.076010 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 09:26:09 crc kubenswrapper[4795]: I1129 09:26:09.080665 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-wk5x5" Nov 29 09:26:09 crc kubenswrapper[4795]: I1129 09:26:09.085801 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 29 09:26:09 crc kubenswrapper[4795]: I1129 09:26:09.248642 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdm7r\" (UniqueName: \"kubernetes.io/projected/04e8ed70-5207-4d59-8de4-b96ad0270b54-kube-api-access-wdm7r\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"04e8ed70-5207-4d59-8de4-b96ad0270b54\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 09:26:09 crc kubenswrapper[4795]: I1129 09:26:09.249117 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"04e8ed70-5207-4d59-8de4-b96ad0270b54\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 09:26:09 crc kubenswrapper[4795]: I1129 09:26:09.351115 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdm7r\" (UniqueName: 
\"kubernetes.io/projected/04e8ed70-5207-4d59-8de4-b96ad0270b54-kube-api-access-wdm7r\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"04e8ed70-5207-4d59-8de4-b96ad0270b54\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 09:26:09 crc kubenswrapper[4795]: I1129 09:26:09.351300 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"04e8ed70-5207-4d59-8de4-b96ad0270b54\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 09:26:09 crc kubenswrapper[4795]: I1129 09:26:09.352173 4795 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"04e8ed70-5207-4d59-8de4-b96ad0270b54\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 09:26:09 crc kubenswrapper[4795]: I1129 09:26:09.373044 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdm7r\" (UniqueName: \"kubernetes.io/projected/04e8ed70-5207-4d59-8de4-b96ad0270b54-kube-api-access-wdm7r\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"04e8ed70-5207-4d59-8de4-b96ad0270b54\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 09:26:09 crc kubenswrapper[4795]: I1129 09:26:09.418447 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"04e8ed70-5207-4d59-8de4-b96ad0270b54\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 09:26:09 
crc kubenswrapper[4795]: I1129 09:26:09.702682 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 09:26:10 crc kubenswrapper[4795]: I1129 09:26:10.195757 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 29 09:26:10 crc kubenswrapper[4795]: W1129 09:26:10.202171 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04e8ed70_5207_4d59_8de4_b96ad0270b54.slice/crio-53b8919391142dcc675bb948503f5dfadeee3e84520117d6886c9fd6b67392d8 WatchSource:0}: Error finding container 53b8919391142dcc675bb948503f5dfadeee3e84520117d6886c9fd6b67392d8: Status 404 returned error can't find the container with id 53b8919391142dcc675bb948503f5dfadeee3e84520117d6886c9fd6b67392d8 Nov 29 09:26:10 crc kubenswrapper[4795]: I1129 09:26:10.207139 4795 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 09:26:10 crc kubenswrapper[4795]: I1129 09:26:10.688056 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"04e8ed70-5207-4d59-8de4-b96ad0270b54","Type":"ContainerStarted","Data":"53b8919391142dcc675bb948503f5dfadeee3e84520117d6886c9fd6b67392d8"} Nov 29 09:26:11 crc kubenswrapper[4795]: I1129 09:26:11.712657 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"04e8ed70-5207-4d59-8de4-b96ad0270b54","Type":"ContainerStarted","Data":"b8fccb7715bac3c0f1d0d9912942873058271fbf1697de64a32f1d397f838097"} Nov 29 09:26:11 crc kubenswrapper[4795]: I1129 09:26:11.738515 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.718642965 
podStartE2EDuration="2.738497671s" podCreationTimestamp="2025-11-29 09:26:09 +0000 UTC" firstStartedPulling="2025-11-29 09:26:10.206779237 +0000 UTC m=+6416.182355027" lastFinishedPulling="2025-11-29 09:26:11.226633943 +0000 UTC m=+6417.202209733" observedRunningTime="2025-11-29 09:26:11.733026236 +0000 UTC m=+6417.708602026" watchObservedRunningTime="2025-11-29 09:26:11.738497671 +0000 UTC m=+6417.714073461" Nov 29 09:26:42 crc kubenswrapper[4795]: I1129 09:26:42.487640 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qq8lk/must-gather-kwlk8"] Nov 29 09:26:42 crc kubenswrapper[4795]: I1129 09:26:42.490830 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qq8lk/must-gather-kwlk8" Nov 29 09:26:42 crc kubenswrapper[4795]: I1129 09:26:42.493017 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qq8lk"/"openshift-service-ca.crt" Nov 29 09:26:42 crc kubenswrapper[4795]: I1129 09:26:42.493187 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qq8lk"/"kube-root-ca.crt" Nov 29 09:26:42 crc kubenswrapper[4795]: I1129 09:26:42.494774 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-qq8lk"/"default-dockercfg-sb6pl" Nov 29 09:26:42 crc kubenswrapper[4795]: I1129 09:26:42.504099 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qq8lk/must-gather-kwlk8"] Nov 29 09:26:42 crc kubenswrapper[4795]: I1129 09:26:42.613493 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8ca4938e-5980-4d8a-98f4-e379b958f3e7-must-gather-output\") pod \"must-gather-kwlk8\" (UID: \"8ca4938e-5980-4d8a-98f4-e379b958f3e7\") " pod="openshift-must-gather-qq8lk/must-gather-kwlk8" Nov 29 09:26:42 crc kubenswrapper[4795]: I1129 09:26:42.614327 4795 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhmps\" (UniqueName: \"kubernetes.io/projected/8ca4938e-5980-4d8a-98f4-e379b958f3e7-kube-api-access-zhmps\") pod \"must-gather-kwlk8\" (UID: \"8ca4938e-5980-4d8a-98f4-e379b958f3e7\") " pod="openshift-must-gather-qq8lk/must-gather-kwlk8" Nov 29 09:26:42 crc kubenswrapper[4795]: I1129 09:26:42.733439 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8ca4938e-5980-4d8a-98f4-e379b958f3e7-must-gather-output\") pod \"must-gather-kwlk8\" (UID: \"8ca4938e-5980-4d8a-98f4-e379b958f3e7\") " pod="openshift-must-gather-qq8lk/must-gather-kwlk8" Nov 29 09:26:42 crc kubenswrapper[4795]: I1129 09:26:42.733627 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhmps\" (UniqueName: \"kubernetes.io/projected/8ca4938e-5980-4d8a-98f4-e379b958f3e7-kube-api-access-zhmps\") pod \"must-gather-kwlk8\" (UID: \"8ca4938e-5980-4d8a-98f4-e379b958f3e7\") " pod="openshift-must-gather-qq8lk/must-gather-kwlk8" Nov 29 09:26:42 crc kubenswrapper[4795]: I1129 09:26:42.734004 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8ca4938e-5980-4d8a-98f4-e379b958f3e7-must-gather-output\") pod \"must-gather-kwlk8\" (UID: \"8ca4938e-5980-4d8a-98f4-e379b958f3e7\") " pod="openshift-must-gather-qq8lk/must-gather-kwlk8" Nov 29 09:26:42 crc kubenswrapper[4795]: I1129 09:26:42.778377 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhmps\" (UniqueName: \"kubernetes.io/projected/8ca4938e-5980-4d8a-98f4-e379b958f3e7-kube-api-access-zhmps\") pod \"must-gather-kwlk8\" (UID: \"8ca4938e-5980-4d8a-98f4-e379b958f3e7\") " pod="openshift-must-gather-qq8lk/must-gather-kwlk8" Nov 29 09:26:42 crc kubenswrapper[4795]: I1129 09:26:42.814266 4795 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qq8lk/must-gather-kwlk8" Nov 29 09:26:43 crc kubenswrapper[4795]: I1129 09:26:43.466038 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qq8lk/must-gather-kwlk8"] Nov 29 09:26:43 crc kubenswrapper[4795]: W1129 09:26:43.481466 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ca4938e_5980_4d8a_98f4_e379b958f3e7.slice/crio-413544ec9da1f68d44b987768826d306008e3a5d84ebc455b317577955c9f960 WatchSource:0}: Error finding container 413544ec9da1f68d44b987768826d306008e3a5d84ebc455b317577955c9f960: Status 404 returned error can't find the container with id 413544ec9da1f68d44b987768826d306008e3a5d84ebc455b317577955c9f960 Nov 29 09:26:43 crc kubenswrapper[4795]: I1129 09:26:43.702471 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qq8lk/must-gather-kwlk8" event={"ID":"8ca4938e-5980-4d8a-98f4-e379b958f3e7","Type":"ContainerStarted","Data":"413544ec9da1f68d44b987768826d306008e3a5d84ebc455b317577955c9f960"} Nov 29 09:26:49 crc kubenswrapper[4795]: I1129 09:26:49.772157 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qq8lk/must-gather-kwlk8" event={"ID":"8ca4938e-5980-4d8a-98f4-e379b958f3e7","Type":"ContainerStarted","Data":"1b8c034f730364515afee56e054c8860b9d06819ad971b33c9be14bbfe3db16b"} Nov 29 09:26:49 crc kubenswrapper[4795]: I1129 09:26:49.772572 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qq8lk/must-gather-kwlk8" event={"ID":"8ca4938e-5980-4d8a-98f4-e379b958f3e7","Type":"ContainerStarted","Data":"701b2d05de16b7ffd060c0920f1dcd05dc7c6605c2b84c2cf334e54fd26b6186"} Nov 29 09:26:49 crc kubenswrapper[4795]: I1129 09:26:49.806521 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qq8lk/must-gather-kwlk8" 
podStartSLOduration=2.293985344 podStartE2EDuration="7.806499429s" podCreationTimestamp="2025-11-29 09:26:42 +0000 UTC" firstStartedPulling="2025-11-29 09:26:43.485651844 +0000 UTC m=+6449.461227634" lastFinishedPulling="2025-11-29 09:26:48.998165909 +0000 UTC m=+6454.973741719" observedRunningTime="2025-11-29 09:26:49.794098418 +0000 UTC m=+6455.769674228" watchObservedRunningTime="2025-11-29 09:26:49.806499429 +0000 UTC m=+6455.782075229" Nov 29 09:26:53 crc kubenswrapper[4795]: I1129 09:26:53.686328 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qq8lk/crc-debug-fq6s7"] Nov 29 09:26:53 crc kubenswrapper[4795]: I1129 09:26:53.689061 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qq8lk/crc-debug-fq6s7" Nov 29 09:26:53 crc kubenswrapper[4795]: I1129 09:26:53.704174 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1ceed53d-e157-4118-9c3c-e91f48bcd255-host\") pod \"crc-debug-fq6s7\" (UID: \"1ceed53d-e157-4118-9c3c-e91f48bcd255\") " pod="openshift-must-gather-qq8lk/crc-debug-fq6s7" Nov 29 09:26:53 crc kubenswrapper[4795]: I1129 09:26:53.704574 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhdxx\" (UniqueName: \"kubernetes.io/projected/1ceed53d-e157-4118-9c3c-e91f48bcd255-kube-api-access-jhdxx\") pod \"crc-debug-fq6s7\" (UID: \"1ceed53d-e157-4118-9c3c-e91f48bcd255\") " pod="openshift-must-gather-qq8lk/crc-debug-fq6s7" Nov 29 09:26:53 crc kubenswrapper[4795]: I1129 09:26:53.806997 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhdxx\" (UniqueName: \"kubernetes.io/projected/1ceed53d-e157-4118-9c3c-e91f48bcd255-kube-api-access-jhdxx\") pod \"crc-debug-fq6s7\" (UID: \"1ceed53d-e157-4118-9c3c-e91f48bcd255\") " pod="openshift-must-gather-qq8lk/crc-debug-fq6s7" Nov 29 
09:26:53 crc kubenswrapper[4795]: I1129 09:26:53.807460 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1ceed53d-e157-4118-9c3c-e91f48bcd255-host\") pod \"crc-debug-fq6s7\" (UID: \"1ceed53d-e157-4118-9c3c-e91f48bcd255\") " pod="openshift-must-gather-qq8lk/crc-debug-fq6s7" Nov 29 09:26:53 crc kubenswrapper[4795]: I1129 09:26:53.807934 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1ceed53d-e157-4118-9c3c-e91f48bcd255-host\") pod \"crc-debug-fq6s7\" (UID: \"1ceed53d-e157-4118-9c3c-e91f48bcd255\") " pod="openshift-must-gather-qq8lk/crc-debug-fq6s7" Nov 29 09:26:53 crc kubenswrapper[4795]: I1129 09:26:53.858685 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhdxx\" (UniqueName: \"kubernetes.io/projected/1ceed53d-e157-4118-9c3c-e91f48bcd255-kube-api-access-jhdxx\") pod \"crc-debug-fq6s7\" (UID: \"1ceed53d-e157-4118-9c3c-e91f48bcd255\") " pod="openshift-must-gather-qq8lk/crc-debug-fq6s7" Nov 29 09:26:54 crc kubenswrapper[4795]: I1129 09:26:54.007863 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qq8lk/crc-debug-fq6s7" Nov 29 09:26:54 crc kubenswrapper[4795]: W1129 09:26:54.053755 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ceed53d_e157_4118_9c3c_e91f48bcd255.slice/crio-2d5336077e32b77d953a2848a6aa1ea3b0c257c42af4ac4c1a8831aba9acb88d WatchSource:0}: Error finding container 2d5336077e32b77d953a2848a6aa1ea3b0c257c42af4ac4c1a8831aba9acb88d: Status 404 returned error can't find the container with id 2d5336077e32b77d953a2848a6aa1ea3b0c257c42af4ac4c1a8831aba9acb88d Nov 29 09:26:54 crc kubenswrapper[4795]: I1129 09:26:54.837533 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qq8lk/crc-debug-fq6s7" event={"ID":"1ceed53d-e157-4118-9c3c-e91f48bcd255","Type":"ContainerStarted","Data":"2d5336077e32b77d953a2848a6aa1ea3b0c257c42af4ac4c1a8831aba9acb88d"} Nov 29 09:27:06 crc kubenswrapper[4795]: I1129 09:27:06.989308 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qq8lk/crc-debug-fq6s7" event={"ID":"1ceed53d-e157-4118-9c3c-e91f48bcd255","Type":"ContainerStarted","Data":"4907b8cc129e963e766d44001254a2d5bb59508eaa928cadbf46bbc6554bc35d"} Nov 29 09:27:07 crc kubenswrapper[4795]: I1129 09:27:07.030461 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qq8lk/crc-debug-fq6s7" podStartSLOduration=1.58093012 podStartE2EDuration="14.030435078s" podCreationTimestamp="2025-11-29 09:26:53 +0000 UTC" firstStartedPulling="2025-11-29 09:26:54.055696083 +0000 UTC m=+6460.031271873" lastFinishedPulling="2025-11-29 09:27:06.505201041 +0000 UTC m=+6472.480776831" observedRunningTime="2025-11-29 09:27:07.006082929 +0000 UTC m=+6472.981658719" watchObservedRunningTime="2025-11-29 09:27:07.030435078 +0000 UTC m=+6473.006010868" Nov 29 09:27:30 crc kubenswrapper[4795]: I1129 09:27:30.402056 4795 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-b7lxz"] Nov 29 09:27:30 crc kubenswrapper[4795]: I1129 09:27:30.423118 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b7lxz" Nov 29 09:27:30 crc kubenswrapper[4795]: I1129 09:27:30.426592 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b7lxz"] Nov 29 09:27:30 crc kubenswrapper[4795]: I1129 09:27:30.528994 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/467a0f34-685f-460c-8594-235ff491c2e9-utilities\") pod \"redhat-operators-b7lxz\" (UID: \"467a0f34-685f-460c-8594-235ff491c2e9\") " pod="openshift-marketplace/redhat-operators-b7lxz" Nov 29 09:27:30 crc kubenswrapper[4795]: I1129 09:27:30.529495 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7djwd\" (UniqueName: \"kubernetes.io/projected/467a0f34-685f-460c-8594-235ff491c2e9-kube-api-access-7djwd\") pod \"redhat-operators-b7lxz\" (UID: \"467a0f34-685f-460c-8594-235ff491c2e9\") " pod="openshift-marketplace/redhat-operators-b7lxz" Nov 29 09:27:30 crc kubenswrapper[4795]: I1129 09:27:30.529736 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/467a0f34-685f-460c-8594-235ff491c2e9-catalog-content\") pod \"redhat-operators-b7lxz\" (UID: \"467a0f34-685f-460c-8594-235ff491c2e9\") " pod="openshift-marketplace/redhat-operators-b7lxz" Nov 29 09:27:30 crc kubenswrapper[4795]: I1129 09:27:30.631438 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/467a0f34-685f-460c-8594-235ff491c2e9-utilities\") pod \"redhat-operators-b7lxz\" (UID: \"467a0f34-685f-460c-8594-235ff491c2e9\") " 
pod="openshift-marketplace/redhat-operators-b7lxz" Nov 29 09:27:30 crc kubenswrapper[4795]: I1129 09:27:30.631715 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7djwd\" (UniqueName: \"kubernetes.io/projected/467a0f34-685f-460c-8594-235ff491c2e9-kube-api-access-7djwd\") pod \"redhat-operators-b7lxz\" (UID: \"467a0f34-685f-460c-8594-235ff491c2e9\") " pod="openshift-marketplace/redhat-operators-b7lxz" Nov 29 09:27:30 crc kubenswrapper[4795]: I1129 09:27:30.631810 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/467a0f34-685f-460c-8594-235ff491c2e9-catalog-content\") pod \"redhat-operators-b7lxz\" (UID: \"467a0f34-685f-460c-8594-235ff491c2e9\") " pod="openshift-marketplace/redhat-operators-b7lxz" Nov 29 09:27:30 crc kubenswrapper[4795]: I1129 09:27:30.631917 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/467a0f34-685f-460c-8594-235ff491c2e9-utilities\") pod \"redhat-operators-b7lxz\" (UID: \"467a0f34-685f-460c-8594-235ff491c2e9\") " pod="openshift-marketplace/redhat-operators-b7lxz" Nov 29 09:27:30 crc kubenswrapper[4795]: I1129 09:27:30.632101 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/467a0f34-685f-460c-8594-235ff491c2e9-catalog-content\") pod \"redhat-operators-b7lxz\" (UID: \"467a0f34-685f-460c-8594-235ff491c2e9\") " pod="openshift-marketplace/redhat-operators-b7lxz" Nov 29 09:27:30 crc kubenswrapper[4795]: I1129 09:27:30.650708 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7djwd\" (UniqueName: \"kubernetes.io/projected/467a0f34-685f-460c-8594-235ff491c2e9-kube-api-access-7djwd\") pod \"redhat-operators-b7lxz\" (UID: \"467a0f34-685f-460c-8594-235ff491c2e9\") " pod="openshift-marketplace/redhat-operators-b7lxz" Nov 
29 09:27:30 crc kubenswrapper[4795]: I1129 09:27:30.771724 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b7lxz" Nov 29 09:27:31 crc kubenswrapper[4795]: I1129 09:27:31.344709 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b7lxz"] Nov 29 09:27:32 crc kubenswrapper[4795]: I1129 09:27:32.275899 4795 generic.go:334] "Generic (PLEG): container finished" podID="467a0f34-685f-460c-8594-235ff491c2e9" containerID="d5edde9fc0e88825be25b366b81a8c25a05e67f4366f016c3efe2ad49a7b0f60" exitCode=0 Nov 29 09:27:32 crc kubenswrapper[4795]: I1129 09:27:32.290544 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7lxz" event={"ID":"467a0f34-685f-460c-8594-235ff491c2e9","Type":"ContainerDied","Data":"d5edde9fc0e88825be25b366b81a8c25a05e67f4366f016c3efe2ad49a7b0f60"} Nov 29 09:27:32 crc kubenswrapper[4795]: I1129 09:27:32.290583 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7lxz" event={"ID":"467a0f34-685f-460c-8594-235ff491c2e9","Type":"ContainerStarted","Data":"d2ce78736f4ef1666af76249a6104714444c84b1a20fdcd2f9905e2c30dc8e12"} Nov 29 09:27:34 crc kubenswrapper[4795]: I1129 09:27:34.308813 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7lxz" event={"ID":"467a0f34-685f-460c-8594-235ff491c2e9","Type":"ContainerStarted","Data":"22b5e8489b3dd58789273dd898cc91ac84a77a2c3c0abba68555fbf9c4fdb818"} Nov 29 09:27:36 crc kubenswrapper[4795]: I1129 09:27:36.338575 4795 generic.go:334] "Generic (PLEG): container finished" podID="467a0f34-685f-460c-8594-235ff491c2e9" containerID="22b5e8489b3dd58789273dd898cc91ac84a77a2c3c0abba68555fbf9c4fdb818" exitCode=0 Nov 29 09:27:36 crc kubenswrapper[4795]: I1129 09:27:36.338660 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7lxz" 
event={"ID":"467a0f34-685f-460c-8594-235ff491c2e9","Type":"ContainerDied","Data":"22b5e8489b3dd58789273dd898cc91ac84a77a2c3c0abba68555fbf9c4fdb818"} Nov 29 09:27:37 crc kubenswrapper[4795]: I1129 09:27:37.353998 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7lxz" event={"ID":"467a0f34-685f-460c-8594-235ff491c2e9","Type":"ContainerStarted","Data":"15f9ff6fe6864e962a420d4ca5ea05dc1f2e5a37e4f321f2ece2128561bb5137"} Nov 29 09:27:37 crc kubenswrapper[4795]: I1129 09:27:37.378889 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b7lxz" podStartSLOduration=2.8835542690000002 podStartE2EDuration="7.378867099s" podCreationTimestamp="2025-11-29 09:27:30 +0000 UTC" firstStartedPulling="2025-11-29 09:27:32.278386739 +0000 UTC m=+6498.253962529" lastFinishedPulling="2025-11-29 09:27:36.773699579 +0000 UTC m=+6502.749275359" observedRunningTime="2025-11-29 09:27:37.369745141 +0000 UTC m=+6503.345320951" watchObservedRunningTime="2025-11-29 09:27:37.378867099 +0000 UTC m=+6503.354442899" Nov 29 09:27:40 crc kubenswrapper[4795]: I1129 09:27:40.772154 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b7lxz" Nov 29 09:27:40 crc kubenswrapper[4795]: I1129 09:27:40.772834 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b7lxz" Nov 29 09:27:41 crc kubenswrapper[4795]: I1129 09:27:41.831621 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b7lxz" podUID="467a0f34-685f-460c-8594-235ff491c2e9" containerName="registry-server" probeResult="failure" output=< Nov 29 09:27:41 crc kubenswrapper[4795]: timeout: failed to connect service ":50051" within 1s Nov 29 09:27:41 crc kubenswrapper[4795]: > Nov 29 09:27:50 crc kubenswrapper[4795]: I1129 09:27:50.845694 4795 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b7lxz" Nov 29 09:27:50 crc kubenswrapper[4795]: I1129 09:27:50.923401 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b7lxz" Nov 29 09:27:51 crc kubenswrapper[4795]: I1129 09:27:51.091676 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b7lxz"] Nov 29 09:27:52 crc kubenswrapper[4795]: I1129 09:27:52.526242 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b7lxz" podUID="467a0f34-685f-460c-8594-235ff491c2e9" containerName="registry-server" containerID="cri-o://15f9ff6fe6864e962a420d4ca5ea05dc1f2e5a37e4f321f2ece2128561bb5137" gracePeriod=2 Nov 29 09:27:53 crc kubenswrapper[4795]: I1129 09:27:53.093957 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b7lxz" Nov 29 09:27:53 crc kubenswrapper[4795]: I1129 09:27:53.268325 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/467a0f34-685f-460c-8594-235ff491c2e9-catalog-content\") pod \"467a0f34-685f-460c-8594-235ff491c2e9\" (UID: \"467a0f34-685f-460c-8594-235ff491c2e9\") " Nov 29 09:27:53 crc kubenswrapper[4795]: I1129 09:27:53.268374 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7djwd\" (UniqueName: \"kubernetes.io/projected/467a0f34-685f-460c-8594-235ff491c2e9-kube-api-access-7djwd\") pod \"467a0f34-685f-460c-8594-235ff491c2e9\" (UID: \"467a0f34-685f-460c-8594-235ff491c2e9\") " Nov 29 09:27:53 crc kubenswrapper[4795]: I1129 09:27:53.268488 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/467a0f34-685f-460c-8594-235ff491c2e9-utilities\") pod 
\"467a0f34-685f-460c-8594-235ff491c2e9\" (UID: \"467a0f34-685f-460c-8594-235ff491c2e9\") " Nov 29 09:27:53 crc kubenswrapper[4795]: I1129 09:27:53.270202 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/467a0f34-685f-460c-8594-235ff491c2e9-utilities" (OuterVolumeSpecName: "utilities") pod "467a0f34-685f-460c-8594-235ff491c2e9" (UID: "467a0f34-685f-460c-8594-235ff491c2e9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 09:27:53 crc kubenswrapper[4795]: I1129 09:27:53.278258 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/467a0f34-685f-460c-8594-235ff491c2e9-kube-api-access-7djwd" (OuterVolumeSpecName: "kube-api-access-7djwd") pod "467a0f34-685f-460c-8594-235ff491c2e9" (UID: "467a0f34-685f-460c-8594-235ff491c2e9"). InnerVolumeSpecName "kube-api-access-7djwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 09:27:53 crc kubenswrapper[4795]: I1129 09:27:53.371409 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/467a0f34-685f-460c-8594-235ff491c2e9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "467a0f34-685f-460c-8594-235ff491c2e9" (UID: "467a0f34-685f-460c-8594-235ff491c2e9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 09:27:53 crc kubenswrapper[4795]: I1129 09:27:53.372750 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/467a0f34-685f-460c-8594-235ff491c2e9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 09:27:53 crc kubenswrapper[4795]: I1129 09:27:53.373130 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7djwd\" (UniqueName: \"kubernetes.io/projected/467a0f34-685f-460c-8594-235ff491c2e9-kube-api-access-7djwd\") on node \"crc\" DevicePath \"\"" Nov 29 09:27:53 crc kubenswrapper[4795]: I1129 09:27:53.373147 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/467a0f34-685f-460c-8594-235ff491c2e9-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 09:27:53 crc kubenswrapper[4795]: I1129 09:27:53.538984 4795 generic.go:334] "Generic (PLEG): container finished" podID="467a0f34-685f-460c-8594-235ff491c2e9" containerID="15f9ff6fe6864e962a420d4ca5ea05dc1f2e5a37e4f321f2ece2128561bb5137" exitCode=0 Nov 29 09:27:53 crc kubenswrapper[4795]: I1129 09:27:53.539039 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7lxz" event={"ID":"467a0f34-685f-460c-8594-235ff491c2e9","Type":"ContainerDied","Data":"15f9ff6fe6864e962a420d4ca5ea05dc1f2e5a37e4f321f2ece2128561bb5137"} Nov 29 09:27:53 crc kubenswrapper[4795]: I1129 09:27:53.539075 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7lxz" event={"ID":"467a0f34-685f-460c-8594-235ff491c2e9","Type":"ContainerDied","Data":"d2ce78736f4ef1666af76249a6104714444c84b1a20fdcd2f9905e2c30dc8e12"} Nov 29 09:27:53 crc kubenswrapper[4795]: I1129 09:27:53.539098 4795 scope.go:117] "RemoveContainer" containerID="15f9ff6fe6864e962a420d4ca5ea05dc1f2e5a37e4f321f2ece2128561bb5137" Nov 29 09:27:53 crc kubenswrapper[4795]: I1129 09:27:53.539304 
4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b7lxz" Nov 29 09:27:53 crc kubenswrapper[4795]: I1129 09:27:53.570735 4795 scope.go:117] "RemoveContainer" containerID="22b5e8489b3dd58789273dd898cc91ac84a77a2c3c0abba68555fbf9c4fdb818" Nov 29 09:27:53 crc kubenswrapper[4795]: I1129 09:27:53.579517 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b7lxz"] Nov 29 09:27:53 crc kubenswrapper[4795]: I1129 09:27:53.594731 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b7lxz"] Nov 29 09:27:53 crc kubenswrapper[4795]: I1129 09:27:53.595049 4795 scope.go:117] "RemoveContainer" containerID="d5edde9fc0e88825be25b366b81a8c25a05e67f4366f016c3efe2ad49a7b0f60" Nov 29 09:27:53 crc kubenswrapper[4795]: I1129 09:27:53.658604 4795 scope.go:117] "RemoveContainer" containerID="15f9ff6fe6864e962a420d4ca5ea05dc1f2e5a37e4f321f2ece2128561bb5137" Nov 29 09:27:53 crc kubenswrapper[4795]: E1129 09:27:53.659028 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15f9ff6fe6864e962a420d4ca5ea05dc1f2e5a37e4f321f2ece2128561bb5137\": container with ID starting with 15f9ff6fe6864e962a420d4ca5ea05dc1f2e5a37e4f321f2ece2128561bb5137 not found: ID does not exist" containerID="15f9ff6fe6864e962a420d4ca5ea05dc1f2e5a37e4f321f2ece2128561bb5137" Nov 29 09:27:53 crc kubenswrapper[4795]: I1129 09:27:53.659065 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15f9ff6fe6864e962a420d4ca5ea05dc1f2e5a37e4f321f2ece2128561bb5137"} err="failed to get container status \"15f9ff6fe6864e962a420d4ca5ea05dc1f2e5a37e4f321f2ece2128561bb5137\": rpc error: code = NotFound desc = could not find container \"15f9ff6fe6864e962a420d4ca5ea05dc1f2e5a37e4f321f2ece2128561bb5137\": container with ID starting with 
15f9ff6fe6864e962a420d4ca5ea05dc1f2e5a37e4f321f2ece2128561bb5137 not found: ID does not exist" Nov 29 09:27:53 crc kubenswrapper[4795]: I1129 09:27:53.659087 4795 scope.go:117] "RemoveContainer" containerID="22b5e8489b3dd58789273dd898cc91ac84a77a2c3c0abba68555fbf9c4fdb818" Nov 29 09:27:53 crc kubenswrapper[4795]: E1129 09:27:53.659398 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22b5e8489b3dd58789273dd898cc91ac84a77a2c3c0abba68555fbf9c4fdb818\": container with ID starting with 22b5e8489b3dd58789273dd898cc91ac84a77a2c3c0abba68555fbf9c4fdb818 not found: ID does not exist" containerID="22b5e8489b3dd58789273dd898cc91ac84a77a2c3c0abba68555fbf9c4fdb818" Nov 29 09:27:53 crc kubenswrapper[4795]: I1129 09:27:53.659420 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22b5e8489b3dd58789273dd898cc91ac84a77a2c3c0abba68555fbf9c4fdb818"} err="failed to get container status \"22b5e8489b3dd58789273dd898cc91ac84a77a2c3c0abba68555fbf9c4fdb818\": rpc error: code = NotFound desc = could not find container \"22b5e8489b3dd58789273dd898cc91ac84a77a2c3c0abba68555fbf9c4fdb818\": container with ID starting with 22b5e8489b3dd58789273dd898cc91ac84a77a2c3c0abba68555fbf9c4fdb818 not found: ID does not exist" Nov 29 09:27:53 crc kubenswrapper[4795]: I1129 09:27:53.659434 4795 scope.go:117] "RemoveContainer" containerID="d5edde9fc0e88825be25b366b81a8c25a05e67f4366f016c3efe2ad49a7b0f60" Nov 29 09:27:53 crc kubenswrapper[4795]: E1129 09:27:53.659712 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5edde9fc0e88825be25b366b81a8c25a05e67f4366f016c3efe2ad49a7b0f60\": container with ID starting with d5edde9fc0e88825be25b366b81a8c25a05e67f4366f016c3efe2ad49a7b0f60 not found: ID does not exist" containerID="d5edde9fc0e88825be25b366b81a8c25a05e67f4366f016c3efe2ad49a7b0f60" Nov 29 09:27:53 crc 
kubenswrapper[4795]: I1129 09:27:53.659762 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5edde9fc0e88825be25b366b81a8c25a05e67f4366f016c3efe2ad49a7b0f60"} err="failed to get container status \"d5edde9fc0e88825be25b366b81a8c25a05e67f4366f016c3efe2ad49a7b0f60\": rpc error: code = NotFound desc = could not find container \"d5edde9fc0e88825be25b366b81a8c25a05e67f4366f016c3efe2ad49a7b0f60\": container with ID starting with d5edde9fc0e88825be25b366b81a8c25a05e67f4366f016c3efe2ad49a7b0f60 not found: ID does not exist" Nov 29 09:27:54 crc kubenswrapper[4795]: I1129 09:27:54.296519 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="467a0f34-685f-460c-8594-235ff491c2e9" path="/var/lib/kubelet/pods/467a0f34-685f-460c-8594-235ff491c2e9/volumes" Nov 29 09:27:55 crc kubenswrapper[4795]: I1129 09:27:55.571213 4795 generic.go:334] "Generic (PLEG): container finished" podID="1ceed53d-e157-4118-9c3c-e91f48bcd255" containerID="4907b8cc129e963e766d44001254a2d5bb59508eaa928cadbf46bbc6554bc35d" exitCode=0 Nov 29 09:27:55 crc kubenswrapper[4795]: I1129 09:27:55.571812 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qq8lk/crc-debug-fq6s7" event={"ID":"1ceed53d-e157-4118-9c3c-e91f48bcd255","Type":"ContainerDied","Data":"4907b8cc129e963e766d44001254a2d5bb59508eaa928cadbf46bbc6554bc35d"} Nov 29 09:27:56 crc kubenswrapper[4795]: I1129 09:27:56.703949 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qq8lk/crc-debug-fq6s7" Nov 29 09:27:56 crc kubenswrapper[4795]: I1129 09:27:56.745223 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qq8lk/crc-debug-fq6s7"] Nov 29 09:27:56 crc kubenswrapper[4795]: I1129 09:27:56.755164 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qq8lk/crc-debug-fq6s7"] Nov 29 09:27:56 crc kubenswrapper[4795]: I1129 09:27:56.779568 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1ceed53d-e157-4118-9c3c-e91f48bcd255-host\") pod \"1ceed53d-e157-4118-9c3c-e91f48bcd255\" (UID: \"1ceed53d-e157-4118-9c3c-e91f48bcd255\") " Nov 29 09:27:56 crc kubenswrapper[4795]: I1129 09:27:56.779694 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ceed53d-e157-4118-9c3c-e91f48bcd255-host" (OuterVolumeSpecName: "host") pod "1ceed53d-e157-4118-9c3c-e91f48bcd255" (UID: "1ceed53d-e157-4118-9c3c-e91f48bcd255"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 09:27:56 crc kubenswrapper[4795]: I1129 09:27:56.779875 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhdxx\" (UniqueName: \"kubernetes.io/projected/1ceed53d-e157-4118-9c3c-e91f48bcd255-kube-api-access-jhdxx\") pod \"1ceed53d-e157-4118-9c3c-e91f48bcd255\" (UID: \"1ceed53d-e157-4118-9c3c-e91f48bcd255\") " Nov 29 09:27:56 crc kubenswrapper[4795]: I1129 09:27:56.780550 4795 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1ceed53d-e157-4118-9c3c-e91f48bcd255-host\") on node \"crc\" DevicePath \"\"" Nov 29 09:27:56 crc kubenswrapper[4795]: I1129 09:27:56.802495 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ceed53d-e157-4118-9c3c-e91f48bcd255-kube-api-access-jhdxx" (OuterVolumeSpecName: "kube-api-access-jhdxx") pod "1ceed53d-e157-4118-9c3c-e91f48bcd255" (UID: "1ceed53d-e157-4118-9c3c-e91f48bcd255"). InnerVolumeSpecName "kube-api-access-jhdxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 09:27:56 crc kubenswrapper[4795]: I1129 09:27:56.882927 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhdxx\" (UniqueName: \"kubernetes.io/projected/1ceed53d-e157-4118-9c3c-e91f48bcd255-kube-api-access-jhdxx\") on node \"crc\" DevicePath \"\"" Nov 29 09:27:57 crc kubenswrapper[4795]: I1129 09:27:57.593911 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d5336077e32b77d953a2848a6aa1ea3b0c257c42af4ac4c1a8831aba9acb88d" Nov 29 09:27:57 crc kubenswrapper[4795]: I1129 09:27:57.594005 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qq8lk/crc-debug-fq6s7" Nov 29 09:27:57 crc kubenswrapper[4795]: I1129 09:27:57.973434 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qq8lk/crc-debug-kndwj"] Nov 29 09:27:57 crc kubenswrapper[4795]: E1129 09:27:57.974022 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="467a0f34-685f-460c-8594-235ff491c2e9" containerName="extract-content" Nov 29 09:27:57 crc kubenswrapper[4795]: I1129 09:27:57.974039 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="467a0f34-685f-460c-8594-235ff491c2e9" containerName="extract-content" Nov 29 09:27:57 crc kubenswrapper[4795]: E1129 09:27:57.974062 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="467a0f34-685f-460c-8594-235ff491c2e9" containerName="extract-utilities" Nov 29 09:27:57 crc kubenswrapper[4795]: I1129 09:27:57.974070 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="467a0f34-685f-460c-8594-235ff491c2e9" containerName="extract-utilities" Nov 29 09:27:57 crc kubenswrapper[4795]: E1129 09:27:57.974118 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="467a0f34-685f-460c-8594-235ff491c2e9" containerName="registry-server" Nov 29 09:27:57 crc kubenswrapper[4795]: I1129 09:27:57.974129 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="467a0f34-685f-460c-8594-235ff491c2e9" containerName="registry-server" Nov 29 09:27:57 crc kubenswrapper[4795]: E1129 09:27:57.974147 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ceed53d-e157-4118-9c3c-e91f48bcd255" containerName="container-00" Nov 29 09:27:57 crc kubenswrapper[4795]: I1129 09:27:57.974155 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ceed53d-e157-4118-9c3c-e91f48bcd255" containerName="container-00" Nov 29 09:27:57 crc kubenswrapper[4795]: I1129 09:27:57.974455 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ceed53d-e157-4118-9c3c-e91f48bcd255" 
containerName="container-00" Nov 29 09:27:57 crc kubenswrapper[4795]: I1129 09:27:57.974486 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="467a0f34-685f-460c-8594-235ff491c2e9" containerName="registry-server" Nov 29 09:27:57 crc kubenswrapper[4795]: I1129 09:27:57.975482 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qq8lk/crc-debug-kndwj" Nov 29 09:27:58 crc kubenswrapper[4795]: I1129 09:27:58.004114 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/86f3c0af-d56b-4fea-8a04-cca5850a212b-host\") pod \"crc-debug-kndwj\" (UID: \"86f3c0af-d56b-4fea-8a04-cca5850a212b\") " pod="openshift-must-gather-qq8lk/crc-debug-kndwj" Nov 29 09:27:58 crc kubenswrapper[4795]: I1129 09:27:58.004738 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-887hd\" (UniqueName: \"kubernetes.io/projected/86f3c0af-d56b-4fea-8a04-cca5850a212b-kube-api-access-887hd\") pod \"crc-debug-kndwj\" (UID: \"86f3c0af-d56b-4fea-8a04-cca5850a212b\") " pod="openshift-must-gather-qq8lk/crc-debug-kndwj" Nov 29 09:27:58 crc kubenswrapper[4795]: I1129 09:27:58.107085 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-887hd\" (UniqueName: \"kubernetes.io/projected/86f3c0af-d56b-4fea-8a04-cca5850a212b-kube-api-access-887hd\") pod \"crc-debug-kndwj\" (UID: \"86f3c0af-d56b-4fea-8a04-cca5850a212b\") " pod="openshift-must-gather-qq8lk/crc-debug-kndwj" Nov 29 09:27:58 crc kubenswrapper[4795]: I1129 09:27:58.107178 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/86f3c0af-d56b-4fea-8a04-cca5850a212b-host\") pod \"crc-debug-kndwj\" (UID: \"86f3c0af-d56b-4fea-8a04-cca5850a212b\") " pod="openshift-must-gather-qq8lk/crc-debug-kndwj" Nov 29 09:27:58 crc 
kubenswrapper[4795]: I1129 09:27:58.107268 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/86f3c0af-d56b-4fea-8a04-cca5850a212b-host\") pod \"crc-debug-kndwj\" (UID: \"86f3c0af-d56b-4fea-8a04-cca5850a212b\") " pod="openshift-must-gather-qq8lk/crc-debug-kndwj" Nov 29 09:27:58 crc kubenswrapper[4795]: I1129 09:27:58.130844 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-887hd\" (UniqueName: \"kubernetes.io/projected/86f3c0af-d56b-4fea-8a04-cca5850a212b-kube-api-access-887hd\") pod \"crc-debug-kndwj\" (UID: \"86f3c0af-d56b-4fea-8a04-cca5850a212b\") " pod="openshift-must-gather-qq8lk/crc-debug-kndwj" Nov 29 09:27:58 crc kubenswrapper[4795]: I1129 09:27:58.288295 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ceed53d-e157-4118-9c3c-e91f48bcd255" path="/var/lib/kubelet/pods/1ceed53d-e157-4118-9c3c-e91f48bcd255/volumes" Nov 29 09:27:58 crc kubenswrapper[4795]: I1129 09:27:58.298469 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qq8lk/crc-debug-kndwj" Nov 29 09:27:58 crc kubenswrapper[4795]: I1129 09:27:58.605975 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qq8lk/crc-debug-kndwj" event={"ID":"86f3c0af-d56b-4fea-8a04-cca5850a212b","Type":"ContainerStarted","Data":"a811e8237455deddad83e479548a0339c6b643e563fd6c0ed8d84a784031b25d"} Nov 29 09:27:58 crc kubenswrapper[4795]: I1129 09:27:58.606352 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qq8lk/crc-debug-kndwj" event={"ID":"86f3c0af-d56b-4fea-8a04-cca5850a212b","Type":"ContainerStarted","Data":"287d7acb126aaf598da5eb7848c78bde9cdb95ba3f9aaa2e8102337f72f06f1d"} Nov 29 09:27:58 crc kubenswrapper[4795]: I1129 09:27:58.624906 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qq8lk/crc-debug-kndwj" podStartSLOduration=1.624889183 podStartE2EDuration="1.624889183s" podCreationTimestamp="2025-11-29 09:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:27:58.621539928 +0000 UTC m=+6524.597115728" watchObservedRunningTime="2025-11-29 09:27:58.624889183 +0000 UTC m=+6524.600464973" Nov 29 09:27:59 crc kubenswrapper[4795]: I1129 09:27:59.617089 4795 generic.go:334] "Generic (PLEG): container finished" podID="86f3c0af-d56b-4fea-8a04-cca5850a212b" containerID="a811e8237455deddad83e479548a0339c6b643e563fd6c0ed8d84a784031b25d" exitCode=0 Nov 29 09:27:59 crc kubenswrapper[4795]: I1129 09:27:59.617140 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qq8lk/crc-debug-kndwj" event={"ID":"86f3c0af-d56b-4fea-8a04-cca5850a212b","Type":"ContainerDied","Data":"a811e8237455deddad83e479548a0339c6b643e563fd6c0ed8d84a784031b25d"} Nov 29 09:28:00 crc kubenswrapper[4795]: I1129 09:28:00.748054 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qq8lk/crc-debug-kndwj" Nov 29 09:28:00 crc kubenswrapper[4795]: I1129 09:28:00.773388 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/86f3c0af-d56b-4fea-8a04-cca5850a212b-host\") pod \"86f3c0af-d56b-4fea-8a04-cca5850a212b\" (UID: \"86f3c0af-d56b-4fea-8a04-cca5850a212b\") " Nov 29 09:28:00 crc kubenswrapper[4795]: I1129 09:28:00.773490 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86f3c0af-d56b-4fea-8a04-cca5850a212b-host" (OuterVolumeSpecName: "host") pod "86f3c0af-d56b-4fea-8a04-cca5850a212b" (UID: "86f3c0af-d56b-4fea-8a04-cca5850a212b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 09:28:00 crc kubenswrapper[4795]: I1129 09:28:00.773608 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-887hd\" (UniqueName: \"kubernetes.io/projected/86f3c0af-d56b-4fea-8a04-cca5850a212b-kube-api-access-887hd\") pod \"86f3c0af-d56b-4fea-8a04-cca5850a212b\" (UID: \"86f3c0af-d56b-4fea-8a04-cca5850a212b\") " Nov 29 09:28:00 crc kubenswrapper[4795]: I1129 09:28:00.774024 4795 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/86f3c0af-d56b-4fea-8a04-cca5850a212b-host\") on node \"crc\" DevicePath \"\"" Nov 29 09:28:00 crc kubenswrapper[4795]: I1129 09:28:00.780304 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86f3c0af-d56b-4fea-8a04-cca5850a212b-kube-api-access-887hd" (OuterVolumeSpecName: "kube-api-access-887hd") pod "86f3c0af-d56b-4fea-8a04-cca5850a212b" (UID: "86f3c0af-d56b-4fea-8a04-cca5850a212b"). InnerVolumeSpecName "kube-api-access-887hd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 09:28:00 crc kubenswrapper[4795]: I1129 09:28:00.875233 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-887hd\" (UniqueName: \"kubernetes.io/projected/86f3c0af-d56b-4fea-8a04-cca5850a212b-kube-api-access-887hd\") on node \"crc\" DevicePath \"\"" Nov 29 09:28:01 crc kubenswrapper[4795]: I1129 09:28:01.090361 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qq8lk/crc-debug-kndwj"] Nov 29 09:28:01 crc kubenswrapper[4795]: I1129 09:28:01.100437 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qq8lk/crc-debug-kndwj"] Nov 29 09:28:01 crc kubenswrapper[4795]: I1129 09:28:01.642208 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="287d7acb126aaf598da5eb7848c78bde9cdb95ba3f9aaa2e8102337f72f06f1d" Nov 29 09:28:01 crc kubenswrapper[4795]: I1129 09:28:01.642236 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qq8lk/crc-debug-kndwj" Nov 29 09:28:02 crc kubenswrapper[4795]: I1129 09:28:02.292737 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86f3c0af-d56b-4fea-8a04-cca5850a212b" path="/var/lib/kubelet/pods/86f3c0af-d56b-4fea-8a04-cca5850a212b/volumes" Nov 29 09:28:02 crc kubenswrapper[4795]: I1129 09:28:02.331150 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qq8lk/crc-debug-g2679"] Nov 29 09:28:02 crc kubenswrapper[4795]: E1129 09:28:02.331611 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86f3c0af-d56b-4fea-8a04-cca5850a212b" containerName="container-00" Nov 29 09:28:02 crc kubenswrapper[4795]: I1129 09:28:02.331631 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="86f3c0af-d56b-4fea-8a04-cca5850a212b" containerName="container-00" Nov 29 09:28:02 crc kubenswrapper[4795]: I1129 09:28:02.331915 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="86f3c0af-d56b-4fea-8a04-cca5850a212b" containerName="container-00" Nov 29 09:28:02 crc kubenswrapper[4795]: I1129 09:28:02.332906 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qq8lk/crc-debug-g2679" Nov 29 09:28:02 crc kubenswrapper[4795]: I1129 09:28:02.509257 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rntwd\" (UniqueName: \"kubernetes.io/projected/df88bc28-08df-465e-a193-e51ac33ac9ed-kube-api-access-rntwd\") pod \"crc-debug-g2679\" (UID: \"df88bc28-08df-465e-a193-e51ac33ac9ed\") " pod="openshift-must-gather-qq8lk/crc-debug-g2679" Nov 29 09:28:02 crc kubenswrapper[4795]: I1129 09:28:02.509627 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/df88bc28-08df-465e-a193-e51ac33ac9ed-host\") pod \"crc-debug-g2679\" (UID: \"df88bc28-08df-465e-a193-e51ac33ac9ed\") " pod="openshift-must-gather-qq8lk/crc-debug-g2679" Nov 29 09:28:02 crc kubenswrapper[4795]: I1129 09:28:02.613512 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rntwd\" (UniqueName: \"kubernetes.io/projected/df88bc28-08df-465e-a193-e51ac33ac9ed-kube-api-access-rntwd\") pod \"crc-debug-g2679\" (UID: \"df88bc28-08df-465e-a193-e51ac33ac9ed\") " pod="openshift-must-gather-qq8lk/crc-debug-g2679" Nov 29 09:28:02 crc kubenswrapper[4795]: I1129 09:28:02.613623 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/df88bc28-08df-465e-a193-e51ac33ac9ed-host\") pod \"crc-debug-g2679\" (UID: \"df88bc28-08df-465e-a193-e51ac33ac9ed\") " pod="openshift-must-gather-qq8lk/crc-debug-g2679" Nov 29 09:28:02 crc kubenswrapper[4795]: I1129 09:28:02.613793 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/df88bc28-08df-465e-a193-e51ac33ac9ed-host\") pod \"crc-debug-g2679\" (UID: \"df88bc28-08df-465e-a193-e51ac33ac9ed\") " pod="openshift-must-gather-qq8lk/crc-debug-g2679" Nov 29 09:28:02 crc 
kubenswrapper[4795]: I1129 09:28:02.652024 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rntwd\" (UniqueName: \"kubernetes.io/projected/df88bc28-08df-465e-a193-e51ac33ac9ed-kube-api-access-rntwd\") pod \"crc-debug-g2679\" (UID: \"df88bc28-08df-465e-a193-e51ac33ac9ed\") " pod="openshift-must-gather-qq8lk/crc-debug-g2679" Nov 29 09:28:02 crc kubenswrapper[4795]: I1129 09:28:02.655787 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qq8lk/crc-debug-g2679" Nov 29 09:28:03 crc kubenswrapper[4795]: I1129 09:28:03.668857 4795 generic.go:334] "Generic (PLEG): container finished" podID="df88bc28-08df-465e-a193-e51ac33ac9ed" containerID="ebc84eab5650d865126596856cb45abaebdab08672622eebfea0722ff62e6462" exitCode=0 Nov 29 09:28:03 crc kubenswrapper[4795]: I1129 09:28:03.668950 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qq8lk/crc-debug-g2679" event={"ID":"df88bc28-08df-465e-a193-e51ac33ac9ed","Type":"ContainerDied","Data":"ebc84eab5650d865126596856cb45abaebdab08672622eebfea0722ff62e6462"} Nov 29 09:28:03 crc kubenswrapper[4795]: I1129 09:28:03.669197 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qq8lk/crc-debug-g2679" event={"ID":"df88bc28-08df-465e-a193-e51ac33ac9ed","Type":"ContainerStarted","Data":"642fe17faccbe6e2cfdb6cc550881ea0ebd130b26a630459dc110aac8ee6fef3"} Nov 29 09:28:03 crc kubenswrapper[4795]: I1129 09:28:03.717620 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qq8lk/crc-debug-g2679"] Nov 29 09:28:03 crc kubenswrapper[4795]: I1129 09:28:03.739774 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qq8lk/crc-debug-g2679"] Nov 29 09:28:04 crc kubenswrapper[4795]: I1129 09:28:04.805338 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qq8lk/crc-debug-g2679" Nov 29 09:28:04 crc kubenswrapper[4795]: I1129 09:28:04.960279 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rntwd\" (UniqueName: \"kubernetes.io/projected/df88bc28-08df-465e-a193-e51ac33ac9ed-kube-api-access-rntwd\") pod \"df88bc28-08df-465e-a193-e51ac33ac9ed\" (UID: \"df88bc28-08df-465e-a193-e51ac33ac9ed\") " Nov 29 09:28:04 crc kubenswrapper[4795]: I1129 09:28:04.960382 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/df88bc28-08df-465e-a193-e51ac33ac9ed-host\") pod \"df88bc28-08df-465e-a193-e51ac33ac9ed\" (UID: \"df88bc28-08df-465e-a193-e51ac33ac9ed\") " Nov 29 09:28:04 crc kubenswrapper[4795]: I1129 09:28:04.960516 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df88bc28-08df-465e-a193-e51ac33ac9ed-host" (OuterVolumeSpecName: "host") pod "df88bc28-08df-465e-a193-e51ac33ac9ed" (UID: "df88bc28-08df-465e-a193-e51ac33ac9ed"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 09:28:04 crc kubenswrapper[4795]: I1129 09:28:04.961444 4795 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/df88bc28-08df-465e-a193-e51ac33ac9ed-host\") on node \"crc\" DevicePath \"\"" Nov 29 09:28:04 crc kubenswrapper[4795]: I1129 09:28:04.966097 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df88bc28-08df-465e-a193-e51ac33ac9ed-kube-api-access-rntwd" (OuterVolumeSpecName: "kube-api-access-rntwd") pod "df88bc28-08df-465e-a193-e51ac33ac9ed" (UID: "df88bc28-08df-465e-a193-e51ac33ac9ed"). InnerVolumeSpecName "kube-api-access-rntwd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 09:28:05 crc kubenswrapper[4795]: I1129 09:28:05.063989 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rntwd\" (UniqueName: \"kubernetes.io/projected/df88bc28-08df-465e-a193-e51ac33ac9ed-kube-api-access-rntwd\") on node \"crc\" DevicePath \"\"" Nov 29 09:28:05 crc kubenswrapper[4795]: I1129 09:28:05.708257 4795 scope.go:117] "RemoveContainer" containerID="ebc84eab5650d865126596856cb45abaebdab08672622eebfea0722ff62e6462" Nov 29 09:28:05 crc kubenswrapper[4795]: I1129 09:28:05.708465 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qq8lk/crc-debug-g2679" Nov 29 09:28:06 crc kubenswrapper[4795]: I1129 09:28:06.290143 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df88bc28-08df-465e-a193-e51ac33ac9ed" path="/var/lib/kubelet/pods/df88bc28-08df-465e-a193-e51ac33ac9ed/volumes" Nov 29 09:28:11 crc kubenswrapper[4795]: I1129 09:28:11.941557 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 09:28:11 crc kubenswrapper[4795]: I1129 09:28:11.942280 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 09:28:30 crc kubenswrapper[4795]: I1129 09:28:30.188069 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8s2lq"] Nov 29 09:28:30 crc kubenswrapper[4795]: E1129 09:28:30.189105 4795 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="df88bc28-08df-465e-a193-e51ac33ac9ed" containerName="container-00" Nov 29 09:28:30 crc kubenswrapper[4795]: I1129 09:28:30.189120 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="df88bc28-08df-465e-a193-e51ac33ac9ed" containerName="container-00" Nov 29 09:28:30 crc kubenswrapper[4795]: I1129 09:28:30.189322 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="df88bc28-08df-465e-a193-e51ac33ac9ed" containerName="container-00" Nov 29 09:28:30 crc kubenswrapper[4795]: I1129 09:28:30.191539 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8s2lq" Nov 29 09:28:30 crc kubenswrapper[4795]: I1129 09:28:30.204271 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8s2lq"] Nov 29 09:28:30 crc kubenswrapper[4795]: I1129 09:28:30.218082 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af09520d-7496-4809-84ad-1aa4b4d69b3d-utilities\") pod \"certified-operators-8s2lq\" (UID: \"af09520d-7496-4809-84ad-1aa4b4d69b3d\") " pod="openshift-marketplace/certified-operators-8s2lq" Nov 29 09:28:30 crc kubenswrapper[4795]: I1129 09:28:30.218142 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqmlh\" (UniqueName: \"kubernetes.io/projected/af09520d-7496-4809-84ad-1aa4b4d69b3d-kube-api-access-gqmlh\") pod \"certified-operators-8s2lq\" (UID: \"af09520d-7496-4809-84ad-1aa4b4d69b3d\") " pod="openshift-marketplace/certified-operators-8s2lq" Nov 29 09:28:30 crc kubenswrapper[4795]: I1129 09:28:30.218252 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af09520d-7496-4809-84ad-1aa4b4d69b3d-catalog-content\") pod \"certified-operators-8s2lq\" (UID: 
\"af09520d-7496-4809-84ad-1aa4b4d69b3d\") " pod="openshift-marketplace/certified-operators-8s2lq" Nov 29 09:28:30 crc kubenswrapper[4795]: I1129 09:28:30.320253 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af09520d-7496-4809-84ad-1aa4b4d69b3d-utilities\") pod \"certified-operators-8s2lq\" (UID: \"af09520d-7496-4809-84ad-1aa4b4d69b3d\") " pod="openshift-marketplace/certified-operators-8s2lq" Nov 29 09:28:30 crc kubenswrapper[4795]: I1129 09:28:30.320327 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqmlh\" (UniqueName: \"kubernetes.io/projected/af09520d-7496-4809-84ad-1aa4b4d69b3d-kube-api-access-gqmlh\") pod \"certified-operators-8s2lq\" (UID: \"af09520d-7496-4809-84ad-1aa4b4d69b3d\") " pod="openshift-marketplace/certified-operators-8s2lq" Nov 29 09:28:30 crc kubenswrapper[4795]: I1129 09:28:30.320479 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af09520d-7496-4809-84ad-1aa4b4d69b3d-catalog-content\") pod \"certified-operators-8s2lq\" (UID: \"af09520d-7496-4809-84ad-1aa4b4d69b3d\") " pod="openshift-marketplace/certified-operators-8s2lq" Nov 29 09:28:30 crc kubenswrapper[4795]: I1129 09:28:30.320865 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af09520d-7496-4809-84ad-1aa4b4d69b3d-utilities\") pod \"certified-operators-8s2lq\" (UID: \"af09520d-7496-4809-84ad-1aa4b4d69b3d\") " pod="openshift-marketplace/certified-operators-8s2lq" Nov 29 09:28:30 crc kubenswrapper[4795]: I1129 09:28:30.320996 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af09520d-7496-4809-84ad-1aa4b4d69b3d-catalog-content\") pod \"certified-operators-8s2lq\" (UID: \"af09520d-7496-4809-84ad-1aa4b4d69b3d\") 
" pod="openshift-marketplace/certified-operators-8s2lq" Nov 29 09:28:30 crc kubenswrapper[4795]: I1129 09:28:30.343332 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqmlh\" (UniqueName: \"kubernetes.io/projected/af09520d-7496-4809-84ad-1aa4b4d69b3d-kube-api-access-gqmlh\") pod \"certified-operators-8s2lq\" (UID: \"af09520d-7496-4809-84ad-1aa4b4d69b3d\") " pod="openshift-marketplace/certified-operators-8s2lq" Nov 29 09:28:30 crc kubenswrapper[4795]: I1129 09:28:30.555683 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8s2lq" Nov 29 09:28:31 crc kubenswrapper[4795]: I1129 09:28:31.028681 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_5e8728db-5fe3-46f5-a628-2b3a0f708438/aodh-api/0.log" Nov 29 09:28:31 crc kubenswrapper[4795]: I1129 09:28:31.109771 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8s2lq"] Nov 29 09:28:31 crc kubenswrapper[4795]: I1129 09:28:31.223817 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_5e8728db-5fe3-46f5-a628-2b3a0f708438/aodh-evaluator/0.log" Nov 29 09:28:31 crc kubenswrapper[4795]: I1129 09:28:31.337060 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_5e8728db-5fe3-46f5-a628-2b3a0f708438/aodh-listener/0.log" Nov 29 09:28:31 crc kubenswrapper[4795]: I1129 09:28:31.340941 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_5e8728db-5fe3-46f5-a628-2b3a0f708438/aodh-notifier/0.log" Nov 29 09:28:31 crc kubenswrapper[4795]: I1129 09:28:31.577194 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-dc94996-zmk9m_6613a7a2-0f90-4a83-80ea-18e316d6338d/barbican-api/0.log" Nov 29 09:28:31 crc kubenswrapper[4795]: I1129 09:28:31.652879 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-api-dc94996-zmk9m_6613a7a2-0f90-4a83-80ea-18e316d6338d/barbican-api-log/0.log" Nov 29 09:28:31 crc kubenswrapper[4795]: I1129 09:28:31.706624 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-65899db58b-mz594_0c64caf8-e57d-495f-985c-844edea0d146/barbican-keystone-listener/0.log" Nov 29 09:28:31 crc kubenswrapper[4795]: I1129 09:28:31.856455 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6875d4f667-x5hjc_1bca86ff-c24c-4d08-b7ed-be2433fe9735/barbican-worker/0.log" Nov 29 09:28:31 crc kubenswrapper[4795]: I1129 09:28:31.913586 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-65899db58b-mz594_0c64caf8-e57d-495f-985c-844edea0d146/barbican-keystone-listener-log/0.log" Nov 29 09:28:31 crc kubenswrapper[4795]: I1129 09:28:31.926357 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6875d4f667-x5hjc_1bca86ff-c24c-4d08-b7ed-be2433fe9735/barbican-worker-log/0.log" Nov 29 09:28:32 crc kubenswrapper[4795]: I1129 09:28:32.015617 4795 generic.go:334] "Generic (PLEG): container finished" podID="af09520d-7496-4809-84ad-1aa4b4d69b3d" containerID="48f39bafa125022a810cc38b7f025a4b232655c2c07b525d34e030373a3d63d6" exitCode=0 Nov 29 09:28:32 crc kubenswrapper[4795]: I1129 09:28:32.015664 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8s2lq" event={"ID":"af09520d-7496-4809-84ad-1aa4b4d69b3d","Type":"ContainerDied","Data":"48f39bafa125022a810cc38b7f025a4b232655c2c07b525d34e030373a3d63d6"} Nov 29 09:28:32 crc kubenswrapper[4795]: I1129 09:28:32.015763 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8s2lq" event={"ID":"af09520d-7496-4809-84ad-1aa4b4d69b3d","Type":"ContainerStarted","Data":"7eaf15282bce43e9c3fbc4aeec8cd1a094b1658afd4d4021d1a2cfe0ff94c20a"} Nov 29 
09:28:32 crc kubenswrapper[4795]: I1129 09:28:32.171120 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-glkt9_5db55adf-c067-44de-ad20-4b8a138e2576/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 09:28:32 crc kubenswrapper[4795]: I1129 09:28:32.226234 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1187372d-c24e-4ca1-b985-64ba7bb4df2b/ceilometer-central-agent/0.log" Nov 29 09:28:32 crc kubenswrapper[4795]: I1129 09:28:32.334662 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1187372d-c24e-4ca1-b985-64ba7bb4df2b/ceilometer-notification-agent/0.log" Nov 29 09:28:32 crc kubenswrapper[4795]: I1129 09:28:32.374855 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1187372d-c24e-4ca1-b985-64ba7bb4df2b/sg-core/0.log" Nov 29 09:28:32 crc kubenswrapper[4795]: I1129 09:28:32.414854 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1187372d-c24e-4ca1-b985-64ba7bb4df2b/proxy-httpd/0.log" Nov 29 09:28:32 crc kubenswrapper[4795]: I1129 09:28:32.566266 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_477216f2-bd7e-4768-9a1f-53915135fbc3/cinder-api-log/0.log" Nov 29 09:28:32 crc kubenswrapper[4795]: I1129 09:28:32.627078 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_477216f2-bd7e-4768-9a1f-53915135fbc3/cinder-api/0.log" Nov 29 09:28:32 crc kubenswrapper[4795]: I1129 09:28:32.725846 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_945f619e-60af-4c36-8ec9-a98d54c15276/cinder-scheduler/0.log" Nov 29 09:28:32 crc kubenswrapper[4795]: I1129 09:28:32.869363 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_945f619e-60af-4c36-8ec9-a98d54c15276/probe/0.log" Nov 29 09:28:32 crc 
kubenswrapper[4795]: I1129 09:28:32.896299 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-5c5jq_ccffb059-764c-49a4-afd1-356ba3189628/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 09:28:33 crc kubenswrapper[4795]: I1129 09:28:33.039628 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8s2lq" event={"ID":"af09520d-7496-4809-84ad-1aa4b4d69b3d","Type":"ContainerStarted","Data":"c3f7753b3b33383272a892ac76deda1bfde0a587d3c797edeea3596c051cd8b6"} Nov 29 09:28:33 crc kubenswrapper[4795]: I1129 09:28:33.143697 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-9p64t_c19e0492-7b5e-4a23-a1aa-f09ea195448d/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 09:28:33 crc kubenswrapper[4795]: I1129 09:28:33.227009 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-wq44q_2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1/init/0.log" Nov 29 09:28:33 crc kubenswrapper[4795]: I1129 09:28:33.382922 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-wq44q_2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1/init/0.log" Nov 29 09:28:33 crc kubenswrapper[4795]: I1129 09:28:33.492481 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-wq44q_2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1/dnsmasq-dns/0.log" Nov 29 09:28:33 crc kubenswrapper[4795]: I1129 09:28:33.594106 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-lvqcr_ce2a356f-1605-4fb0-ae3c-a40094296d8f/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 09:28:34 crc kubenswrapper[4795]: I1129 09:28:34.119371 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_3625e087-5469-4cc2-b580-13d7201ff475/glance-log/0.log" Nov 29 09:28:34 crc kubenswrapper[4795]: I1129 09:28:34.180797 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_6199696f-3f60-4893-8029-6e62879319f9/glance-httpd/0.log" Nov 29 09:28:34 crc kubenswrapper[4795]: I1129 09:28:34.183877 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_3625e087-5469-4cc2-b580-13d7201ff475/glance-httpd/0.log" Nov 29 09:28:34 crc kubenswrapper[4795]: I1129 09:28:34.351949 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_6199696f-3f60-4893-8029-6e62879319f9/glance-log/0.log" Nov 29 09:28:34 crc kubenswrapper[4795]: I1129 09:28:34.829776 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-9d7d54f9b-6mtps_2d909210-4168-4e0a-967e-dfde70b1762b/heat-engine/0.log" Nov 29 09:28:35 crc kubenswrapper[4795]: I1129 09:28:35.034158 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr_96e2c266-8570-40c8-adbc-d4939bde4ad9/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 09:28:35 crc kubenswrapper[4795]: I1129 09:28:35.060280 4795 generic.go:334] "Generic (PLEG): container finished" podID="af09520d-7496-4809-84ad-1aa4b4d69b3d" containerID="c3f7753b3b33383272a892ac76deda1bfde0a587d3c797edeea3596c051cd8b6" exitCode=0 Nov 29 09:28:35 crc kubenswrapper[4795]: I1129 09:28:35.060324 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8s2lq" event={"ID":"af09520d-7496-4809-84ad-1aa4b4d69b3d","Type":"ContainerDied","Data":"c3f7753b3b33383272a892ac76deda1bfde0a587d3c797edeea3596c051cd8b6"} Nov 29 09:28:35 crc kubenswrapper[4795]: I1129 09:28:35.177993 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-tppn7_5e0dffd4-e9e3-434d-b842-5b5849bf2fa9/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 09:28:35 crc kubenswrapper[4795]: I1129 09:28:35.215042 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-94c46bb5b-pj8dm_59489eb7-639e-4155-b88d-45aee638fbaa/heat-api/0.log" Nov 29 09:28:35 crc kubenswrapper[4795]: I1129 09:28:35.297226 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-6f66c97b48-7q9bs_d6b4f039-61d1-4b2c-b912-69c1bde3e4a6/heat-cfnapi/0.log" Nov 29 09:28:35 crc kubenswrapper[4795]: I1129 09:28:35.441366 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29406781-9xlk7_3eaf8e63-a9e1-47a4-b093-d1f65a80c4db/keystone-cron/0.log" Nov 29 09:28:35 crc kubenswrapper[4795]: I1129 09:28:35.522208 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_259b25c2-59e5-4ee8-bedc-23b7423bfae6/kube-state-metrics/0.log" Nov 29 09:28:35 crc kubenswrapper[4795]: I1129 09:28:35.718672 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h_8223751f-5de9-4d5c-a9b2-200cf9c164ee/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 09:28:35 crc kubenswrapper[4795]: I1129 09:28:35.828442 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-g2krp_4700d212-5bd7-4b67-a36a-ae486608b8a8/logging-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 09:28:35 crc kubenswrapper[4795]: I1129 09:28:35.835482 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7db455fcf4-bs6l9_f0a7d947-7e48-449a-a691-63de87afc9c4/keystone-api/0.log" Nov 29 09:28:36 crc kubenswrapper[4795]: I1129 09:28:36.069045 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_mysqld-exporter-0_1f634ca3-f70b-49c4-9cc4-54fc9a0f4cde/mysqld-exporter/0.log" Nov 29 09:28:36 crc kubenswrapper[4795]: I1129 09:28:36.072206 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8s2lq" event={"ID":"af09520d-7496-4809-84ad-1aa4b4d69b3d","Type":"ContainerStarted","Data":"6e50653608904e90f634feab7fbd02ff4e393f5bf3f6a90fc67df2e31e78b8ef"} Nov 29 09:28:36 crc kubenswrapper[4795]: I1129 09:28:36.117085 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8s2lq" podStartSLOduration=2.480059673 podStartE2EDuration="6.11706684s" podCreationTimestamp="2025-11-29 09:28:30 +0000 UTC" firstStartedPulling="2025-11-29 09:28:32.017727067 +0000 UTC m=+6557.993302857" lastFinishedPulling="2025-11-29 09:28:35.654734234 +0000 UTC m=+6561.630310024" observedRunningTime="2025-11-29 09:28:36.088401499 +0000 UTC m=+6562.063977289" watchObservedRunningTime="2025-11-29 09:28:36.11706684 +0000 UTC m=+6562.092642630" Nov 29 09:28:36 crc kubenswrapper[4795]: I1129 09:28:36.412992 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97_b55cbe5f-b90e-47f3-a446-82d1577ac07d/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 09:28:36 crc kubenswrapper[4795]: I1129 09:28:36.490091 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-bcc9b4b57-btgmc_25dbe68e-5c4f-4d79-afb2-a0ac640aa889/neutron-httpd/0.log" Nov 29 09:28:36 crc kubenswrapper[4795]: I1129 09:28:36.547281 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-bcc9b4b57-btgmc_25dbe68e-5c4f-4d79-afb2-a0ac640aa889/neutron-api/0.log" Nov 29 09:28:37 crc kubenswrapper[4795]: I1129 09:28:37.152894 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell0-conductor-0_e29df3aa-49f9-4776-8b5d-6448d3032696/nova-cell0-conductor-conductor/0.log" Nov 29 09:28:37 crc kubenswrapper[4795]: I1129 09:28:37.443299 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f574bcc1-8e96-4c98-a600-1fcd846864d9/nova-api-log/0.log" Nov 29 09:28:37 crc kubenswrapper[4795]: I1129 09:28:37.567033 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_a16ef858-2118-49ef-be27-4389ab4c34dc/nova-cell1-conductor-conductor/0.log" Nov 29 09:28:37 crc kubenswrapper[4795]: I1129 09:28:37.868729 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_e9e160dc-75ec-49d4-8145-76df59c61dda/nova-cell1-novncproxy-novncproxy/0.log" Nov 29 09:28:37 crc kubenswrapper[4795]: I1129 09:28:37.945979 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-ss9hv_51f82fb3-fb43-4802-9ce6-46930382229b/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 09:28:38 crc kubenswrapper[4795]: I1129 09:28:38.036728 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f574bcc1-8e96-4c98-a600-1fcd846864d9/nova-api-api/0.log" Nov 29 09:28:38 crc kubenswrapper[4795]: I1129 09:28:38.205036 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_c23a0993-0a7b-4452-bdcc-a199abf1de88/nova-metadata-log/0.log" Nov 29 09:28:38 crc kubenswrapper[4795]: I1129 09:28:38.479860 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_7e58a5e7-cc35-47ee-af21-e80500efd523/nova-scheduler-scheduler/0.log" Nov 29 09:28:38 crc kubenswrapper[4795]: I1129 09:28:38.618061 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a9c19857-e09b-4c26-bf5a-a64655eaa024/mysql-bootstrap/0.log" Nov 29 09:28:38 crc kubenswrapper[4795]: I1129 
09:28:38.733015 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a9c19857-e09b-4c26-bf5a-a64655eaa024/mysql-bootstrap/0.log" Nov 29 09:28:38 crc kubenswrapper[4795]: I1129 09:28:38.805300 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a9c19857-e09b-4c26-bf5a-a64655eaa024/galera/0.log" Nov 29 09:28:38 crc kubenswrapper[4795]: I1129 09:28:38.965034 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a2fd879b-6f46-437e-acf0-c60e879af239/mysql-bootstrap/0.log" Nov 29 09:28:39 crc kubenswrapper[4795]: I1129 09:28:39.180470 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a2fd879b-6f46-437e-acf0-c60e879af239/galera/0.log" Nov 29 09:28:39 crc kubenswrapper[4795]: I1129 09:28:39.185830 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a2fd879b-6f46-437e-acf0-c60e879af239/mysql-bootstrap/0.log" Nov 29 09:28:39 crc kubenswrapper[4795]: I1129 09:28:39.372188 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_de5e89b3-d4a1-4ef0-bf3a-814cb09d5ade/openstackclient/0.log" Nov 29 09:28:39 crc kubenswrapper[4795]: I1129 09:28:39.490860 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-j2gnj_fb13c276-73ed-4b9b-90fd-58d6ae6e4169/openstack-network-exporter/0.log" Nov 29 09:28:39 crc kubenswrapper[4795]: I1129 09:28:39.717453 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-fjltq_e351bdf6-6e04-4bbd-bfae-e28c7bf2179f/ovsdb-server-init/0.log" Nov 29 09:28:39 crc kubenswrapper[4795]: I1129 09:28:39.908530 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-fjltq_e351bdf6-6e04-4bbd-bfae-e28c7bf2179f/ovsdb-server-init/0.log" Nov 29 09:28:39 crc kubenswrapper[4795]: I1129 09:28:39.962789 4795 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-fjltq_e351bdf6-6e04-4bbd-bfae-e28c7bf2179f/ovsdb-server/0.log" Nov 29 09:28:40 crc kubenswrapper[4795]: I1129 09:28:40.009991 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-fjltq_e351bdf6-6e04-4bbd-bfae-e28c7bf2179f/ovs-vswitchd/0.log" Nov 29 09:28:40 crc kubenswrapper[4795]: I1129 09:28:40.225395 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-r5w67_77e980be-cb41-448f-96d7-0c99fec4d400/ovn-controller/0.log" Nov 29 09:28:40 crc kubenswrapper[4795]: I1129 09:28:40.455031 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-mrqcm_1a1dfc06-678d-4418-a57f-7a9a2ba2c441/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 09:28:40 crc kubenswrapper[4795]: I1129 09:28:40.498840 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3f2bc988-6251-4a6d-95ab-8610dc2a2650/openstack-network-exporter/0.log" Nov 29 09:28:40 crc kubenswrapper[4795]: I1129 09:28:40.556858 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8s2lq" Nov 29 09:28:40 crc kubenswrapper[4795]: I1129 09:28:40.556910 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8s2lq" Nov 29 09:28:40 crc kubenswrapper[4795]: I1129 09:28:40.616015 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8s2lq" Nov 29 09:28:40 crc kubenswrapper[4795]: I1129 09:28:40.624537 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3f2bc988-6251-4a6d-95ab-8610dc2a2650/ovn-northd/0.log" Nov 29 09:28:40 crc kubenswrapper[4795]: I1129 09:28:40.731960 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-0_acd08e2d-0e1b-473c-ae31-d63d742d2061/openstack-network-exporter/0.log" Nov 29 09:28:40 crc kubenswrapper[4795]: I1129 09:28:40.851356 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_acd08e2d-0e1b-473c-ae31-d63d742d2061/ovsdbserver-nb/0.log" Nov 29 09:28:40 crc kubenswrapper[4795]: I1129 09:28:40.885113 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_c23a0993-0a7b-4452-bdcc-a199abf1de88/nova-metadata-metadata/0.log" Nov 29 09:28:40 crc kubenswrapper[4795]: I1129 09:28:40.976715 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_75918c04-f960-4321-8894-582921ced50d/openstack-network-exporter/0.log" Nov 29 09:28:41 crc kubenswrapper[4795]: I1129 09:28:41.039124 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_75918c04-f960-4321-8894-582921ced50d/ovsdbserver-sb/0.log" Nov 29 09:28:41 crc kubenswrapper[4795]: I1129 09:28:41.173398 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8s2lq" Nov 29 09:28:41 crc kubenswrapper[4795]: I1129 09:28:41.230196 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8s2lq"] Nov 29 09:28:41 crc kubenswrapper[4795]: I1129 09:28:41.409209 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_9301745a-4dd1-469a-a37f-465f65a063e4/init-config-reloader/0.log" Nov 29 09:28:41 crc kubenswrapper[4795]: I1129 09:28:41.419130 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-8686d8994d-2mhmq_e3c503b5-e625-4ba5-af4d-9ff304b3f371/placement-api/0.log" Nov 29 09:28:41 crc kubenswrapper[4795]: I1129 09:28:41.487545 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_placement-8686d8994d-2mhmq_e3c503b5-e625-4ba5-af4d-9ff304b3f371/placement-log/0.log" Nov 29 09:28:41 crc kubenswrapper[4795]: I1129 09:28:41.602261 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_9301745a-4dd1-469a-a37f-465f65a063e4/config-reloader/0.log" Nov 29 09:28:41 crc kubenswrapper[4795]: I1129 09:28:41.625726 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_9301745a-4dd1-469a-a37f-465f65a063e4/prometheus/0.log" Nov 29 09:28:41 crc kubenswrapper[4795]: I1129 09:28:41.634252 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_9301745a-4dd1-469a-a37f-465f65a063e4/init-config-reloader/0.log" Nov 29 09:28:41 crc kubenswrapper[4795]: I1129 09:28:41.675350 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_9301745a-4dd1-469a-a37f-465f65a063e4/thanos-sidecar/0.log" Nov 29 09:28:41 crc kubenswrapper[4795]: I1129 09:28:41.877115 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_85a82139-8137-40d2-a6e9-b384592f9919/setup-container/0.log" Nov 29 09:28:41 crc kubenswrapper[4795]: I1129 09:28:41.940874 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 09:28:41 crc kubenswrapper[4795]: I1129 09:28:41.940945 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 09:28:42 
crc kubenswrapper[4795]: I1129 09:28:42.014307 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_85a82139-8137-40d2-a6e9-b384592f9919/rabbitmq/0.log" Nov 29 09:28:42 crc kubenswrapper[4795]: I1129 09:28:42.014968 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_85a82139-8137-40d2-a6e9-b384592f9919/setup-container/0.log" Nov 29 09:28:42 crc kubenswrapper[4795]: I1129 09:28:42.157116 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d1c7dfa2-1b2a-438d-9378-fd998f873999/setup-container/0.log" Nov 29 09:28:42 crc kubenswrapper[4795]: I1129 09:28:42.288523 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d1c7dfa2-1b2a-438d-9378-fd998f873999/setup-container/0.log" Nov 29 09:28:42 crc kubenswrapper[4795]: I1129 09:28:42.437396 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-mcx48_7bdb8420-3c48-48f2-977d-f163da761f04/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 09:28:42 crc kubenswrapper[4795]: I1129 09:28:42.472008 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d1c7dfa2-1b2a-438d-9378-fd998f873999/rabbitmq/0.log" Nov 29 09:28:42 crc kubenswrapper[4795]: I1129 09:28:42.574473 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-vsfsq_52ed8cc8-9050-49af-ad5b-b48bc27eeb12/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 09:28:42 crc kubenswrapper[4795]: I1129 09:28:42.706747 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-r5c5p_fd6f9117-b1f3-4533-b3f6-3b614a790521/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 09:28:42 crc kubenswrapper[4795]: I1129 09:28:42.780264 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-9nkc7_2e79257d-05f2-41d6-97cb-0872075ec6bf/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 09:28:42 crc kubenswrapper[4795]: I1129 09:28:42.927815 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-mmx5g_a8a9108e-9590-423a-819e-9b009a41e91a/ssh-known-hosts-edpm-deployment/0.log" Nov 29 09:28:43 crc kubenswrapper[4795]: I1129 09:28:43.152354 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8s2lq" podUID="af09520d-7496-4809-84ad-1aa4b4d69b3d" containerName="registry-server" containerID="cri-o://6e50653608904e90f634feab7fbd02ff4e393f5bf3f6a90fc67df2e31e78b8ef" gracePeriod=2 Nov 29 09:28:43 crc kubenswrapper[4795]: I1129 09:28:43.179045 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7ff5f779bc-nzx8l_dd1d8c65-0785-455c-9991-e32eea8a9b83/proxy-server/0.log" Nov 29 09:28:43 crc kubenswrapper[4795]: I1129 09:28:43.291807 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-ww42q_60a004a5-f226-49aa-b9e7-12a384ddece6/swift-ring-rebalance/0.log" Nov 29 09:28:43 crc kubenswrapper[4795]: I1129 09:28:43.326823 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7ff5f779bc-nzx8l_dd1d8c65-0785-455c-9991-e32eea8a9b83/proxy-httpd/0.log" Nov 29 09:28:43 crc kubenswrapper[4795]: I1129 09:28:43.474417 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_28437f9f-e92e-46d7-9ffb-fcdda5dea25e/account-auditor/0.log" Nov 29 09:28:43 crc kubenswrapper[4795]: I1129 09:28:43.513640 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_28437f9f-e92e-46d7-9ffb-fcdda5dea25e/account-reaper/0.log" Nov 29 09:28:43 crc kubenswrapper[4795]: I1129 09:28:43.660449 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_28437f9f-e92e-46d7-9ffb-fcdda5dea25e/account-replicator/0.log" Nov 29 09:28:43 crc kubenswrapper[4795]: I1129 09:28:43.776735 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8s2lq" Nov 29 09:28:43 crc kubenswrapper[4795]: I1129 09:28:43.778186 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_28437f9f-e92e-46d7-9ffb-fcdda5dea25e/account-server/0.log" Nov 29 09:28:43 crc kubenswrapper[4795]: I1129 09:28:43.780391 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_28437f9f-e92e-46d7-9ffb-fcdda5dea25e/container-auditor/0.log" Nov 29 09:28:43 crc kubenswrapper[4795]: I1129 09:28:43.820151 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af09520d-7496-4809-84ad-1aa4b4d69b3d-utilities\") pod \"af09520d-7496-4809-84ad-1aa4b4d69b3d\" (UID: \"af09520d-7496-4809-84ad-1aa4b4d69b3d\") " Nov 29 09:28:43 crc kubenswrapper[4795]: I1129 09:28:43.820475 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af09520d-7496-4809-84ad-1aa4b4d69b3d-catalog-content\") pod \"af09520d-7496-4809-84ad-1aa4b4d69b3d\" (UID: \"af09520d-7496-4809-84ad-1aa4b4d69b3d\") " Nov 29 09:28:43 crc kubenswrapper[4795]: I1129 09:28:43.820509 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqmlh\" (UniqueName: \"kubernetes.io/projected/af09520d-7496-4809-84ad-1aa4b4d69b3d-kube-api-access-gqmlh\") pod \"af09520d-7496-4809-84ad-1aa4b4d69b3d\" (UID: \"af09520d-7496-4809-84ad-1aa4b4d69b3d\") " Nov 29 09:28:43 crc kubenswrapper[4795]: I1129 09:28:43.826222 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/af09520d-7496-4809-84ad-1aa4b4d69b3d-utilities" (OuterVolumeSpecName: "utilities") pod "af09520d-7496-4809-84ad-1aa4b4d69b3d" (UID: "af09520d-7496-4809-84ad-1aa4b4d69b3d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 09:28:43 crc kubenswrapper[4795]: I1129 09:28:43.831995 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af09520d-7496-4809-84ad-1aa4b4d69b3d-kube-api-access-gqmlh" (OuterVolumeSpecName: "kube-api-access-gqmlh") pod "af09520d-7496-4809-84ad-1aa4b4d69b3d" (UID: "af09520d-7496-4809-84ad-1aa4b4d69b3d"). InnerVolumeSpecName "kube-api-access-gqmlh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 09:28:43 crc kubenswrapper[4795]: I1129 09:28:43.919570 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af09520d-7496-4809-84ad-1aa4b4d69b3d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af09520d-7496-4809-84ad-1aa4b4d69b3d" (UID: "af09520d-7496-4809-84ad-1aa4b4d69b3d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 09:28:43 crc kubenswrapper[4795]: I1129 09:28:43.923109 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af09520d-7496-4809-84ad-1aa4b4d69b3d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 09:28:43 crc kubenswrapper[4795]: I1129 09:28:43.923142 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqmlh\" (UniqueName: \"kubernetes.io/projected/af09520d-7496-4809-84ad-1aa4b4d69b3d-kube-api-access-gqmlh\") on node \"crc\" DevicePath \"\"" Nov 29 09:28:43 crc kubenswrapper[4795]: I1129 09:28:43.923153 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af09520d-7496-4809-84ad-1aa4b4d69b3d-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 09:28:43 crc kubenswrapper[4795]: I1129 09:28:43.932653 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_28437f9f-e92e-46d7-9ffb-fcdda5dea25e/container-server/0.log" Nov 29 09:28:43 crc kubenswrapper[4795]: I1129 09:28:43.961607 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_28437f9f-e92e-46d7-9ffb-fcdda5dea25e/container-replicator/0.log" Nov 29 09:28:44 crc kubenswrapper[4795]: I1129 09:28:44.049435 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_28437f9f-e92e-46d7-9ffb-fcdda5dea25e/container-updater/0.log" Nov 29 09:28:44 crc kubenswrapper[4795]: I1129 09:28:44.080634 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_28437f9f-e92e-46d7-9ffb-fcdda5dea25e/object-auditor/0.log" Nov 29 09:28:44 crc kubenswrapper[4795]: I1129 09:28:44.158046 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_28437f9f-e92e-46d7-9ffb-fcdda5dea25e/object-expirer/0.log" Nov 29 09:28:44 crc kubenswrapper[4795]: I1129 09:28:44.164584 
4795 generic.go:334] "Generic (PLEG): container finished" podID="af09520d-7496-4809-84ad-1aa4b4d69b3d" containerID="6e50653608904e90f634feab7fbd02ff4e393f5bf3f6a90fc67df2e31e78b8ef" exitCode=0 Nov 29 09:28:44 crc kubenswrapper[4795]: I1129 09:28:44.164643 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8s2lq" event={"ID":"af09520d-7496-4809-84ad-1aa4b4d69b3d","Type":"ContainerDied","Data":"6e50653608904e90f634feab7fbd02ff4e393f5bf3f6a90fc67df2e31e78b8ef"} Nov 29 09:28:44 crc kubenswrapper[4795]: I1129 09:28:44.164668 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8s2lq" event={"ID":"af09520d-7496-4809-84ad-1aa4b4d69b3d","Type":"ContainerDied","Data":"7eaf15282bce43e9c3fbc4aeec8cd1a094b1658afd4d4021d1a2cfe0ff94c20a"} Nov 29 09:28:44 crc kubenswrapper[4795]: I1129 09:28:44.164684 4795 scope.go:117] "RemoveContainer" containerID="6e50653608904e90f634feab7fbd02ff4e393f5bf3f6a90fc67df2e31e78b8ef" Nov 29 09:28:44 crc kubenswrapper[4795]: I1129 09:28:44.164849 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8s2lq" Nov 29 09:28:44 crc kubenswrapper[4795]: I1129 09:28:44.199179 4795 scope.go:117] "RemoveContainer" containerID="c3f7753b3b33383272a892ac76deda1bfde0a587d3c797edeea3596c051cd8b6" Nov 29 09:28:44 crc kubenswrapper[4795]: I1129 09:28:44.199403 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8s2lq"] Nov 29 09:28:44 crc kubenswrapper[4795]: I1129 09:28:44.212401 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8s2lq"] Nov 29 09:28:44 crc kubenswrapper[4795]: I1129 09:28:44.245431 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_28437f9f-e92e-46d7-9ffb-fcdda5dea25e/object-replicator/0.log" Nov 29 09:28:44 crc kubenswrapper[4795]: I1129 09:28:44.262978 4795 scope.go:117] "RemoveContainer" containerID="48f39bafa125022a810cc38b7f025a4b232655c2c07b525d34e030373a3d63d6" Nov 29 09:28:44 crc kubenswrapper[4795]: I1129 09:28:44.315580 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af09520d-7496-4809-84ad-1aa4b4d69b3d" path="/var/lib/kubelet/pods/af09520d-7496-4809-84ad-1aa4b4d69b3d/volumes" Nov 29 09:28:44 crc kubenswrapper[4795]: I1129 09:28:44.320873 4795 scope.go:117] "RemoveContainer" containerID="6e50653608904e90f634feab7fbd02ff4e393f5bf3f6a90fc67df2e31e78b8ef" Nov 29 09:28:44 crc kubenswrapper[4795]: E1129 09:28:44.321531 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e50653608904e90f634feab7fbd02ff4e393f5bf3f6a90fc67df2e31e78b8ef\": container with ID starting with 6e50653608904e90f634feab7fbd02ff4e393f5bf3f6a90fc67df2e31e78b8ef not found: ID does not exist" containerID="6e50653608904e90f634feab7fbd02ff4e393f5bf3f6a90fc67df2e31e78b8ef" Nov 29 09:28:44 crc kubenswrapper[4795]: I1129 09:28:44.321638 4795 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e50653608904e90f634feab7fbd02ff4e393f5bf3f6a90fc67df2e31e78b8ef"} err="failed to get container status \"6e50653608904e90f634feab7fbd02ff4e393f5bf3f6a90fc67df2e31e78b8ef\": rpc error: code = NotFound desc = could not find container \"6e50653608904e90f634feab7fbd02ff4e393f5bf3f6a90fc67df2e31e78b8ef\": container with ID starting with 6e50653608904e90f634feab7fbd02ff4e393f5bf3f6a90fc67df2e31e78b8ef not found: ID does not exist" Nov 29 09:28:44 crc kubenswrapper[4795]: I1129 09:28:44.321720 4795 scope.go:117] "RemoveContainer" containerID="c3f7753b3b33383272a892ac76deda1bfde0a587d3c797edeea3596c051cd8b6" Nov 29 09:28:44 crc kubenswrapper[4795]: E1129 09:28:44.322837 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3f7753b3b33383272a892ac76deda1bfde0a587d3c797edeea3596c051cd8b6\": container with ID starting with c3f7753b3b33383272a892ac76deda1bfde0a587d3c797edeea3596c051cd8b6 not found: ID does not exist" containerID="c3f7753b3b33383272a892ac76deda1bfde0a587d3c797edeea3596c051cd8b6" Nov 29 09:28:44 crc kubenswrapper[4795]: I1129 09:28:44.322926 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3f7753b3b33383272a892ac76deda1bfde0a587d3c797edeea3596c051cd8b6"} err="failed to get container status \"c3f7753b3b33383272a892ac76deda1bfde0a587d3c797edeea3596c051cd8b6\": rpc error: code = NotFound desc = could not find container \"c3f7753b3b33383272a892ac76deda1bfde0a587d3c797edeea3596c051cd8b6\": container with ID starting with c3f7753b3b33383272a892ac76deda1bfde0a587d3c797edeea3596c051cd8b6 not found: ID does not exist" Nov 29 09:28:44 crc kubenswrapper[4795]: I1129 09:28:44.322994 4795 scope.go:117] "RemoveContainer" containerID="48f39bafa125022a810cc38b7f025a4b232655c2c07b525d34e030373a3d63d6" Nov 29 09:28:44 crc kubenswrapper[4795]: E1129 09:28:44.324307 4795 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48f39bafa125022a810cc38b7f025a4b232655c2c07b525d34e030373a3d63d6\": container with ID starting with 48f39bafa125022a810cc38b7f025a4b232655c2c07b525d34e030373a3d63d6 not found: ID does not exist" containerID="48f39bafa125022a810cc38b7f025a4b232655c2c07b525d34e030373a3d63d6" Nov 29 09:28:44 crc kubenswrapper[4795]: I1129 09:28:44.324426 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48f39bafa125022a810cc38b7f025a4b232655c2c07b525d34e030373a3d63d6"} err="failed to get container status \"48f39bafa125022a810cc38b7f025a4b232655c2c07b525d34e030373a3d63d6\": rpc error: code = NotFound desc = could not find container \"48f39bafa125022a810cc38b7f025a4b232655c2c07b525d34e030373a3d63d6\": container with ID starting with 48f39bafa125022a810cc38b7f025a4b232655c2c07b525d34e030373a3d63d6 not found: ID does not exist" Nov 29 09:28:44 crc kubenswrapper[4795]: I1129 09:28:44.328346 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_28437f9f-e92e-46d7-9ffb-fcdda5dea25e/object-server/0.log" Nov 29 09:28:44 crc kubenswrapper[4795]: I1129 09:28:44.352527 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_28437f9f-e92e-46d7-9ffb-fcdda5dea25e/object-updater/0.log" Nov 29 09:28:44 crc kubenswrapper[4795]: I1129 09:28:44.443580 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_28437f9f-e92e-46d7-9ffb-fcdda5dea25e/rsync/0.log" Nov 29 09:28:44 crc kubenswrapper[4795]: I1129 09:28:44.518651 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_28437f9f-e92e-46d7-9ffb-fcdda5dea25e/swift-recon-cron/0.log" Nov 29 09:28:44 crc kubenswrapper[4795]: I1129 09:28:44.653178 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-z67pl_de36795d-fe29-4964-bcc6-c63bf2eda290/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 09:28:44 crc kubenswrapper[4795]: I1129 09:28:44.816526 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk_46efc96a-a270-4709-a9a1-cf8d60484215/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 09:28:45 crc kubenswrapper[4795]: I1129 09:28:45.081667 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_04e8ed70-5207-4d59-8de4-b96ad0270b54/test-operator-logs-container/0.log" Nov 29 09:28:45 crc kubenswrapper[4795]: I1129 09:28:45.212511 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-k58jj_b8224f42-d933-4b1a-bab0-8f79fa3a5369/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 09:28:45 crc kubenswrapper[4795]: I1129 09:28:45.909043 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_95552069-4919-43f3-88d5-2c40ff4c0836/tempest-tests-tempest-tests-runner/0.log" Nov 29 09:28:53 crc kubenswrapper[4795]: I1129 09:28:53.111791 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_ab20548b-7f96-4eb8-aa44-80425459c0ed/memcached/0.log" Nov 29 09:29:11 crc kubenswrapper[4795]: I1129 09:29:11.941092 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 09:29:11 crc kubenswrapper[4795]: I1129 09:29:11.941515 4795 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 09:29:11 crc kubenswrapper[4795]: I1129 09:29:11.941557 4795 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" Nov 29 09:29:11 crc kubenswrapper[4795]: I1129 09:29:11.942349 4795 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"38542a01fd0557a5c242f6a842e56a1fb659b4b79c57d243f879c3de7726e808"} pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 09:29:11 crc kubenswrapper[4795]: I1129 09:29:11.942401 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" containerID="cri-o://38542a01fd0557a5c242f6a842e56a1fb659b4b79c57d243f879c3de7726e808" gracePeriod=600 Nov 29 09:29:12 crc kubenswrapper[4795]: I1129 09:29:12.169135 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m_2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef/util/0.log" Nov 29 09:29:12 crc kubenswrapper[4795]: I1129 09:29:12.378902 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m_2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef/util/0.log" Nov 29 09:29:12 crc kubenswrapper[4795]: I1129 09:29:12.428050 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m_2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef/pull/0.log" Nov 29 09:29:12 crc kubenswrapper[4795]: I1129 09:29:12.461691 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m_2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef/pull/0.log" Nov 29 09:29:12 crc kubenswrapper[4795]: I1129 09:29:12.551232 4795 generic.go:334] "Generic (PLEG): container finished" podID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerID="38542a01fd0557a5c242f6a842e56a1fb659b4b79c57d243f879c3de7726e808" exitCode=0 Nov 29 09:29:12 crc kubenswrapper[4795]: I1129 09:29:12.551274 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerDied","Data":"38542a01fd0557a5c242f6a842e56a1fb659b4b79c57d243f879c3de7726e808"} Nov 29 09:29:12 crc kubenswrapper[4795]: I1129 09:29:12.551326 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerStarted","Data":"a0961216ed7f33ef63b3697986049c3d884283e351de87a69a04219f3b1944a4"} Nov 29 09:29:12 crc kubenswrapper[4795]: I1129 09:29:12.551344 4795 scope.go:117] "RemoveContainer" containerID="e3c02c44d457b8bc56d4e6c57834916c5a24411b16830e7c64fb79753ef4fe61" Nov 29 09:29:12 crc kubenswrapper[4795]: I1129 09:29:12.623467 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m_2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef/util/0.log" Nov 29 09:29:12 crc kubenswrapper[4795]: I1129 09:29:12.626080 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m_2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef/pull/0.log" Nov 29 09:29:12 crc kubenswrapper[4795]: I1129 09:29:12.656533 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m_2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef/extract/0.log" Nov 29 09:29:12 crc kubenswrapper[4795]: I1129 09:29:12.852950 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-klbwf_7bed5103-966d-43d3-92f1-73a2f8b6d551/kube-rbac-proxy/0.log" Nov 29 09:29:12 crc kubenswrapper[4795]: I1129 09:29:12.897216 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-klbwf_7bed5103-966d-43d3-92f1-73a2f8b6d551/manager/0.log" Nov 29 09:29:12 crc kubenswrapper[4795]: I1129 09:29:12.921663 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-qtbtd_86217734-815f-461c-a32d-8d744192003e/kube-rbac-proxy/0.log" Nov 29 09:29:13 crc kubenswrapper[4795]: I1129 09:29:13.069143 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-qtbtd_86217734-815f-461c-a32d-8d744192003e/manager/0.log" Nov 29 09:29:13 crc kubenswrapper[4795]: I1129 09:29:13.099255 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-8mk4s_7bfb1aea-ee92-4033-ba5f-c7ac79c60d0e/kube-rbac-proxy/0.log" Nov 29 09:29:13 crc kubenswrapper[4795]: I1129 09:29:13.135008 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-8mk4s_7bfb1aea-ee92-4033-ba5f-c7ac79c60d0e/manager/0.log" Nov 29 09:29:13 crc kubenswrapper[4795]: I1129 
09:29:13.302120 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-668d9c48b9-dr8f4_3ff17662-f7b1-4870-9ef2-18a81fdb5d73/kube-rbac-proxy/0.log" Nov 29 09:29:13 crc kubenswrapper[4795]: I1129 09:29:13.353528 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-668d9c48b9-dr8f4_3ff17662-f7b1-4870-9ef2-18a81fdb5d73/manager/0.log" Nov 29 09:29:13 crc kubenswrapper[4795]: I1129 09:29:13.531928 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-xjns5_36a279fc-25f1-407e-a1c6-6b8689d68cd2/kube-rbac-proxy/0.log" Nov 29 09:29:13 crc kubenswrapper[4795]: I1129 09:29:13.613429 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-xjns5_36a279fc-25f1-407e-a1c6-6b8689d68cd2/manager/0.log" Nov 29 09:29:13 crc kubenswrapper[4795]: I1129 09:29:13.650224 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-spfsh_d4e1473d-8426-452b-8030-764680cc5a20/kube-rbac-proxy/0.log" Nov 29 09:29:13 crc kubenswrapper[4795]: I1129 09:29:13.737726 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-spfsh_d4e1473d-8426-452b-8030-764680cc5a20/manager/0.log" Nov 29 09:29:13 crc kubenswrapper[4795]: I1129 09:29:13.823254 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-rd6w8_cc9825dd-340b-4dda-ab8a-91d95ee67678/kube-rbac-proxy/0.log" Nov 29 09:29:14 crc kubenswrapper[4795]: I1129 09:29:14.018153 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-rd6w8_cc9825dd-340b-4dda-ab8a-91d95ee67678/manager/0.log" Nov 29 
09:29:14 crc kubenswrapper[4795]: I1129 09:29:14.043230 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-5q5dd_b53e6db5-a3fe-4f86-9b6a-49eb7d4629fd/kube-rbac-proxy/0.log" Nov 29 09:29:14 crc kubenswrapper[4795]: I1129 09:29:14.109706 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-5q5dd_b53e6db5-a3fe-4f86-9b6a-49eb7d4629fd/manager/0.log" Nov 29 09:29:14 crc kubenswrapper[4795]: I1129 09:29:14.248530 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-546d4bdf48-cn6z7_1d6dd43f-eee0-4257-adbb-a53218a86eb9/kube-rbac-proxy/0.log" Nov 29 09:29:14 crc kubenswrapper[4795]: I1129 09:29:14.305535 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-546d4bdf48-cn6z7_1d6dd43f-eee0-4257-adbb-a53218a86eb9/manager/0.log" Nov 29 09:29:14 crc kubenswrapper[4795]: I1129 09:29:14.489533 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6546668bfd-mvrzj_36512615-d21b-4484-af03-ffa1d325883b/kube-rbac-proxy/0.log" Nov 29 09:29:14 crc kubenswrapper[4795]: I1129 09:29:14.489873 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6546668bfd-mvrzj_36512615-d21b-4484-af03-ffa1d325883b/manager/0.log" Nov 29 09:29:14 crc kubenswrapper[4795]: I1129 09:29:14.569301 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-cmvt9_94d164fa-c521-4617-8338-1eba3ee1c31d/kube-rbac-proxy/0.log" Nov 29 09:29:14 crc kubenswrapper[4795]: I1129 09:29:14.700289 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-cmvt9_94d164fa-c521-4617-8338-1eba3ee1c31d/manager/0.log" Nov 29 09:29:14 crc kubenswrapper[4795]: I1129 09:29:14.747224 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-njfdg_bfb2e88b-d2db-4afa-8511-e1a896eb9039/kube-rbac-proxy/0.log" Nov 29 09:29:14 crc kubenswrapper[4795]: I1129 09:29:14.834901 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-njfdg_bfb2e88b-d2db-4afa-8511-e1a896eb9039/manager/0.log" Nov 29 09:29:14 crc kubenswrapper[4795]: I1129 09:29:14.918472 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-966jn_8de1af69-5c67-4669-83d5-02de0ecd32d3/kube-rbac-proxy/0.log" Nov 29 09:29:15 crc kubenswrapper[4795]: I1129 09:29:15.057441 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-966jn_8de1af69-5c67-4669-83d5-02de0ecd32d3/manager/0.log" Nov 29 09:29:15 crc kubenswrapper[4795]: I1129 09:29:15.153766 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-d5d4r_f2367076-6d52-4047-908c-c1e32c4ca2c4/kube-rbac-proxy/0.log" Nov 29 09:29:15 crc kubenswrapper[4795]: I1129 09:29:15.161064 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-d5d4r_f2367076-6d52-4047-908c-c1e32c4ca2c4/manager/0.log" Nov 29 09:29:15 crc kubenswrapper[4795]: I1129 09:29:15.278382 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl_543b785e-bdb9-4582-b9dd-8a987b5129f6/kube-rbac-proxy/0.log" Nov 29 09:29:15 crc kubenswrapper[4795]: I1129 
09:29:15.378264 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl_543b785e-bdb9-4582-b9dd-8a987b5129f6/manager/0.log" Nov 29 09:29:15 crc kubenswrapper[4795]: I1129 09:29:15.795017 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-fc66n_cf43b8b5-a117-4ed8-853b-869086fd5197/registry-server/0.log" Nov 29 09:29:15 crc kubenswrapper[4795]: I1129 09:29:15.887385 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5c79c4cd8-99qw8_1b9bc471-e43a-403f-8bd9-83744b7746a7/operator/0.log" Nov 29 09:29:15 crc kubenswrapper[4795]: I1129 09:29:15.976459 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-vdmph_a197813b-f5c3-49c1-81f6-b6b2e08e0617/kube-rbac-proxy/0.log" Nov 29 09:29:16 crc kubenswrapper[4795]: I1129 09:29:16.118198 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-vdmph_a197813b-f5c3-49c1-81f6-b6b2e08e0617/manager/0.log" Nov 29 09:29:16 crc kubenswrapper[4795]: I1129 09:29:16.134290 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-9g674_f5e4dece-2c02-41a1-ac8b-ec46eb2a3d45/kube-rbac-proxy/0.log" Nov 29 09:29:16 crc kubenswrapper[4795]: I1129 09:29:16.230041 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-9g674_f5e4dece-2c02-41a1-ac8b-ec46eb2a3d45/manager/0.log" Nov 29 09:29:16 crc kubenswrapper[4795]: I1129 09:29:16.369644 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-27n4r_7ce03a92-9abd-485c-b949-fb95301de889/operator/0.log" Nov 29 09:29:16 crc 
kubenswrapper[4795]: I1129 09:29:16.492575 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-6fkwt_e56bb4ff-9936-4876-8616-0958e9892fa3/kube-rbac-proxy/0.log" Nov 29 09:29:16 crc kubenswrapper[4795]: I1129 09:29:16.630719 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-6fkwt_e56bb4ff-9936-4876-8616-0958e9892fa3/manager/0.log" Nov 29 09:29:16 crc kubenswrapper[4795]: I1129 09:29:16.653327 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-d486dbd66-bt6tr_c75b943b-8281-4fbd-a94a-3d5db0475d5d/kube-rbac-proxy/0.log" Nov 29 09:29:16 crc kubenswrapper[4795]: I1129 09:29:16.942847 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-k6h4m_4eea915e-348e-48a3-b5e1-767648dac19d/kube-rbac-proxy/0.log" Nov 29 09:29:17 crc kubenswrapper[4795]: I1129 09:29:17.008194 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-k6h4m_4eea915e-348e-48a3-b5e1-767648dac19d/manager/0.log" Nov 29 09:29:17 crc kubenswrapper[4795]: I1129 09:29:17.185281 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-d486dbd66-bt6tr_c75b943b-8281-4fbd-a94a-3d5db0475d5d/manager/0.log" Nov 29 09:29:17 crc kubenswrapper[4795]: I1129 09:29:17.191175 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-h74tt_78c2fefa-d0f0-4123-9513-231b2c3ca5fd/kube-rbac-proxy/0.log" Nov 29 09:29:17 crc kubenswrapper[4795]: I1129 09:29:17.235741 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-8688fc7b8-5sbpb_868e2666-5606-4891-ba11-ac02f852c48d/manager/0.log" Nov 29 09:29:17 crc kubenswrapper[4795]: I1129 09:29:17.265017 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-h74tt_78c2fefa-d0f0-4123-9513-231b2c3ca5fd/manager/0.log" Nov 29 09:29:35 crc kubenswrapper[4795]: I1129 09:29:35.323948 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-t6scb_a9829eb2-ba44-41ca-a0f7-fd92d6114927/control-plane-machine-set-operator/0.log" Nov 29 09:29:35 crc kubenswrapper[4795]: I1129 09:29:35.531126 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-v4fx7_725af35a-cc1c-4178-ae7f-e909af583a5f/kube-rbac-proxy/0.log" Nov 29 09:29:35 crc kubenswrapper[4795]: I1129 09:29:35.545673 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-v4fx7_725af35a-cc1c-4178-ae7f-e909af583a5f/machine-api-operator/0.log" Nov 29 09:29:42 crc kubenswrapper[4795]: I1129 09:29:42.156014 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6rr7h"] Nov 29 09:29:42 crc kubenswrapper[4795]: E1129 09:29:42.158400 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af09520d-7496-4809-84ad-1aa4b4d69b3d" containerName="extract-utilities" Nov 29 09:29:42 crc kubenswrapper[4795]: I1129 09:29:42.158429 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="af09520d-7496-4809-84ad-1aa4b4d69b3d" containerName="extract-utilities" Nov 29 09:29:42 crc kubenswrapper[4795]: E1129 09:29:42.158477 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af09520d-7496-4809-84ad-1aa4b4d69b3d" containerName="registry-server" Nov 29 09:29:42 crc kubenswrapper[4795]: I1129 
09:29:42.158496 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="af09520d-7496-4809-84ad-1aa4b4d69b3d" containerName="registry-server" Nov 29 09:29:42 crc kubenswrapper[4795]: E1129 09:29:42.158520 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af09520d-7496-4809-84ad-1aa4b4d69b3d" containerName="extract-content" Nov 29 09:29:42 crc kubenswrapper[4795]: I1129 09:29:42.158533 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="af09520d-7496-4809-84ad-1aa4b4d69b3d" containerName="extract-content" Nov 29 09:29:42 crc kubenswrapper[4795]: I1129 09:29:42.159093 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="af09520d-7496-4809-84ad-1aa4b4d69b3d" containerName="registry-server" Nov 29 09:29:42 crc kubenswrapper[4795]: I1129 09:29:42.162884 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6rr7h" Nov 29 09:29:42 crc kubenswrapper[4795]: I1129 09:29:42.166663 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6rr7h"] Nov 29 09:29:42 crc kubenswrapper[4795]: I1129 09:29:42.280229 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9srk\" (UniqueName: \"kubernetes.io/projected/c5419a38-f33f-4857-9331-c259af5ae522-kube-api-access-w9srk\") pod \"community-operators-6rr7h\" (UID: \"c5419a38-f33f-4857-9331-c259af5ae522\") " pod="openshift-marketplace/community-operators-6rr7h" Nov 29 09:29:42 crc kubenswrapper[4795]: I1129 09:29:42.280431 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5419a38-f33f-4857-9331-c259af5ae522-catalog-content\") pod \"community-operators-6rr7h\" (UID: \"c5419a38-f33f-4857-9331-c259af5ae522\") " pod="openshift-marketplace/community-operators-6rr7h" Nov 29 09:29:42 crc kubenswrapper[4795]: 
I1129 09:29:42.280458 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5419a38-f33f-4857-9331-c259af5ae522-utilities\") pod \"community-operators-6rr7h\" (UID: \"c5419a38-f33f-4857-9331-c259af5ae522\") " pod="openshift-marketplace/community-operators-6rr7h" Nov 29 09:29:42 crc kubenswrapper[4795]: I1129 09:29:42.382575 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5419a38-f33f-4857-9331-c259af5ae522-utilities\") pod \"community-operators-6rr7h\" (UID: \"c5419a38-f33f-4857-9331-c259af5ae522\") " pod="openshift-marketplace/community-operators-6rr7h" Nov 29 09:29:42 crc kubenswrapper[4795]: I1129 09:29:42.382703 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9srk\" (UniqueName: \"kubernetes.io/projected/c5419a38-f33f-4857-9331-c259af5ae522-kube-api-access-w9srk\") pod \"community-operators-6rr7h\" (UID: \"c5419a38-f33f-4857-9331-c259af5ae522\") " pod="openshift-marketplace/community-operators-6rr7h" Nov 29 09:29:42 crc kubenswrapper[4795]: I1129 09:29:42.383070 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5419a38-f33f-4857-9331-c259af5ae522-catalog-content\") pod \"community-operators-6rr7h\" (UID: \"c5419a38-f33f-4857-9331-c259af5ae522\") " pod="openshift-marketplace/community-operators-6rr7h" Nov 29 09:29:42 crc kubenswrapper[4795]: I1129 09:29:42.383262 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5419a38-f33f-4857-9331-c259af5ae522-utilities\") pod \"community-operators-6rr7h\" (UID: \"c5419a38-f33f-4857-9331-c259af5ae522\") " pod="openshift-marketplace/community-operators-6rr7h" Nov 29 09:29:42 crc kubenswrapper[4795]: I1129 09:29:42.383444 4795 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5419a38-f33f-4857-9331-c259af5ae522-catalog-content\") pod \"community-operators-6rr7h\" (UID: \"c5419a38-f33f-4857-9331-c259af5ae522\") " pod="openshift-marketplace/community-operators-6rr7h" Nov 29 09:29:42 crc kubenswrapper[4795]: I1129 09:29:42.409451 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9srk\" (UniqueName: \"kubernetes.io/projected/c5419a38-f33f-4857-9331-c259af5ae522-kube-api-access-w9srk\") pod \"community-operators-6rr7h\" (UID: \"c5419a38-f33f-4857-9331-c259af5ae522\") " pod="openshift-marketplace/community-operators-6rr7h" Nov 29 09:29:42 crc kubenswrapper[4795]: I1129 09:29:42.496159 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6rr7h" Nov 29 09:29:42 crc kubenswrapper[4795]: I1129 09:29:42.975502 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6rr7h"] Nov 29 09:29:43 crc kubenswrapper[4795]: I1129 09:29:43.913116 4795 generic.go:334] "Generic (PLEG): container finished" podID="c5419a38-f33f-4857-9331-c259af5ae522" containerID="f65b0da3f315d6b84382dcf2928e39f495a3049a30f4aa10ac25aa856f17f792" exitCode=0 Nov 29 09:29:43 crc kubenswrapper[4795]: I1129 09:29:43.913393 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6rr7h" event={"ID":"c5419a38-f33f-4857-9331-c259af5ae522","Type":"ContainerDied","Data":"f65b0da3f315d6b84382dcf2928e39f495a3049a30f4aa10ac25aa856f17f792"} Nov 29 09:29:43 crc kubenswrapper[4795]: I1129 09:29:43.914580 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6rr7h" event={"ID":"c5419a38-f33f-4857-9331-c259af5ae522","Type":"ContainerStarted","Data":"f5d5bf70b88a74560222181e14105e2ce5ec310d32cc13a701e811c3f7f8759c"} Nov 29 
09:29:44 crc kubenswrapper[4795]: I1129 09:29:44.926269 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6rr7h" event={"ID":"c5419a38-f33f-4857-9331-c259af5ae522","Type":"ContainerStarted","Data":"dbf3c6f99e817edc1d0c88517e4f5bb1643a4bc871d6fff2d3acc2b8ec8f74f0"} Nov 29 09:29:45 crc kubenswrapper[4795]: I1129 09:29:45.937809 4795 generic.go:334] "Generic (PLEG): container finished" podID="c5419a38-f33f-4857-9331-c259af5ae522" containerID="dbf3c6f99e817edc1d0c88517e4f5bb1643a4bc871d6fff2d3acc2b8ec8f74f0" exitCode=0 Nov 29 09:29:45 crc kubenswrapper[4795]: I1129 09:29:45.937905 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6rr7h" event={"ID":"c5419a38-f33f-4857-9331-c259af5ae522","Type":"ContainerDied","Data":"dbf3c6f99e817edc1d0c88517e4f5bb1643a4bc871d6fff2d3acc2b8ec8f74f0"} Nov 29 09:29:46 crc kubenswrapper[4795]: I1129 09:29:46.953196 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6rr7h" event={"ID":"c5419a38-f33f-4857-9331-c259af5ae522","Type":"ContainerStarted","Data":"e84f9514b4a3d07683dae01cc0fae0bd186ae0c6bbb65b719708d56c5fc05911"} Nov 29 09:29:46 crc kubenswrapper[4795]: I1129 09:29:46.984727 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6rr7h" podStartSLOduration=2.391258932 podStartE2EDuration="4.98469541s" podCreationTimestamp="2025-11-29 09:29:42 +0000 UTC" firstStartedPulling="2025-11-29 09:29:43.916894215 +0000 UTC m=+6629.892470005" lastFinishedPulling="2025-11-29 09:29:46.510330693 +0000 UTC m=+6632.485906483" observedRunningTime="2025-11-29 09:29:46.971393324 +0000 UTC m=+6632.946969124" watchObservedRunningTime="2025-11-29 09:29:46.98469541 +0000 UTC m=+6632.960271240" Nov 29 09:29:48 crc kubenswrapper[4795]: I1129 09:29:48.759009 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-l4ctt_286839ed-cd16-46c0-81a4-d0c90bb32fb4/cert-manager-controller/0.log" Nov 29 09:29:48 crc kubenswrapper[4795]: I1129 09:29:48.862661 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-mn9s2_780eadcf-077c-4f71-8570-5ebbca30d61e/cert-manager-webhook/0.log" Nov 29 09:29:48 crc kubenswrapper[4795]: I1129 09:29:48.974346 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-2vns9_a921719c-ebed-49c3-9482-87b58c96c819/cert-manager-cainjector/0.log" Nov 29 09:29:52 crc kubenswrapper[4795]: I1129 09:29:52.496436 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6rr7h" Nov 29 09:29:52 crc kubenswrapper[4795]: I1129 09:29:52.497145 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6rr7h" Nov 29 09:29:52 crc kubenswrapper[4795]: I1129 09:29:52.579217 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6rr7h" Nov 29 09:29:53 crc kubenswrapper[4795]: I1129 09:29:53.074879 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6rr7h" Nov 29 09:29:53 crc kubenswrapper[4795]: I1129 09:29:53.143129 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6rr7h"] Nov 29 09:29:55 crc kubenswrapper[4795]: I1129 09:29:55.039935 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6rr7h" podUID="c5419a38-f33f-4857-9331-c259af5ae522" containerName="registry-server" containerID="cri-o://e84f9514b4a3d07683dae01cc0fae0bd186ae0c6bbb65b719708d56c5fc05911" gracePeriod=2 Nov 29 09:29:55 crc kubenswrapper[4795]: I1129 09:29:55.583714 4795 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6rr7h" Nov 29 09:29:55 crc kubenswrapper[4795]: I1129 09:29:55.628822 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5419a38-f33f-4857-9331-c259af5ae522-catalog-content\") pod \"c5419a38-f33f-4857-9331-c259af5ae522\" (UID: \"c5419a38-f33f-4857-9331-c259af5ae522\") " Nov 29 09:29:55 crc kubenswrapper[4795]: I1129 09:29:55.628868 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9srk\" (UniqueName: \"kubernetes.io/projected/c5419a38-f33f-4857-9331-c259af5ae522-kube-api-access-w9srk\") pod \"c5419a38-f33f-4857-9331-c259af5ae522\" (UID: \"c5419a38-f33f-4857-9331-c259af5ae522\") " Nov 29 09:29:55 crc kubenswrapper[4795]: I1129 09:29:55.628917 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5419a38-f33f-4857-9331-c259af5ae522-utilities\") pod \"c5419a38-f33f-4857-9331-c259af5ae522\" (UID: \"c5419a38-f33f-4857-9331-c259af5ae522\") " Nov 29 09:29:55 crc kubenswrapper[4795]: I1129 09:29:55.629633 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5419a38-f33f-4857-9331-c259af5ae522-utilities" (OuterVolumeSpecName: "utilities") pod "c5419a38-f33f-4857-9331-c259af5ae522" (UID: "c5419a38-f33f-4857-9331-c259af5ae522"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 09:29:55 crc kubenswrapper[4795]: I1129 09:29:55.630192 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5419a38-f33f-4857-9331-c259af5ae522-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 09:29:55 crc kubenswrapper[4795]: I1129 09:29:55.636764 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5419a38-f33f-4857-9331-c259af5ae522-kube-api-access-w9srk" (OuterVolumeSpecName: "kube-api-access-w9srk") pod "c5419a38-f33f-4857-9331-c259af5ae522" (UID: "c5419a38-f33f-4857-9331-c259af5ae522"). InnerVolumeSpecName "kube-api-access-w9srk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 09:29:55 crc kubenswrapper[4795]: I1129 09:29:55.683184 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5419a38-f33f-4857-9331-c259af5ae522-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c5419a38-f33f-4857-9331-c259af5ae522" (UID: "c5419a38-f33f-4857-9331-c259af5ae522"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 09:29:55 crc kubenswrapper[4795]: I1129 09:29:55.732843 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5419a38-f33f-4857-9331-c259af5ae522-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 09:29:55 crc kubenswrapper[4795]: I1129 09:29:55.732872 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9srk\" (UniqueName: \"kubernetes.io/projected/c5419a38-f33f-4857-9331-c259af5ae522-kube-api-access-w9srk\") on node \"crc\" DevicePath \"\"" Nov 29 09:29:56 crc kubenswrapper[4795]: I1129 09:29:56.054566 4795 generic.go:334] "Generic (PLEG): container finished" podID="c5419a38-f33f-4857-9331-c259af5ae522" containerID="e84f9514b4a3d07683dae01cc0fae0bd186ae0c6bbb65b719708d56c5fc05911" exitCode=0 Nov 29 09:29:56 crc kubenswrapper[4795]: I1129 09:29:56.054634 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6rr7h" event={"ID":"c5419a38-f33f-4857-9331-c259af5ae522","Type":"ContainerDied","Data":"e84f9514b4a3d07683dae01cc0fae0bd186ae0c6bbb65b719708d56c5fc05911"} Nov 29 09:29:56 crc kubenswrapper[4795]: I1129 09:29:56.054664 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6rr7h" Nov 29 09:29:56 crc kubenswrapper[4795]: I1129 09:29:56.054686 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6rr7h" event={"ID":"c5419a38-f33f-4857-9331-c259af5ae522","Type":"ContainerDied","Data":"f5d5bf70b88a74560222181e14105e2ce5ec310d32cc13a701e811c3f7f8759c"} Nov 29 09:29:56 crc kubenswrapper[4795]: I1129 09:29:56.054706 4795 scope.go:117] "RemoveContainer" containerID="e84f9514b4a3d07683dae01cc0fae0bd186ae0c6bbb65b719708d56c5fc05911" Nov 29 09:29:56 crc kubenswrapper[4795]: I1129 09:29:56.087803 4795 scope.go:117] "RemoveContainer" containerID="dbf3c6f99e817edc1d0c88517e4f5bb1643a4bc871d6fff2d3acc2b8ec8f74f0" Nov 29 09:29:56 crc kubenswrapper[4795]: I1129 09:29:56.128308 4795 scope.go:117] "RemoveContainer" containerID="f65b0da3f315d6b84382dcf2928e39f495a3049a30f4aa10ac25aa856f17f792" Nov 29 09:29:56 crc kubenswrapper[4795]: I1129 09:29:56.137653 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6rr7h"] Nov 29 09:29:56 crc kubenswrapper[4795]: I1129 09:29:56.152543 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6rr7h"] Nov 29 09:29:56 crc kubenswrapper[4795]: I1129 09:29:56.164153 4795 scope.go:117] "RemoveContainer" containerID="e84f9514b4a3d07683dae01cc0fae0bd186ae0c6bbb65b719708d56c5fc05911" Nov 29 09:29:56 crc kubenswrapper[4795]: E1129 09:29:56.164983 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e84f9514b4a3d07683dae01cc0fae0bd186ae0c6bbb65b719708d56c5fc05911\": container with ID starting with e84f9514b4a3d07683dae01cc0fae0bd186ae0c6bbb65b719708d56c5fc05911 not found: ID does not exist" containerID="e84f9514b4a3d07683dae01cc0fae0bd186ae0c6bbb65b719708d56c5fc05911" Nov 29 09:29:56 crc kubenswrapper[4795]: I1129 09:29:56.165026 4795 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e84f9514b4a3d07683dae01cc0fae0bd186ae0c6bbb65b719708d56c5fc05911"} err="failed to get container status \"e84f9514b4a3d07683dae01cc0fae0bd186ae0c6bbb65b719708d56c5fc05911\": rpc error: code = NotFound desc = could not find container \"e84f9514b4a3d07683dae01cc0fae0bd186ae0c6bbb65b719708d56c5fc05911\": container with ID starting with e84f9514b4a3d07683dae01cc0fae0bd186ae0c6bbb65b719708d56c5fc05911 not found: ID does not exist" Nov 29 09:29:56 crc kubenswrapper[4795]: I1129 09:29:56.165343 4795 scope.go:117] "RemoveContainer" containerID="dbf3c6f99e817edc1d0c88517e4f5bb1643a4bc871d6fff2d3acc2b8ec8f74f0" Nov 29 09:29:56 crc kubenswrapper[4795]: E1129 09:29:56.165777 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbf3c6f99e817edc1d0c88517e4f5bb1643a4bc871d6fff2d3acc2b8ec8f74f0\": container with ID starting with dbf3c6f99e817edc1d0c88517e4f5bb1643a4bc871d6fff2d3acc2b8ec8f74f0 not found: ID does not exist" containerID="dbf3c6f99e817edc1d0c88517e4f5bb1643a4bc871d6fff2d3acc2b8ec8f74f0" Nov 29 09:29:56 crc kubenswrapper[4795]: I1129 09:29:56.165818 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbf3c6f99e817edc1d0c88517e4f5bb1643a4bc871d6fff2d3acc2b8ec8f74f0"} err="failed to get container status \"dbf3c6f99e817edc1d0c88517e4f5bb1643a4bc871d6fff2d3acc2b8ec8f74f0\": rpc error: code = NotFound desc = could not find container \"dbf3c6f99e817edc1d0c88517e4f5bb1643a4bc871d6fff2d3acc2b8ec8f74f0\": container with ID starting with dbf3c6f99e817edc1d0c88517e4f5bb1643a4bc871d6fff2d3acc2b8ec8f74f0 not found: ID does not exist" Nov 29 09:29:56 crc kubenswrapper[4795]: I1129 09:29:56.165844 4795 scope.go:117] "RemoveContainer" containerID="f65b0da3f315d6b84382dcf2928e39f495a3049a30f4aa10ac25aa856f17f792" Nov 29 09:29:56 crc kubenswrapper[4795]: E1129 
09:29:56.166179 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f65b0da3f315d6b84382dcf2928e39f495a3049a30f4aa10ac25aa856f17f792\": container with ID starting with f65b0da3f315d6b84382dcf2928e39f495a3049a30f4aa10ac25aa856f17f792 not found: ID does not exist" containerID="f65b0da3f315d6b84382dcf2928e39f495a3049a30f4aa10ac25aa856f17f792" Nov 29 09:29:56 crc kubenswrapper[4795]: I1129 09:29:56.166201 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f65b0da3f315d6b84382dcf2928e39f495a3049a30f4aa10ac25aa856f17f792"} err="failed to get container status \"f65b0da3f315d6b84382dcf2928e39f495a3049a30f4aa10ac25aa856f17f792\": rpc error: code = NotFound desc = could not find container \"f65b0da3f315d6b84382dcf2928e39f495a3049a30f4aa10ac25aa856f17f792\": container with ID starting with f65b0da3f315d6b84382dcf2928e39f495a3049a30f4aa10ac25aa856f17f792 not found: ID does not exist" Nov 29 09:29:56 crc kubenswrapper[4795]: I1129 09:29:56.290904 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5419a38-f33f-4857-9331-c259af5ae522" path="/var/lib/kubelet/pods/c5419a38-f33f-4857-9331-c259af5ae522/volumes" Nov 29 09:30:00 crc kubenswrapper[4795]: I1129 09:30:00.238537 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406810-vpd6q"] Nov 29 09:30:00 crc kubenswrapper[4795]: E1129 09:30:00.241119 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5419a38-f33f-4857-9331-c259af5ae522" containerName="registry-server" Nov 29 09:30:00 crc kubenswrapper[4795]: I1129 09:30:00.241261 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5419a38-f33f-4857-9331-c259af5ae522" containerName="registry-server" Nov 29 09:30:00 crc kubenswrapper[4795]: E1129 09:30:00.241361 4795 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c5419a38-f33f-4857-9331-c259af5ae522" containerName="extract-content" Nov 29 09:30:00 crc kubenswrapper[4795]: I1129 09:30:00.241434 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5419a38-f33f-4857-9331-c259af5ae522" containerName="extract-content" Nov 29 09:30:00 crc kubenswrapper[4795]: E1129 09:30:00.241542 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5419a38-f33f-4857-9331-c259af5ae522" containerName="extract-utilities" Nov 29 09:30:00 crc kubenswrapper[4795]: I1129 09:30:00.241639 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5419a38-f33f-4857-9331-c259af5ae522" containerName="extract-utilities" Nov 29 09:30:00 crc kubenswrapper[4795]: I1129 09:30:00.241958 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5419a38-f33f-4857-9331-c259af5ae522" containerName="registry-server" Nov 29 09:30:00 crc kubenswrapper[4795]: I1129 09:30:00.243043 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406810-vpd6q" Nov 29 09:30:00 crc kubenswrapper[4795]: I1129 09:30:00.250147 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 29 09:30:00 crc kubenswrapper[4795]: I1129 09:30:00.250467 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 29 09:30:00 crc kubenswrapper[4795]: I1129 09:30:00.261642 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406810-vpd6q"] Nov 29 09:30:00 crc kubenswrapper[4795]: I1129 09:30:00.439391 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/42c27b41-6aa2-4368-acf5-fc9f97ae1548-secret-volume\") pod \"collect-profiles-29406810-vpd6q\" 
(UID: \"42c27b41-6aa2-4368-acf5-fc9f97ae1548\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406810-vpd6q" Nov 29 09:30:00 crc kubenswrapper[4795]: I1129 09:30:00.440108 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvntm\" (UniqueName: \"kubernetes.io/projected/42c27b41-6aa2-4368-acf5-fc9f97ae1548-kube-api-access-fvntm\") pod \"collect-profiles-29406810-vpd6q\" (UID: \"42c27b41-6aa2-4368-acf5-fc9f97ae1548\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406810-vpd6q" Nov 29 09:30:00 crc kubenswrapper[4795]: I1129 09:30:00.440285 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42c27b41-6aa2-4368-acf5-fc9f97ae1548-config-volume\") pod \"collect-profiles-29406810-vpd6q\" (UID: \"42c27b41-6aa2-4368-acf5-fc9f97ae1548\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406810-vpd6q" Nov 29 09:30:00 crc kubenswrapper[4795]: I1129 09:30:00.543689 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/42c27b41-6aa2-4368-acf5-fc9f97ae1548-secret-volume\") pod \"collect-profiles-29406810-vpd6q\" (UID: \"42c27b41-6aa2-4368-acf5-fc9f97ae1548\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406810-vpd6q" Nov 29 09:30:00 crc kubenswrapper[4795]: I1129 09:30:00.543799 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvntm\" (UniqueName: \"kubernetes.io/projected/42c27b41-6aa2-4368-acf5-fc9f97ae1548-kube-api-access-fvntm\") pod \"collect-profiles-29406810-vpd6q\" (UID: \"42c27b41-6aa2-4368-acf5-fc9f97ae1548\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406810-vpd6q" Nov 29 09:30:00 crc kubenswrapper[4795]: I1129 09:30:00.543864 4795 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42c27b41-6aa2-4368-acf5-fc9f97ae1548-config-volume\") pod \"collect-profiles-29406810-vpd6q\" (UID: \"42c27b41-6aa2-4368-acf5-fc9f97ae1548\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406810-vpd6q" Nov 29 09:30:00 crc kubenswrapper[4795]: I1129 09:30:00.545219 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42c27b41-6aa2-4368-acf5-fc9f97ae1548-config-volume\") pod \"collect-profiles-29406810-vpd6q\" (UID: \"42c27b41-6aa2-4368-acf5-fc9f97ae1548\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406810-vpd6q" Nov 29 09:30:00 crc kubenswrapper[4795]: I1129 09:30:00.565431 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/42c27b41-6aa2-4368-acf5-fc9f97ae1548-secret-volume\") pod \"collect-profiles-29406810-vpd6q\" (UID: \"42c27b41-6aa2-4368-acf5-fc9f97ae1548\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406810-vpd6q" Nov 29 09:30:00 crc kubenswrapper[4795]: I1129 09:30:00.567511 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvntm\" (UniqueName: \"kubernetes.io/projected/42c27b41-6aa2-4368-acf5-fc9f97ae1548-kube-api-access-fvntm\") pod \"collect-profiles-29406810-vpd6q\" (UID: \"42c27b41-6aa2-4368-acf5-fc9f97ae1548\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406810-vpd6q" Nov 29 09:30:00 crc kubenswrapper[4795]: I1129 09:30:00.581081 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406810-vpd6q" Nov 29 09:30:01 crc kubenswrapper[4795]: I1129 09:30:01.089494 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406810-vpd6q"] Nov 29 09:30:01 crc kubenswrapper[4795]: I1129 09:30:01.119198 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406810-vpd6q" event={"ID":"42c27b41-6aa2-4368-acf5-fc9f97ae1548","Type":"ContainerStarted","Data":"ce676d4fdc48a0945f04458845e8fb544701b44b2f6e3e0768cb8a1e7a4fc995"} Nov 29 09:30:01 crc kubenswrapper[4795]: I1129 09:30:01.883675 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-qxwvv_127f1845-59b8-4b9f-9702-2aae122b06e3/nmstate-console-plugin/0.log" Nov 29 09:30:02 crc kubenswrapper[4795]: I1129 09:30:02.035547 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-c5pwq_b1c76aa0-5bd2-4df9-8555-83bb44cb23b7/nmstate-handler/0.log" Nov 29 09:30:02 crc kubenswrapper[4795]: I1129 09:30:02.104882 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-f8cgx_471b40b9-dbc5-467e-abd1-18e64ea6a111/kube-rbac-proxy/0.log" Nov 29 09:30:02 crc kubenswrapper[4795]: I1129 09:30:02.130439 4795 generic.go:334] "Generic (PLEG): container finished" podID="42c27b41-6aa2-4368-acf5-fc9f97ae1548" containerID="601aef062d6d939dd3308fb73a3c2733a4fd620f1538ae300f6c75dc2911059d" exitCode=0 Nov 29 09:30:02 crc kubenswrapper[4795]: I1129 09:30:02.130492 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406810-vpd6q" event={"ID":"42c27b41-6aa2-4368-acf5-fc9f97ae1548","Type":"ContainerDied","Data":"601aef062d6d939dd3308fb73a3c2733a4fd620f1538ae300f6c75dc2911059d"} Nov 29 09:30:02 crc kubenswrapper[4795]: I1129 
09:30:02.157716 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-f8cgx_471b40b9-dbc5-467e-abd1-18e64ea6a111/nmstate-metrics/0.log" Nov 29 09:30:02 crc kubenswrapper[4795]: I1129 09:30:02.270514 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-7c5mp_cadc9dc9-f67d-440d-9169-9f7816d26a56/nmstate-operator/0.log" Nov 29 09:30:02 crc kubenswrapper[4795]: I1129 09:30:02.374515 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-jk8vq_dcd14fe7-954f-445a-bd8d-0a62399e71d5/nmstate-webhook/0.log" Nov 29 09:30:03 crc kubenswrapper[4795]: I1129 09:30:03.608911 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406810-vpd6q" Nov 29 09:30:03 crc kubenswrapper[4795]: I1129 09:30:03.728901 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/42c27b41-6aa2-4368-acf5-fc9f97ae1548-secret-volume\") pod \"42c27b41-6aa2-4368-acf5-fc9f97ae1548\" (UID: \"42c27b41-6aa2-4368-acf5-fc9f97ae1548\") " Nov 29 09:30:03 crc kubenswrapper[4795]: I1129 09:30:03.729000 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvntm\" (UniqueName: \"kubernetes.io/projected/42c27b41-6aa2-4368-acf5-fc9f97ae1548-kube-api-access-fvntm\") pod \"42c27b41-6aa2-4368-acf5-fc9f97ae1548\" (UID: \"42c27b41-6aa2-4368-acf5-fc9f97ae1548\") " Nov 29 09:30:03 crc kubenswrapper[4795]: I1129 09:30:03.729279 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42c27b41-6aa2-4368-acf5-fc9f97ae1548-config-volume\") pod \"42c27b41-6aa2-4368-acf5-fc9f97ae1548\" (UID: \"42c27b41-6aa2-4368-acf5-fc9f97ae1548\") " Nov 29 09:30:03 crc kubenswrapper[4795]: 
I1129 09:30:03.729863 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42c27b41-6aa2-4368-acf5-fc9f97ae1548-config-volume" (OuterVolumeSpecName: "config-volume") pod "42c27b41-6aa2-4368-acf5-fc9f97ae1548" (UID: "42c27b41-6aa2-4368-acf5-fc9f97ae1548"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 09:30:03 crc kubenswrapper[4795]: I1129 09:30:03.730253 4795 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42c27b41-6aa2-4368-acf5-fc9f97ae1548-config-volume\") on node \"crc\" DevicePath \"\"" Nov 29 09:30:03 crc kubenswrapper[4795]: I1129 09:30:03.735531 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42c27b41-6aa2-4368-acf5-fc9f97ae1548-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "42c27b41-6aa2-4368-acf5-fc9f97ae1548" (UID: "42c27b41-6aa2-4368-acf5-fc9f97ae1548"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 09:30:03 crc kubenswrapper[4795]: I1129 09:30:03.742887 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42c27b41-6aa2-4368-acf5-fc9f97ae1548-kube-api-access-fvntm" (OuterVolumeSpecName: "kube-api-access-fvntm") pod "42c27b41-6aa2-4368-acf5-fc9f97ae1548" (UID: "42c27b41-6aa2-4368-acf5-fc9f97ae1548"). InnerVolumeSpecName "kube-api-access-fvntm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 09:30:03 crc kubenswrapper[4795]: I1129 09:30:03.834249 4795 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/42c27b41-6aa2-4368-acf5-fc9f97ae1548-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 29 09:30:03 crc kubenswrapper[4795]: I1129 09:30:03.834294 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvntm\" (UniqueName: \"kubernetes.io/projected/42c27b41-6aa2-4368-acf5-fc9f97ae1548-kube-api-access-fvntm\") on node \"crc\" DevicePath \"\"" Nov 29 09:30:04 crc kubenswrapper[4795]: I1129 09:30:04.178348 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406810-vpd6q" event={"ID":"42c27b41-6aa2-4368-acf5-fc9f97ae1548","Type":"ContainerDied","Data":"ce676d4fdc48a0945f04458845e8fb544701b44b2f6e3e0768cb8a1e7a4fc995"} Nov 29 09:30:04 crc kubenswrapper[4795]: I1129 09:30:04.178759 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce676d4fdc48a0945f04458845e8fb544701b44b2f6e3e0768cb8a1e7a4fc995" Nov 29 09:30:04 crc kubenswrapper[4795]: I1129 09:30:04.178526 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406810-vpd6q" Nov 29 09:30:04 crc kubenswrapper[4795]: I1129 09:30:04.689401 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406765-mpfc5"] Nov 29 09:30:04 crc kubenswrapper[4795]: I1129 09:30:04.700271 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406765-mpfc5"] Nov 29 09:30:06 crc kubenswrapper[4795]: I1129 09:30:06.295573 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00916148-0906-47ee-b3f1-b243256135ed" path="/var/lib/kubelet/pods/00916148-0906-47ee-b3f1-b243256135ed/volumes" Nov 29 09:30:14 crc kubenswrapper[4795]: I1129 09:30:14.548295 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-585cfc87fc-tt7jf_48d9911d-3ed8-4474-9537-cbfcb462dd44/kube-rbac-proxy/0.log" Nov 29 09:30:14 crc kubenswrapper[4795]: I1129 09:30:14.657761 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-585cfc87fc-tt7jf_48d9911d-3ed8-4474-9537-cbfcb462dd44/manager/0.log" Nov 29 09:30:29 crc kubenswrapper[4795]: I1129 09:30:29.952806 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-ff9846bd-gh2bv_ba3b17d1-c4c3-4575-b722-c8134c6cd690/cluster-logging-operator/0.log" Nov 29 09:30:30 crc kubenswrapper[4795]: I1129 09:30:30.163828 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-plv7b_a6a9c50b-4559-45f6-a382-a236c88aa72e/collector/0.log" Nov 29 09:30:30 crc kubenswrapper[4795]: I1129 09:30:30.240948 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_65b29f76-cf84-4166-b1b1-17927cbfd032/loki-compactor/0.log" Nov 29 09:30:30 crc kubenswrapper[4795]: 
I1129 09:30:30.366560 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-76cc67bf56-2j6fx_2fd41086-3cec-46c6-a4ed-82885461095c/loki-distributor/0.log" Nov 29 09:30:30 crc kubenswrapper[4795]: I1129 09:30:30.445144 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-575bf4587d-cslrg_04a66bf0-d1a8-4bf7-85e4-8974cc247cd0/gateway/0.log" Nov 29 09:30:30 crc kubenswrapper[4795]: I1129 09:30:30.528093 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-575bf4587d-cslrg_04a66bf0-d1a8-4bf7-85e4-8974cc247cd0/opa/0.log" Nov 29 09:30:30 crc kubenswrapper[4795]: I1129 09:30:30.665977 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-575bf4587d-fvfcs_c5d428b2-eb39-4936-819d-08321d96d015/gateway/0.log" Nov 29 09:30:30 crc kubenswrapper[4795]: I1129 09:30:30.730529 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-575bf4587d-fvfcs_c5d428b2-eb39-4936-819d-08321d96d015/opa/0.log" Nov 29 09:30:30 crc kubenswrapper[4795]: I1129 09:30:30.746994 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_419b2242-f6e9-429b-89fe-a8e499b5952b/loki-index-gateway/0.log" Nov 29 09:30:30 crc kubenswrapper[4795]: I1129 09:30:30.978964 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-5895d59bb8-jqcx2_996554b0-3876-4c69-be10-a5f2c4a5c2e4/loki-querier/0.log" Nov 29 09:30:30 crc kubenswrapper[4795]: I1129 09:30:30.996280 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_ec05fb5e-2fc9-424a-a305-4ac1734df8d5/loki-ingester/0.log" Nov 29 09:30:31 crc kubenswrapper[4795]: I1129 09:30:31.172798 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_logging-loki-query-frontend-84558f7c9f-g2h7b_73ef818d-4038-418e-87e6-a16224e788c5/loki-query-frontend/0.log" Nov 29 09:30:35 crc kubenswrapper[4795]: I1129 09:30:35.697734 4795 scope.go:117] "RemoveContainer" containerID="bcf34fe16286a7fa1d756b0c7332935520b5c06403ad64bf2ec70ea250b0afc7" Nov 29 09:30:47 crc kubenswrapper[4795]: I1129 09:30:47.449454 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-wc858_402aa6f5-7950-4290-ab83-bd5bafa2a8d7/kube-rbac-proxy/0.log" Nov 29 09:30:47 crc kubenswrapper[4795]: I1129 09:30:47.635350 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-wc858_402aa6f5-7950-4290-ab83-bd5bafa2a8d7/controller/0.log" Nov 29 09:30:47 crc kubenswrapper[4795]: I1129 09:30:47.652946 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tkql9_166b46fd-e087-476d-9491-d173847e5fb9/cp-frr-files/0.log" Nov 29 09:30:47 crc kubenswrapper[4795]: I1129 09:30:47.961913 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tkql9_166b46fd-e087-476d-9491-d173847e5fb9/cp-frr-files/0.log" Nov 29 09:30:47 crc kubenswrapper[4795]: I1129 09:30:47.974703 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tkql9_166b46fd-e087-476d-9491-d173847e5fb9/cp-reloader/0.log" Nov 29 09:30:47 crc kubenswrapper[4795]: I1129 09:30:47.985807 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tkql9_166b46fd-e087-476d-9491-d173847e5fb9/cp-metrics/0.log" Nov 29 09:30:47 crc kubenswrapper[4795]: I1129 09:30:47.998250 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tkql9_166b46fd-e087-476d-9491-d173847e5fb9/cp-reloader/0.log" Nov 29 09:30:48 crc kubenswrapper[4795]: I1129 09:30:48.174915 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-tkql9_166b46fd-e087-476d-9491-d173847e5fb9/cp-reloader/0.log" Nov 29 09:30:48 crc kubenswrapper[4795]: I1129 09:30:48.197974 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tkql9_166b46fd-e087-476d-9491-d173847e5fb9/cp-metrics/0.log" Nov 29 09:30:48 crc kubenswrapper[4795]: I1129 09:30:48.212113 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tkql9_166b46fd-e087-476d-9491-d173847e5fb9/cp-frr-files/0.log" Nov 29 09:30:48 crc kubenswrapper[4795]: I1129 09:30:48.248653 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tkql9_166b46fd-e087-476d-9491-d173847e5fb9/cp-metrics/0.log" Nov 29 09:30:48 crc kubenswrapper[4795]: I1129 09:30:48.410809 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tkql9_166b46fd-e087-476d-9491-d173847e5fb9/cp-reloader/0.log" Nov 29 09:30:48 crc kubenswrapper[4795]: I1129 09:30:48.460361 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tkql9_166b46fd-e087-476d-9491-d173847e5fb9/cp-frr-files/0.log" Nov 29 09:30:48 crc kubenswrapper[4795]: I1129 09:30:48.467574 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tkql9_166b46fd-e087-476d-9491-d173847e5fb9/cp-metrics/0.log" Nov 29 09:30:48 crc kubenswrapper[4795]: I1129 09:30:48.470317 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tkql9_166b46fd-e087-476d-9491-d173847e5fb9/controller/0.log" Nov 29 09:30:48 crc kubenswrapper[4795]: I1129 09:30:48.651214 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tkql9_166b46fd-e087-476d-9491-d173847e5fb9/frr-metrics/0.log" Nov 29 09:30:48 crc kubenswrapper[4795]: I1129 09:30:48.696948 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-tkql9_166b46fd-e087-476d-9491-d173847e5fb9/kube-rbac-proxy-frr/0.log" Nov 29 09:30:48 crc kubenswrapper[4795]: I1129 09:30:48.743035 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tkql9_166b46fd-e087-476d-9491-d173847e5fb9/kube-rbac-proxy/0.log" Nov 29 09:30:48 crc kubenswrapper[4795]: I1129 09:30:48.888293 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tkql9_166b46fd-e087-476d-9491-d173847e5fb9/reloader/0.log" Nov 29 09:30:49 crc kubenswrapper[4795]: I1129 09:30:49.057869 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-gbxmx_cc6bc09a-5187-429a-8f93-1f57bb5cd0d0/frr-k8s-webhook-server/0.log" Nov 29 09:30:49 crc kubenswrapper[4795]: I1129 09:30:49.259637 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-77967fb544-pfl5d_88516493-98c7-4365-9293-73456d8d0913/manager/0.log" Nov 29 09:30:49 crc kubenswrapper[4795]: I1129 09:30:49.434393 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-85bfb995d5-2snm7_00a1cd25-da5a-4ff5-b0d0-632a4ccdc0a3/webhook-server/0.log" Nov 29 09:30:49 crc kubenswrapper[4795]: I1129 09:30:49.518553 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-9ckmx_c433634f-86e7-44a7-9dfa-e0d09a1f5747/kube-rbac-proxy/0.log" Nov 29 09:30:50 crc kubenswrapper[4795]: I1129 09:30:50.244696 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-9ckmx_c433634f-86e7-44a7-9dfa-e0d09a1f5747/speaker/0.log" Nov 29 09:30:50 crc kubenswrapper[4795]: I1129 09:30:50.713236 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tkql9_166b46fd-e087-476d-9491-d173847e5fb9/frr/0.log" Nov 29 09:31:04 crc kubenswrapper[4795]: I1129 09:31:04.314925 4795 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw_bf4954ba-ab7e-4c71-af52-f1cf638d0a0b/util/0.log" Nov 29 09:31:04 crc kubenswrapper[4795]: I1129 09:31:04.525368 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw_bf4954ba-ab7e-4c71-af52-f1cf638d0a0b/util/0.log" Nov 29 09:31:04 crc kubenswrapper[4795]: I1129 09:31:04.556504 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw_bf4954ba-ab7e-4c71-af52-f1cf638d0a0b/pull/0.log" Nov 29 09:31:04 crc kubenswrapper[4795]: I1129 09:31:04.566165 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw_bf4954ba-ab7e-4c71-af52-f1cf638d0a0b/pull/0.log" Nov 29 09:31:04 crc kubenswrapper[4795]: I1129 09:31:04.781113 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw_bf4954ba-ab7e-4c71-af52-f1cf638d0a0b/pull/0.log" Nov 29 09:31:04 crc kubenswrapper[4795]: I1129 09:31:04.809239 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw_bf4954ba-ab7e-4c71-af52-f1cf638d0a0b/util/0.log" Nov 29 09:31:04 crc kubenswrapper[4795]: I1129 09:31:04.823128 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw_bf4954ba-ab7e-4c71-af52-f1cf638d0a0b/extract/0.log" Nov 29 09:31:04 crc kubenswrapper[4795]: I1129 09:31:04.976567 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88_0830d1da-b370-4d0d-8418-7dca445d0ca5/util/0.log" Nov 29 09:31:05 crc kubenswrapper[4795]: I1129 09:31:05.231001 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88_0830d1da-b370-4d0d-8418-7dca445d0ca5/pull/0.log" Nov 29 09:31:05 crc kubenswrapper[4795]: I1129 09:31:05.266303 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88_0830d1da-b370-4d0d-8418-7dca445d0ca5/pull/0.log" Nov 29 09:31:05 crc kubenswrapper[4795]: I1129 09:31:05.316963 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88_0830d1da-b370-4d0d-8418-7dca445d0ca5/util/0.log" Nov 29 09:31:05 crc kubenswrapper[4795]: I1129 09:31:05.490370 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88_0830d1da-b370-4d0d-8418-7dca445d0ca5/util/0.log" Nov 29 09:31:05 crc kubenswrapper[4795]: I1129 09:31:05.490928 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88_0830d1da-b370-4d0d-8418-7dca445d0ca5/extract/0.log" Nov 29 09:31:05 crc kubenswrapper[4795]: I1129 09:31:05.503797 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88_0830d1da-b370-4d0d-8418-7dca445d0ca5/pull/0.log" Nov 29 09:31:05 crc kubenswrapper[4795]: I1129 09:31:05.743948 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr_45700729-39c7-4828-81fd-3763836d1dbe/util/0.log" Nov 29 
09:31:05 crc kubenswrapper[4795]: I1129 09:31:05.897368 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr_45700729-39c7-4828-81fd-3763836d1dbe/pull/0.log" Nov 29 09:31:05 crc kubenswrapper[4795]: I1129 09:31:05.911607 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr_45700729-39c7-4828-81fd-3763836d1dbe/pull/0.log" Nov 29 09:31:05 crc kubenswrapper[4795]: I1129 09:31:05.940041 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr_45700729-39c7-4828-81fd-3763836d1dbe/util/0.log" Nov 29 09:31:06 crc kubenswrapper[4795]: I1129 09:31:06.087901 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr_45700729-39c7-4828-81fd-3763836d1dbe/pull/0.log" Nov 29 09:31:06 crc kubenswrapper[4795]: I1129 09:31:06.089023 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr_45700729-39c7-4828-81fd-3763836d1dbe/extract/0.log" Nov 29 09:31:06 crc kubenswrapper[4795]: I1129 09:31:06.115033 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr_45700729-39c7-4828-81fd-3763836d1dbe/util/0.log" Nov 29 09:31:06 crc kubenswrapper[4795]: I1129 09:31:06.314287 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r_45ac49e4-eef2-4cef-bac3-06fb40427b2b/util/0.log" Nov 29 09:31:06 crc kubenswrapper[4795]: I1129 09:31:06.523840 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r_45ac49e4-eef2-4cef-bac3-06fb40427b2b/util/0.log" Nov 29 09:31:06 crc kubenswrapper[4795]: I1129 09:31:06.570472 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r_45ac49e4-eef2-4cef-bac3-06fb40427b2b/pull/0.log" Nov 29 09:31:06 crc kubenswrapper[4795]: I1129 09:31:06.618031 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r_45ac49e4-eef2-4cef-bac3-06fb40427b2b/pull/0.log" Nov 29 09:31:06 crc kubenswrapper[4795]: I1129 09:31:06.719413 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r_45ac49e4-eef2-4cef-bac3-06fb40427b2b/util/0.log" Nov 29 09:31:06 crc kubenswrapper[4795]: I1129 09:31:06.802026 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r_45ac49e4-eef2-4cef-bac3-06fb40427b2b/pull/0.log" Nov 29 09:31:06 crc kubenswrapper[4795]: I1129 09:31:06.880709 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r_45ac49e4-eef2-4cef-bac3-06fb40427b2b/extract/0.log" Nov 29 09:31:07 crc kubenswrapper[4795]: I1129 09:31:07.060933 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892_1be51a9d-eb05-4b33-85aa-2134496eb1b6/util/0.log" Nov 29 09:31:07 crc kubenswrapper[4795]: I1129 09:31:07.229528 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892_1be51a9d-eb05-4b33-85aa-2134496eb1b6/pull/0.log" Nov 29 
09:31:07 crc kubenswrapper[4795]: I1129 09:31:07.267482 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892_1be51a9d-eb05-4b33-85aa-2134496eb1b6/pull/0.log" Nov 29 09:31:07 crc kubenswrapper[4795]: I1129 09:31:07.286352 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892_1be51a9d-eb05-4b33-85aa-2134496eb1b6/util/0.log" Nov 29 09:31:07 crc kubenswrapper[4795]: I1129 09:31:07.490967 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892_1be51a9d-eb05-4b33-85aa-2134496eb1b6/extract/0.log" Nov 29 09:31:07 crc kubenswrapper[4795]: I1129 09:31:07.501084 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892_1be51a9d-eb05-4b33-85aa-2134496eb1b6/pull/0.log" Nov 29 09:31:07 crc kubenswrapper[4795]: I1129 09:31:07.517141 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892_1be51a9d-eb05-4b33-85aa-2134496eb1b6/util/0.log" Nov 29 09:31:07 crc kubenswrapper[4795]: I1129 09:31:07.671973 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8vhrd_edee21c8-0c84-4772-826f-0fa6de3076ba/extract-utilities/0.log" Nov 29 09:31:07 crc kubenswrapper[4795]: I1129 09:31:07.844065 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8vhrd_edee21c8-0c84-4772-826f-0fa6de3076ba/extract-content/0.log" Nov 29 09:31:07 crc kubenswrapper[4795]: I1129 09:31:07.860002 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-8vhrd_edee21c8-0c84-4772-826f-0fa6de3076ba/extract-utilities/0.log" Nov 29 09:31:07 crc kubenswrapper[4795]: I1129 09:31:07.911182 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8vhrd_edee21c8-0c84-4772-826f-0fa6de3076ba/extract-content/0.log" Nov 29 09:31:08 crc kubenswrapper[4795]: I1129 09:31:08.133410 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8vhrd_edee21c8-0c84-4772-826f-0fa6de3076ba/extract-content/0.log" Nov 29 09:31:08 crc kubenswrapper[4795]: I1129 09:31:08.155631 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8vhrd_edee21c8-0c84-4772-826f-0fa6de3076ba/extract-utilities/0.log" Nov 29 09:31:08 crc kubenswrapper[4795]: I1129 09:31:08.170073 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8xl7p_fda09a90-a63a-44bd-a55a-5cc411392f7d/extract-utilities/0.log" Nov 29 09:31:08 crc kubenswrapper[4795]: I1129 09:31:08.411677 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8xl7p_fda09a90-a63a-44bd-a55a-5cc411392f7d/extract-content/0.log" Nov 29 09:31:08 crc kubenswrapper[4795]: I1129 09:31:08.442201 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8xl7p_fda09a90-a63a-44bd-a55a-5cc411392f7d/extract-content/0.log" Nov 29 09:31:08 crc kubenswrapper[4795]: I1129 09:31:08.454867 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8xl7p_fda09a90-a63a-44bd-a55a-5cc411392f7d/extract-utilities/0.log" Nov 29 09:31:08 crc kubenswrapper[4795]: I1129 09:31:08.707776 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-8xl7p_fda09a90-a63a-44bd-a55a-5cc411392f7d/extract-content/0.log" Nov 29 09:31:08 crc kubenswrapper[4795]: I1129 09:31:08.731843 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8xl7p_fda09a90-a63a-44bd-a55a-5cc411392f7d/extract-utilities/0.log" Nov 29 09:31:09 crc kubenswrapper[4795]: I1129 09:31:09.034109 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-nx2gj_d49cd6f6-0b90-4c8f-9e8f-30a52c232522/marketplace-operator/0.log" Nov 29 09:31:09 crc kubenswrapper[4795]: I1129 09:31:09.219813 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kvpxx_18e2d8ec-4739-4097-8772-98689f8d8626/extract-utilities/0.log" Nov 29 09:31:09 crc kubenswrapper[4795]: I1129 09:31:09.425320 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8vhrd_edee21c8-0c84-4772-826f-0fa6de3076ba/registry-server/0.log" Nov 29 09:31:09 crc kubenswrapper[4795]: I1129 09:31:09.459467 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8xl7p_fda09a90-a63a-44bd-a55a-5cc411392f7d/registry-server/0.log" Nov 29 09:31:09 crc kubenswrapper[4795]: I1129 09:31:09.487140 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kvpxx_18e2d8ec-4739-4097-8772-98689f8d8626/extract-utilities/0.log" Nov 29 09:31:09 crc kubenswrapper[4795]: I1129 09:31:09.542482 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kvpxx_18e2d8ec-4739-4097-8772-98689f8d8626/extract-content/0.log" Nov 29 09:31:09 crc kubenswrapper[4795]: I1129 09:31:09.546373 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-kvpxx_18e2d8ec-4739-4097-8772-98689f8d8626/extract-content/0.log" Nov 29 09:31:09 crc kubenswrapper[4795]: I1129 09:31:09.768423 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kvpxx_18e2d8ec-4739-4097-8772-98689f8d8626/extract-content/0.log" Nov 29 09:31:09 crc kubenswrapper[4795]: I1129 09:31:09.799233 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kvpxx_18e2d8ec-4739-4097-8772-98689f8d8626/extract-utilities/0.log" Nov 29 09:31:09 crc kubenswrapper[4795]: I1129 09:31:09.836250 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tr957_8efac1d2-dfa3-48d7-9928-823442690b91/extract-utilities/0.log" Nov 29 09:31:10 crc kubenswrapper[4795]: I1129 09:31:10.050960 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kvpxx_18e2d8ec-4739-4097-8772-98689f8d8626/registry-server/0.log" Nov 29 09:31:10 crc kubenswrapper[4795]: I1129 09:31:10.369630 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tr957_8efac1d2-dfa3-48d7-9928-823442690b91/extract-content/0.log" Nov 29 09:31:10 crc kubenswrapper[4795]: I1129 09:31:10.393844 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tr957_8efac1d2-dfa3-48d7-9928-823442690b91/extract-content/0.log" Nov 29 09:31:10 crc kubenswrapper[4795]: I1129 09:31:10.394905 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tr957_8efac1d2-dfa3-48d7-9928-823442690b91/extract-utilities/0.log" Nov 29 09:31:10 crc kubenswrapper[4795]: I1129 09:31:10.501164 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tr957_8efac1d2-dfa3-48d7-9928-823442690b91/extract-content/0.log" Nov 
29 09:31:10 crc kubenswrapper[4795]: I1129 09:31:10.507978 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tr957_8efac1d2-dfa3-48d7-9928-823442690b91/extract-utilities/0.log" Nov 29 09:31:11 crc kubenswrapper[4795]: I1129 09:31:11.570353 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tr957_8efac1d2-dfa3-48d7-9928-823442690b91/registry-server/0.log" Nov 29 09:31:25 crc kubenswrapper[4795]: I1129 09:31:25.225476 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-xmczn_69e46873-1e0c-4187-810e-584aa956ba47/prometheus-operator/0.log" Nov 29 09:31:25 crc kubenswrapper[4795]: I1129 09:31:25.385107 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8_63008df8-40b1-4ab0-966e-d88d426e3b1b/prometheus-operator-admission-webhook/0.log" Nov 29 09:31:25 crc kubenswrapper[4795]: I1129 09:31:25.455899 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj_5f84b151-fbdd-40bc-9457-ec560370a162/prometheus-operator-admission-webhook/0.log" Nov 29 09:31:25 crc kubenswrapper[4795]: I1129 09:31:25.589376 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-7h8sj_52ff2b6a-cefb-4a70-ac45-9d0c5b9d315d/operator/0.log" Nov 29 09:31:25 crc kubenswrapper[4795]: I1129 09:31:25.687701 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-7d5fb4cbfb-q86dd_5194485f-a306-493d-a1a3-f33030371413/observability-ui-dashboards/0.log" Nov 29 09:31:25 crc kubenswrapper[4795]: I1129 09:31:25.774627 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-rb87j_7cca6c3b-ab29-47c1-94fe-bb8d2ac90062/perses-operator/0.log" Nov 29 09:31:39 crc kubenswrapper[4795]: I1129 09:31:39.433734 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-585cfc87fc-tt7jf_48d9911d-3ed8-4474-9537-cbfcb462dd44/kube-rbac-proxy/0.log" Nov 29 09:31:39 crc kubenswrapper[4795]: I1129 09:31:39.496136 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-585cfc87fc-tt7jf_48d9911d-3ed8-4474-9537-cbfcb462dd44/manager/0.log" Nov 29 09:31:41 crc kubenswrapper[4795]: I1129 09:31:41.940906 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 09:31:41 crc kubenswrapper[4795]: I1129 09:31:41.941480 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 09:32:11 crc kubenswrapper[4795]: I1129 09:32:11.941834 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 09:32:11 crc kubenswrapper[4795]: I1129 09:32:11.942524 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 09:32:35 crc kubenswrapper[4795]: I1129 09:32:35.001497 4795 patch_prober.go:28] interesting pod/logging-loki-gateway-575bf4587d-fvfcs container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 29 09:32:35 crc kubenswrapper[4795]: I1129 09:32:35.002211 4795 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-575bf4587d-fvfcs" podUID="c5d428b2-eb39-4936-819d-08321d96d015" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 29 09:32:41 crc kubenswrapper[4795]: I1129 09:32:41.941370 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 09:32:41 crc kubenswrapper[4795]: I1129 09:32:41.943440 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 09:32:41 crc kubenswrapper[4795]: I1129 09:32:41.943578 4795 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" Nov 29 09:32:41 crc kubenswrapper[4795]: I1129 09:32:41.944661 4795 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a0961216ed7f33ef63b3697986049c3d884283e351de87a69a04219f3b1944a4"} pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 09:32:41 crc kubenswrapper[4795]: I1129 09:32:41.945033 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" containerID="cri-o://a0961216ed7f33ef63b3697986049c3d884283e351de87a69a04219f3b1944a4" gracePeriod=600 Nov 29 09:32:42 crc kubenswrapper[4795]: E1129 09:32:42.077717 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:32:42 crc kubenswrapper[4795]: I1129 09:32:42.313079 4795 generic.go:334] "Generic (PLEG): container finished" podID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerID="a0961216ed7f33ef63b3697986049c3d884283e351de87a69a04219f3b1944a4" exitCode=0 Nov 29 09:32:42 crc kubenswrapper[4795]: I1129 09:32:42.313140 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerDied","Data":"a0961216ed7f33ef63b3697986049c3d884283e351de87a69a04219f3b1944a4"} Nov 29 09:32:42 crc kubenswrapper[4795]: I1129 09:32:42.313546 4795 scope.go:117] "RemoveContainer" 
containerID="38542a01fd0557a5c242f6a842e56a1fb659b4b79c57d243f879c3de7726e808" Nov 29 09:32:42 crc kubenswrapper[4795]: I1129 09:32:42.314405 4795 scope.go:117] "RemoveContainer" containerID="a0961216ed7f33ef63b3697986049c3d884283e351de87a69a04219f3b1944a4" Nov 29 09:32:42 crc kubenswrapper[4795]: E1129 09:32:42.314696 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:32:57 crc kubenswrapper[4795]: I1129 09:32:57.278165 4795 scope.go:117] "RemoveContainer" containerID="a0961216ed7f33ef63b3697986049c3d884283e351de87a69a04219f3b1944a4" Nov 29 09:32:57 crc kubenswrapper[4795]: E1129 09:32:57.279229 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:33:12 crc kubenswrapper[4795]: I1129 09:33:12.276673 4795 scope.go:117] "RemoveContainer" containerID="a0961216ed7f33ef63b3697986049c3d884283e351de87a69a04219f3b1944a4" Nov 29 09:33:12 crc kubenswrapper[4795]: E1129 09:33:12.279231 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:33:23 crc kubenswrapper[4795]: I1129 09:33:23.276150 4795 scope.go:117] "RemoveContainer" containerID="a0961216ed7f33ef63b3697986049c3d884283e351de87a69a04219f3b1944a4" Nov 29 09:33:23 crc kubenswrapper[4795]: E1129 09:33:23.277873 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:33:31 crc kubenswrapper[4795]: I1129 09:33:31.886104 4795 generic.go:334] "Generic (PLEG): container finished" podID="8ca4938e-5980-4d8a-98f4-e379b958f3e7" containerID="701b2d05de16b7ffd060c0920f1dcd05dc7c6605c2b84c2cf334e54fd26b6186" exitCode=0 Nov 29 09:33:31 crc kubenswrapper[4795]: I1129 09:33:31.886891 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qq8lk/must-gather-kwlk8" event={"ID":"8ca4938e-5980-4d8a-98f4-e379b958f3e7","Type":"ContainerDied","Data":"701b2d05de16b7ffd060c0920f1dcd05dc7c6605c2b84c2cf334e54fd26b6186"} Nov 29 09:33:31 crc kubenswrapper[4795]: I1129 09:33:31.888480 4795 scope.go:117] "RemoveContainer" containerID="701b2d05de16b7ffd060c0920f1dcd05dc7c6605c2b84c2cf334e54fd26b6186" Nov 29 09:33:32 crc kubenswrapper[4795]: I1129 09:33:32.231304 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qq8lk_must-gather-kwlk8_8ca4938e-5980-4d8a-98f4-e379b958f3e7/gather/0.log" Nov 29 09:33:34 crc kubenswrapper[4795]: I1129 09:33:34.309770 4795 scope.go:117] "RemoveContainer" containerID="a0961216ed7f33ef63b3697986049c3d884283e351de87a69a04219f3b1944a4" Nov 29 09:33:34 crc kubenswrapper[4795]: E1129 
09:33:34.310368 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:33:35 crc kubenswrapper[4795]: I1129 09:33:35.891171 4795 scope.go:117] "RemoveContainer" containerID="4907b8cc129e963e766d44001254a2d5bb59508eaa928cadbf46bbc6554bc35d" Nov 29 09:33:41 crc kubenswrapper[4795]: I1129 09:33:41.499431 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qq8lk/must-gather-kwlk8"] Nov 29 09:33:41 crc kubenswrapper[4795]: I1129 09:33:41.500629 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-qq8lk/must-gather-kwlk8" podUID="8ca4938e-5980-4d8a-98f4-e379b958f3e7" containerName="copy" containerID="cri-o://1b8c034f730364515afee56e054c8860b9d06819ad971b33c9be14bbfe3db16b" gracePeriod=2 Nov 29 09:33:41 crc kubenswrapper[4795]: I1129 09:33:41.513389 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qq8lk/must-gather-kwlk8"] Nov 29 09:33:42 crc kubenswrapper[4795]: I1129 09:33:42.053403 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qq8lk_must-gather-kwlk8_8ca4938e-5980-4d8a-98f4-e379b958f3e7/copy/0.log" Nov 29 09:33:42 crc kubenswrapper[4795]: I1129 09:33:42.054155 4795 generic.go:334] "Generic (PLEG): container finished" podID="8ca4938e-5980-4d8a-98f4-e379b958f3e7" containerID="1b8c034f730364515afee56e054c8860b9d06819ad971b33c9be14bbfe3db16b" exitCode=143 Nov 29 09:33:42 crc kubenswrapper[4795]: I1129 09:33:42.170794 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-qq8lk_must-gather-kwlk8_8ca4938e-5980-4d8a-98f4-e379b958f3e7/copy/0.log" Nov 29 09:33:42 crc kubenswrapper[4795]: I1129 09:33:42.171449 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qq8lk/must-gather-kwlk8" Nov 29 09:33:42 crc kubenswrapper[4795]: I1129 09:33:42.272982 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8ca4938e-5980-4d8a-98f4-e379b958f3e7-must-gather-output\") pod \"8ca4938e-5980-4d8a-98f4-e379b958f3e7\" (UID: \"8ca4938e-5980-4d8a-98f4-e379b958f3e7\") " Nov 29 09:33:42 crc kubenswrapper[4795]: I1129 09:33:42.273061 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhmps\" (UniqueName: \"kubernetes.io/projected/8ca4938e-5980-4d8a-98f4-e379b958f3e7-kube-api-access-zhmps\") pod \"8ca4938e-5980-4d8a-98f4-e379b958f3e7\" (UID: \"8ca4938e-5980-4d8a-98f4-e379b958f3e7\") " Nov 29 09:33:42 crc kubenswrapper[4795]: I1129 09:33:42.281288 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ca4938e-5980-4d8a-98f4-e379b958f3e7-kube-api-access-zhmps" (OuterVolumeSpecName: "kube-api-access-zhmps") pod "8ca4938e-5980-4d8a-98f4-e379b958f3e7" (UID: "8ca4938e-5980-4d8a-98f4-e379b958f3e7"). InnerVolumeSpecName "kube-api-access-zhmps". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 09:33:42 crc kubenswrapper[4795]: I1129 09:33:42.374950 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhmps\" (UniqueName: \"kubernetes.io/projected/8ca4938e-5980-4d8a-98f4-e379b958f3e7-kube-api-access-zhmps\") on node \"crc\" DevicePath \"\"" Nov 29 09:33:42 crc kubenswrapper[4795]: I1129 09:33:42.453370 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ca4938e-5980-4d8a-98f4-e379b958f3e7-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "8ca4938e-5980-4d8a-98f4-e379b958f3e7" (UID: "8ca4938e-5980-4d8a-98f4-e379b958f3e7"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 09:33:42 crc kubenswrapper[4795]: I1129 09:33:42.477540 4795 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8ca4938e-5980-4d8a-98f4-e379b958f3e7-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 29 09:33:43 crc kubenswrapper[4795]: I1129 09:33:43.067399 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qq8lk_must-gather-kwlk8_8ca4938e-5980-4d8a-98f4-e379b958f3e7/copy/0.log" Nov 29 09:33:43 crc kubenswrapper[4795]: I1129 09:33:43.068261 4795 scope.go:117] "RemoveContainer" containerID="1b8c034f730364515afee56e054c8860b9d06819ad971b33c9be14bbfe3db16b" Nov 29 09:33:43 crc kubenswrapper[4795]: I1129 09:33:43.068279 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qq8lk/must-gather-kwlk8" Nov 29 09:33:43 crc kubenswrapper[4795]: I1129 09:33:43.110424 4795 scope.go:117] "RemoveContainer" containerID="701b2d05de16b7ffd060c0920f1dcd05dc7c6605c2b84c2cf334e54fd26b6186" Nov 29 09:33:44 crc kubenswrapper[4795]: I1129 09:33:44.295997 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ca4938e-5980-4d8a-98f4-e379b958f3e7" path="/var/lib/kubelet/pods/8ca4938e-5980-4d8a-98f4-e379b958f3e7/volumes" Nov 29 09:33:49 crc kubenswrapper[4795]: I1129 09:33:49.275784 4795 scope.go:117] "RemoveContainer" containerID="a0961216ed7f33ef63b3697986049c3d884283e351de87a69a04219f3b1944a4" Nov 29 09:33:49 crc kubenswrapper[4795]: E1129 09:33:49.276571 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:34:04 crc kubenswrapper[4795]: I1129 09:34:04.290718 4795 scope.go:117] "RemoveContainer" containerID="a0961216ed7f33ef63b3697986049c3d884283e351de87a69a04219f3b1944a4" Nov 29 09:34:04 crc kubenswrapper[4795]: E1129 09:34:04.291618 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:34:19 crc kubenswrapper[4795]: I1129 09:34:19.275895 4795 scope.go:117] "RemoveContainer" 
containerID="a0961216ed7f33ef63b3697986049c3d884283e351de87a69a04219f3b1944a4" Nov 29 09:34:19 crc kubenswrapper[4795]: E1129 09:34:19.277076 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:34:34 crc kubenswrapper[4795]: I1129 09:34:34.311199 4795 scope.go:117] "RemoveContainer" containerID="a0961216ed7f33ef63b3697986049c3d884283e351de87a69a04219f3b1944a4" Nov 29 09:34:34 crc kubenswrapper[4795]: E1129 09:34:34.312136 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:34:35 crc kubenswrapper[4795]: I1129 09:34:35.961976 4795 scope.go:117] "RemoveContainer" containerID="a811e8237455deddad83e479548a0339c6b643e563fd6c0ed8d84a784031b25d" Nov 29 09:34:46 crc kubenswrapper[4795]: I1129 09:34:46.276068 4795 scope.go:117] "RemoveContainer" containerID="a0961216ed7f33ef63b3697986049c3d884283e351de87a69a04219f3b1944a4" Nov 29 09:34:46 crc kubenswrapper[4795]: E1129 09:34:46.277142 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:34:57 crc kubenswrapper[4795]: I1129 09:34:57.275947 4795 scope.go:117] "RemoveContainer" containerID="a0961216ed7f33ef63b3697986049c3d884283e351de87a69a04219f3b1944a4" Nov 29 09:34:57 crc kubenswrapper[4795]: E1129 09:34:57.276778 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:35:08 crc kubenswrapper[4795]: I1129 09:35:08.275989 4795 scope.go:117] "RemoveContainer" containerID="a0961216ed7f33ef63b3697986049c3d884283e351de87a69a04219f3b1944a4" Nov 29 09:35:08 crc kubenswrapper[4795]: E1129 09:35:08.277017 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:35:22 crc kubenswrapper[4795]: I1129 09:35:22.280917 4795 scope.go:117] "RemoveContainer" containerID="a0961216ed7f33ef63b3697986049c3d884283e351de87a69a04219f3b1944a4" Nov 29 09:35:22 crc kubenswrapper[4795]: E1129 09:35:22.282569 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:35:33 crc kubenswrapper[4795]: I1129 09:35:33.277215 4795 scope.go:117] "RemoveContainer" containerID="a0961216ed7f33ef63b3697986049c3d884283e351de87a69a04219f3b1944a4" Nov 29 09:35:33 crc kubenswrapper[4795]: E1129 09:35:33.278527 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:35:35 crc kubenswrapper[4795]: I1129 09:35:35.739186 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wm7gn"] Nov 29 09:35:35 crc kubenswrapper[4795]: E1129 09:35:35.740012 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ca4938e-5980-4d8a-98f4-e379b958f3e7" containerName="gather" Nov 29 09:35:35 crc kubenswrapper[4795]: I1129 09:35:35.740029 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ca4938e-5980-4d8a-98f4-e379b958f3e7" containerName="gather" Nov 29 09:35:35 crc kubenswrapper[4795]: E1129 09:35:35.740078 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ca4938e-5980-4d8a-98f4-e379b958f3e7" containerName="copy" Nov 29 09:35:35 crc kubenswrapper[4795]: I1129 09:35:35.740084 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ca4938e-5980-4d8a-98f4-e379b958f3e7" containerName="copy" Nov 29 09:35:35 crc kubenswrapper[4795]: E1129 09:35:35.740099 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42c27b41-6aa2-4368-acf5-fc9f97ae1548" 
containerName="collect-profiles" Nov 29 09:35:35 crc kubenswrapper[4795]: I1129 09:35:35.740105 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c27b41-6aa2-4368-acf5-fc9f97ae1548" containerName="collect-profiles" Nov 29 09:35:35 crc kubenswrapper[4795]: I1129 09:35:35.740306 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ca4938e-5980-4d8a-98f4-e379b958f3e7" containerName="gather" Nov 29 09:35:35 crc kubenswrapper[4795]: I1129 09:35:35.740322 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="42c27b41-6aa2-4368-acf5-fc9f97ae1548" containerName="collect-profiles" Nov 29 09:35:35 crc kubenswrapper[4795]: I1129 09:35:35.740337 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ca4938e-5980-4d8a-98f4-e379b958f3e7" containerName="copy" Nov 29 09:35:35 crc kubenswrapper[4795]: I1129 09:35:35.742219 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wm7gn" Nov 29 09:35:35 crc kubenswrapper[4795]: I1129 09:35:35.760364 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wm7gn"] Nov 29 09:35:35 crc kubenswrapper[4795]: I1129 09:35:35.926216 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af282153-6d72-48b7-845a-76b7527f6912-catalog-content\") pod \"redhat-marketplace-wm7gn\" (UID: \"af282153-6d72-48b7-845a-76b7527f6912\") " pod="openshift-marketplace/redhat-marketplace-wm7gn" Nov 29 09:35:35 crc kubenswrapper[4795]: I1129 09:35:35.926735 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af282153-6d72-48b7-845a-76b7527f6912-utilities\") pod \"redhat-marketplace-wm7gn\" (UID: \"af282153-6d72-48b7-845a-76b7527f6912\") " pod="openshift-marketplace/redhat-marketplace-wm7gn" Nov 
29 09:35:35 crc kubenswrapper[4795]: I1129 09:35:35.926889 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh42z\" (UniqueName: \"kubernetes.io/projected/af282153-6d72-48b7-845a-76b7527f6912-kube-api-access-rh42z\") pod \"redhat-marketplace-wm7gn\" (UID: \"af282153-6d72-48b7-845a-76b7527f6912\") " pod="openshift-marketplace/redhat-marketplace-wm7gn" Nov 29 09:35:36 crc kubenswrapper[4795]: I1129 09:35:36.028948 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af282153-6d72-48b7-845a-76b7527f6912-catalog-content\") pod \"redhat-marketplace-wm7gn\" (UID: \"af282153-6d72-48b7-845a-76b7527f6912\") " pod="openshift-marketplace/redhat-marketplace-wm7gn" Nov 29 09:35:36 crc kubenswrapper[4795]: I1129 09:35:36.029194 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af282153-6d72-48b7-845a-76b7527f6912-utilities\") pod \"redhat-marketplace-wm7gn\" (UID: \"af282153-6d72-48b7-845a-76b7527f6912\") " pod="openshift-marketplace/redhat-marketplace-wm7gn" Nov 29 09:35:36 crc kubenswrapper[4795]: I1129 09:35:36.029274 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh42z\" (UniqueName: \"kubernetes.io/projected/af282153-6d72-48b7-845a-76b7527f6912-kube-api-access-rh42z\") pod \"redhat-marketplace-wm7gn\" (UID: \"af282153-6d72-48b7-845a-76b7527f6912\") " pod="openshift-marketplace/redhat-marketplace-wm7gn" Nov 29 09:35:36 crc kubenswrapper[4795]: I1129 09:35:36.029519 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af282153-6d72-48b7-845a-76b7527f6912-utilities\") pod \"redhat-marketplace-wm7gn\" (UID: \"af282153-6d72-48b7-845a-76b7527f6912\") " pod="openshift-marketplace/redhat-marketplace-wm7gn" Nov 29 09:35:36 
crc kubenswrapper[4795]: I1129 09:35:36.029719 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af282153-6d72-48b7-845a-76b7527f6912-catalog-content\") pod \"redhat-marketplace-wm7gn\" (UID: \"af282153-6d72-48b7-845a-76b7527f6912\") " pod="openshift-marketplace/redhat-marketplace-wm7gn" Nov 29 09:35:36 crc kubenswrapper[4795]: I1129 09:35:36.060420 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh42z\" (UniqueName: \"kubernetes.io/projected/af282153-6d72-48b7-845a-76b7527f6912-kube-api-access-rh42z\") pod \"redhat-marketplace-wm7gn\" (UID: \"af282153-6d72-48b7-845a-76b7527f6912\") " pod="openshift-marketplace/redhat-marketplace-wm7gn" Nov 29 09:35:36 crc kubenswrapper[4795]: I1129 09:35:36.063932 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wm7gn" Nov 29 09:35:36 crc kubenswrapper[4795]: I1129 09:35:36.578773 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wm7gn"] Nov 29 09:35:37 crc kubenswrapper[4795]: I1129 09:35:37.559532 4795 generic.go:334] "Generic (PLEG): container finished" podID="af282153-6d72-48b7-845a-76b7527f6912" containerID="839bc44bb18250c2ba2e5e40c90bbcd74f4c45fb8465b8590cffda444256ddd5" exitCode=0 Nov 29 09:35:37 crc kubenswrapper[4795]: I1129 09:35:37.559973 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wm7gn" event={"ID":"af282153-6d72-48b7-845a-76b7527f6912","Type":"ContainerDied","Data":"839bc44bb18250c2ba2e5e40c90bbcd74f4c45fb8465b8590cffda444256ddd5"} Nov 29 09:35:37 crc kubenswrapper[4795]: I1129 09:35:37.560013 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wm7gn" 
event={"ID":"af282153-6d72-48b7-845a-76b7527f6912","Type":"ContainerStarted","Data":"d90738570a60e305989f3bac01a537f92776460be6be08e05385b38d16847b41"} Nov 29 09:35:37 crc kubenswrapper[4795]: I1129 09:35:37.562953 4795 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 09:35:39 crc kubenswrapper[4795]: I1129 09:35:39.581404 4795 generic.go:334] "Generic (PLEG): container finished" podID="af282153-6d72-48b7-845a-76b7527f6912" containerID="8ad5b736f5ceb82259291f1468e8ac09ffc3ed5dd004abecaf1d8ac9fa2b7b54" exitCode=0 Nov 29 09:35:39 crc kubenswrapper[4795]: I1129 09:35:39.581493 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wm7gn" event={"ID":"af282153-6d72-48b7-845a-76b7527f6912","Type":"ContainerDied","Data":"8ad5b736f5ceb82259291f1468e8ac09ffc3ed5dd004abecaf1d8ac9fa2b7b54"} Nov 29 09:35:40 crc kubenswrapper[4795]: I1129 09:35:40.592916 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wm7gn" event={"ID":"af282153-6d72-48b7-845a-76b7527f6912","Type":"ContainerStarted","Data":"8bdf131d24eea8ec9698672c0f7c5635ea46c31819e315f010e4bfe751f74555"} Nov 29 09:35:40 crc kubenswrapper[4795]: I1129 09:35:40.629285 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wm7gn" podStartSLOduration=3.119491489 podStartE2EDuration="5.629261998s" podCreationTimestamp="2025-11-29 09:35:35 +0000 UTC" firstStartedPulling="2025-11-29 09:35:37.562268526 +0000 UTC m=+6983.537844316" lastFinishedPulling="2025-11-29 09:35:40.072039035 +0000 UTC m=+6986.047614825" observedRunningTime="2025-11-29 09:35:40.616009233 +0000 UTC m=+6986.591585023" watchObservedRunningTime="2025-11-29 09:35:40.629261998 +0000 UTC m=+6986.604837808" Nov 29 09:35:44 crc kubenswrapper[4795]: I1129 09:35:44.275861 4795 scope.go:117] "RemoveContainer" 
containerID="a0961216ed7f33ef63b3697986049c3d884283e351de87a69a04219f3b1944a4" Nov 29 09:35:44 crc kubenswrapper[4795]: E1129 09:35:44.276680 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:35:46 crc kubenswrapper[4795]: I1129 09:35:46.064768 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wm7gn" Nov 29 09:35:46 crc kubenswrapper[4795]: I1129 09:35:46.065100 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wm7gn" Nov 29 09:35:46 crc kubenswrapper[4795]: I1129 09:35:46.121585 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wm7gn" Nov 29 09:35:46 crc kubenswrapper[4795]: I1129 09:35:46.723491 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wm7gn" Nov 29 09:35:46 crc kubenswrapper[4795]: I1129 09:35:46.782175 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wm7gn"] Nov 29 09:35:48 crc kubenswrapper[4795]: I1129 09:35:48.675716 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wm7gn" podUID="af282153-6d72-48b7-845a-76b7527f6912" containerName="registry-server" containerID="cri-o://8bdf131d24eea8ec9698672c0f7c5635ea46c31819e315f010e4bfe751f74555" gracePeriod=2 Nov 29 09:35:49 crc kubenswrapper[4795]: I1129 09:35:49.261092 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wm7gn" Nov 29 09:35:49 crc kubenswrapper[4795]: I1129 09:35:49.367125 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rh42z\" (UniqueName: \"kubernetes.io/projected/af282153-6d72-48b7-845a-76b7527f6912-kube-api-access-rh42z\") pod \"af282153-6d72-48b7-845a-76b7527f6912\" (UID: \"af282153-6d72-48b7-845a-76b7527f6912\") " Nov 29 09:35:49 crc kubenswrapper[4795]: I1129 09:35:49.367211 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af282153-6d72-48b7-845a-76b7527f6912-catalog-content\") pod \"af282153-6d72-48b7-845a-76b7527f6912\" (UID: \"af282153-6d72-48b7-845a-76b7527f6912\") " Nov 29 09:35:49 crc kubenswrapper[4795]: I1129 09:35:49.367533 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af282153-6d72-48b7-845a-76b7527f6912-utilities\") pod \"af282153-6d72-48b7-845a-76b7527f6912\" (UID: \"af282153-6d72-48b7-845a-76b7527f6912\") " Nov 29 09:35:49 crc kubenswrapper[4795]: I1129 09:35:49.369120 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af282153-6d72-48b7-845a-76b7527f6912-utilities" (OuterVolumeSpecName: "utilities") pod "af282153-6d72-48b7-845a-76b7527f6912" (UID: "af282153-6d72-48b7-845a-76b7527f6912"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 09:35:49 crc kubenswrapper[4795]: I1129 09:35:49.373915 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af282153-6d72-48b7-845a-76b7527f6912-kube-api-access-rh42z" (OuterVolumeSpecName: "kube-api-access-rh42z") pod "af282153-6d72-48b7-845a-76b7527f6912" (UID: "af282153-6d72-48b7-845a-76b7527f6912"). InnerVolumeSpecName "kube-api-access-rh42z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 09:35:49 crc kubenswrapper[4795]: I1129 09:35:49.386941 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af282153-6d72-48b7-845a-76b7527f6912-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af282153-6d72-48b7-845a-76b7527f6912" (UID: "af282153-6d72-48b7-845a-76b7527f6912"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 09:35:49 crc kubenswrapper[4795]: I1129 09:35:49.470654 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af282153-6d72-48b7-845a-76b7527f6912-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 09:35:49 crc kubenswrapper[4795]: I1129 09:35:49.470834 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rh42z\" (UniqueName: \"kubernetes.io/projected/af282153-6d72-48b7-845a-76b7527f6912-kube-api-access-rh42z\") on node \"crc\" DevicePath \"\"" Nov 29 09:35:49 crc kubenswrapper[4795]: I1129 09:35:49.470848 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af282153-6d72-48b7-845a-76b7527f6912-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 09:35:49 crc kubenswrapper[4795]: I1129 09:35:49.688110 4795 generic.go:334] "Generic (PLEG): container finished" podID="af282153-6d72-48b7-845a-76b7527f6912" containerID="8bdf131d24eea8ec9698672c0f7c5635ea46c31819e315f010e4bfe751f74555" exitCode=0 Nov 29 09:35:49 crc kubenswrapper[4795]: I1129 09:35:49.688161 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wm7gn" event={"ID":"af282153-6d72-48b7-845a-76b7527f6912","Type":"ContainerDied","Data":"8bdf131d24eea8ec9698672c0f7c5635ea46c31819e315f010e4bfe751f74555"} Nov 29 09:35:49 crc kubenswrapper[4795]: I1129 09:35:49.688192 4795 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-wm7gn" event={"ID":"af282153-6d72-48b7-845a-76b7527f6912","Type":"ContainerDied","Data":"d90738570a60e305989f3bac01a537f92776460be6be08e05385b38d16847b41"} Nov 29 09:35:49 crc kubenswrapper[4795]: I1129 09:35:49.688200 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wm7gn" Nov 29 09:35:49 crc kubenswrapper[4795]: I1129 09:35:49.688209 4795 scope.go:117] "RemoveContainer" containerID="8bdf131d24eea8ec9698672c0f7c5635ea46c31819e315f010e4bfe751f74555" Nov 29 09:35:49 crc kubenswrapper[4795]: I1129 09:35:49.709034 4795 scope.go:117] "RemoveContainer" containerID="8ad5b736f5ceb82259291f1468e8ac09ffc3ed5dd004abecaf1d8ac9fa2b7b54" Nov 29 09:35:49 crc kubenswrapper[4795]: I1129 09:35:49.731322 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wm7gn"] Nov 29 09:35:49 crc kubenswrapper[4795]: I1129 09:35:49.741656 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wm7gn"] Nov 29 09:35:49 crc kubenswrapper[4795]: I1129 09:35:49.767329 4795 scope.go:117] "RemoveContainer" containerID="839bc44bb18250c2ba2e5e40c90bbcd74f4c45fb8465b8590cffda444256ddd5" Nov 29 09:35:49 crc kubenswrapper[4795]: I1129 09:35:49.792271 4795 scope.go:117] "RemoveContainer" containerID="8bdf131d24eea8ec9698672c0f7c5635ea46c31819e315f010e4bfe751f74555" Nov 29 09:35:49 crc kubenswrapper[4795]: E1129 09:35:49.792719 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bdf131d24eea8ec9698672c0f7c5635ea46c31819e315f010e4bfe751f74555\": container with ID starting with 8bdf131d24eea8ec9698672c0f7c5635ea46c31819e315f010e4bfe751f74555 not found: ID does not exist" containerID="8bdf131d24eea8ec9698672c0f7c5635ea46c31819e315f010e4bfe751f74555" Nov 29 09:35:49 crc kubenswrapper[4795]: I1129 09:35:49.792786 4795 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bdf131d24eea8ec9698672c0f7c5635ea46c31819e315f010e4bfe751f74555"} err="failed to get container status \"8bdf131d24eea8ec9698672c0f7c5635ea46c31819e315f010e4bfe751f74555\": rpc error: code = NotFound desc = could not find container \"8bdf131d24eea8ec9698672c0f7c5635ea46c31819e315f010e4bfe751f74555\": container with ID starting with 8bdf131d24eea8ec9698672c0f7c5635ea46c31819e315f010e4bfe751f74555 not found: ID does not exist" Nov 29 09:35:49 crc kubenswrapper[4795]: I1129 09:35:49.792819 4795 scope.go:117] "RemoveContainer" containerID="8ad5b736f5ceb82259291f1468e8ac09ffc3ed5dd004abecaf1d8ac9fa2b7b54" Nov 29 09:35:49 crc kubenswrapper[4795]: E1129 09:35:49.793356 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ad5b736f5ceb82259291f1468e8ac09ffc3ed5dd004abecaf1d8ac9fa2b7b54\": container with ID starting with 8ad5b736f5ceb82259291f1468e8ac09ffc3ed5dd004abecaf1d8ac9fa2b7b54 not found: ID does not exist" containerID="8ad5b736f5ceb82259291f1468e8ac09ffc3ed5dd004abecaf1d8ac9fa2b7b54" Nov 29 09:35:49 crc kubenswrapper[4795]: I1129 09:35:49.793469 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ad5b736f5ceb82259291f1468e8ac09ffc3ed5dd004abecaf1d8ac9fa2b7b54"} err="failed to get container status \"8ad5b736f5ceb82259291f1468e8ac09ffc3ed5dd004abecaf1d8ac9fa2b7b54\": rpc error: code = NotFound desc = could not find container \"8ad5b736f5ceb82259291f1468e8ac09ffc3ed5dd004abecaf1d8ac9fa2b7b54\": container with ID starting with 8ad5b736f5ceb82259291f1468e8ac09ffc3ed5dd004abecaf1d8ac9fa2b7b54 not found: ID does not exist" Nov 29 09:35:49 crc kubenswrapper[4795]: I1129 09:35:49.793561 4795 scope.go:117] "RemoveContainer" containerID="839bc44bb18250c2ba2e5e40c90bbcd74f4c45fb8465b8590cffda444256ddd5" Nov 29 09:35:49 crc kubenswrapper[4795]: E1129 
09:35:49.793997 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"839bc44bb18250c2ba2e5e40c90bbcd74f4c45fb8465b8590cffda444256ddd5\": container with ID starting with 839bc44bb18250c2ba2e5e40c90bbcd74f4c45fb8465b8590cffda444256ddd5 not found: ID does not exist" containerID="839bc44bb18250c2ba2e5e40c90bbcd74f4c45fb8465b8590cffda444256ddd5" Nov 29 09:35:49 crc kubenswrapper[4795]: I1129 09:35:49.794034 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"839bc44bb18250c2ba2e5e40c90bbcd74f4c45fb8465b8590cffda444256ddd5"} err="failed to get container status \"839bc44bb18250c2ba2e5e40c90bbcd74f4c45fb8465b8590cffda444256ddd5\": rpc error: code = NotFound desc = could not find container \"839bc44bb18250c2ba2e5e40c90bbcd74f4c45fb8465b8590cffda444256ddd5\": container with ID starting with 839bc44bb18250c2ba2e5e40c90bbcd74f4c45fb8465b8590cffda444256ddd5 not found: ID does not exist" Nov 29 09:35:50 crc kubenswrapper[4795]: I1129 09:35:50.304270 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af282153-6d72-48b7-845a-76b7527f6912" path="/var/lib/kubelet/pods/af282153-6d72-48b7-845a-76b7527f6912/volumes" Nov 29 09:35:55 crc kubenswrapper[4795]: I1129 09:35:55.277138 4795 scope.go:117] "RemoveContainer" containerID="a0961216ed7f33ef63b3697986049c3d884283e351de87a69a04219f3b1944a4" Nov 29 09:35:55 crc kubenswrapper[4795]: E1129 09:35:55.278371 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:36:08 crc kubenswrapper[4795]: I1129 09:36:08.276390 
4795 scope.go:117] "RemoveContainer" containerID="a0961216ed7f33ef63b3697986049c3d884283e351de87a69a04219f3b1944a4" Nov 29 09:36:08 crc kubenswrapper[4795]: E1129 09:36:08.277472 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:36:20 crc kubenswrapper[4795]: I1129 09:36:20.276709 4795 scope.go:117] "RemoveContainer" containerID="a0961216ed7f33ef63b3697986049c3d884283e351de87a69a04219f3b1944a4" Nov 29 09:36:20 crc kubenswrapper[4795]: E1129 09:36:20.277545 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:36:31 crc kubenswrapper[4795]: I1129 09:36:31.276931 4795 scope.go:117] "RemoveContainer" containerID="a0961216ed7f33ef63b3697986049c3d884283e351de87a69a04219f3b1944a4" Nov 29 09:36:31 crc kubenswrapper[4795]: E1129 09:36:31.277933 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:36:42 crc kubenswrapper[4795]: I1129 
09:36:42.276978 4795 scope.go:117] "RemoveContainer" containerID="a0961216ed7f33ef63b3697986049c3d884283e351de87a69a04219f3b1944a4" Nov 29 09:36:42 crc kubenswrapper[4795]: E1129 09:36:42.278505 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:36:50 crc kubenswrapper[4795]: I1129 09:36:50.629476 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cpgxf/must-gather-qkqh8"] Nov 29 09:36:50 crc kubenswrapper[4795]: E1129 09:36:50.630424 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af282153-6d72-48b7-845a-76b7527f6912" containerName="extract-content" Nov 29 09:36:50 crc kubenswrapper[4795]: I1129 09:36:50.630438 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="af282153-6d72-48b7-845a-76b7527f6912" containerName="extract-content" Nov 29 09:36:50 crc kubenswrapper[4795]: E1129 09:36:50.630460 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af282153-6d72-48b7-845a-76b7527f6912" containerName="extract-utilities" Nov 29 09:36:50 crc kubenswrapper[4795]: I1129 09:36:50.630467 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="af282153-6d72-48b7-845a-76b7527f6912" containerName="extract-utilities" Nov 29 09:36:50 crc kubenswrapper[4795]: E1129 09:36:50.630501 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af282153-6d72-48b7-845a-76b7527f6912" containerName="registry-server" Nov 29 09:36:50 crc kubenswrapper[4795]: I1129 09:36:50.630508 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="af282153-6d72-48b7-845a-76b7527f6912" containerName="registry-server" Nov 29 
09:36:50 crc kubenswrapper[4795]: I1129 09:36:50.630754 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="af282153-6d72-48b7-845a-76b7527f6912" containerName="registry-server" Nov 29 09:36:50 crc kubenswrapper[4795]: I1129 09:36:50.633066 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cpgxf/must-gather-qkqh8" Nov 29 09:36:50 crc kubenswrapper[4795]: I1129 09:36:50.642844 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-cpgxf"/"kube-root-ca.crt" Nov 29 09:36:50 crc kubenswrapper[4795]: I1129 09:36:50.643127 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-cpgxf"/"openshift-service-ca.crt" Nov 29 09:36:50 crc kubenswrapper[4795]: I1129 09:36:50.667905 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-cpgxf/must-gather-qkqh8"] Nov 29 09:36:50 crc kubenswrapper[4795]: I1129 09:36:50.695541 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/043b9785-4403-44ff-b8d1-e2d279b1ccdb-must-gather-output\") pod \"must-gather-qkqh8\" (UID: \"043b9785-4403-44ff-b8d1-e2d279b1ccdb\") " pod="openshift-must-gather-cpgxf/must-gather-qkqh8" Nov 29 09:36:50 crc kubenswrapper[4795]: I1129 09:36:50.695614 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb2d6\" (UniqueName: \"kubernetes.io/projected/043b9785-4403-44ff-b8d1-e2d279b1ccdb-kube-api-access-vb2d6\") pod \"must-gather-qkqh8\" (UID: \"043b9785-4403-44ff-b8d1-e2d279b1ccdb\") " pod="openshift-must-gather-cpgxf/must-gather-qkqh8" Nov 29 09:36:50 crc kubenswrapper[4795]: I1129 09:36:50.810754 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: 
\"kubernetes.io/empty-dir/043b9785-4403-44ff-b8d1-e2d279b1ccdb-must-gather-output\") pod \"must-gather-qkqh8\" (UID: \"043b9785-4403-44ff-b8d1-e2d279b1ccdb\") " pod="openshift-must-gather-cpgxf/must-gather-qkqh8" Nov 29 09:36:50 crc kubenswrapper[4795]: I1129 09:36:50.810862 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb2d6\" (UniqueName: \"kubernetes.io/projected/043b9785-4403-44ff-b8d1-e2d279b1ccdb-kube-api-access-vb2d6\") pod \"must-gather-qkqh8\" (UID: \"043b9785-4403-44ff-b8d1-e2d279b1ccdb\") " pod="openshift-must-gather-cpgxf/must-gather-qkqh8" Nov 29 09:36:50 crc kubenswrapper[4795]: I1129 09:36:50.812051 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/043b9785-4403-44ff-b8d1-e2d279b1ccdb-must-gather-output\") pod \"must-gather-qkqh8\" (UID: \"043b9785-4403-44ff-b8d1-e2d279b1ccdb\") " pod="openshift-must-gather-cpgxf/must-gather-qkqh8" Nov 29 09:36:50 crc kubenswrapper[4795]: I1129 09:36:50.841228 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb2d6\" (UniqueName: \"kubernetes.io/projected/043b9785-4403-44ff-b8d1-e2d279b1ccdb-kube-api-access-vb2d6\") pod \"must-gather-qkqh8\" (UID: \"043b9785-4403-44ff-b8d1-e2d279b1ccdb\") " pod="openshift-must-gather-cpgxf/must-gather-qkqh8" Nov 29 09:36:50 crc kubenswrapper[4795]: I1129 09:36:50.953894 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cpgxf/must-gather-qkqh8" Nov 29 09:36:51 crc kubenswrapper[4795]: I1129 09:36:51.503656 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-cpgxf/must-gather-qkqh8"] Nov 29 09:36:52 crc kubenswrapper[4795]: I1129 09:36:52.474987 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cpgxf/must-gather-qkqh8" event={"ID":"043b9785-4403-44ff-b8d1-e2d279b1ccdb","Type":"ContainerStarted","Data":"3384d58cb121046efec389f34a14dedbe2291f28bd62d64eb9c03b8791bb8e99"} Nov 29 09:36:52 crc kubenswrapper[4795]: I1129 09:36:52.475578 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cpgxf/must-gather-qkqh8" event={"ID":"043b9785-4403-44ff-b8d1-e2d279b1ccdb","Type":"ContainerStarted","Data":"8b678a3d9a1a31588ab9270df0ab923693284d184c39e9755808d9726a725009"} Nov 29 09:36:52 crc kubenswrapper[4795]: I1129 09:36:52.475618 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cpgxf/must-gather-qkqh8" event={"ID":"043b9785-4403-44ff-b8d1-e2d279b1ccdb","Type":"ContainerStarted","Data":"62720c3a1978b4459fb59250b59fb8414de3b6c0eeb92f131da351e12628314b"} Nov 29 09:36:52 crc kubenswrapper[4795]: I1129 09:36:52.507811 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-cpgxf/must-gather-qkqh8" podStartSLOduration=2.507783008 podStartE2EDuration="2.507783008s" podCreationTimestamp="2025-11-29 09:36:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:36:52.494853412 +0000 UTC m=+7058.470429202" watchObservedRunningTime="2025-11-29 09:36:52.507783008 +0000 UTC m=+7058.483358828" Nov 29 09:36:55 crc kubenswrapper[4795]: I1129 09:36:55.764521 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cpgxf/crc-debug-swjgz"] Nov 29 09:36:55 crc kubenswrapper[4795]: 
I1129 09:36:55.767555 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cpgxf/crc-debug-swjgz" Nov 29 09:36:55 crc kubenswrapper[4795]: I1129 09:36:55.771192 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-cpgxf"/"default-dockercfg-4gtp6" Nov 29 09:36:55 crc kubenswrapper[4795]: I1129 09:36:55.828444 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hdpk\" (UniqueName: \"kubernetes.io/projected/577e560b-e6a5-4f91-a90b-0e29341ea110-kube-api-access-9hdpk\") pod \"crc-debug-swjgz\" (UID: \"577e560b-e6a5-4f91-a90b-0e29341ea110\") " pod="openshift-must-gather-cpgxf/crc-debug-swjgz" Nov 29 09:36:55 crc kubenswrapper[4795]: I1129 09:36:55.828840 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/577e560b-e6a5-4f91-a90b-0e29341ea110-host\") pod \"crc-debug-swjgz\" (UID: \"577e560b-e6a5-4f91-a90b-0e29341ea110\") " pod="openshift-must-gather-cpgxf/crc-debug-swjgz" Nov 29 09:36:55 crc kubenswrapper[4795]: I1129 09:36:55.931728 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hdpk\" (UniqueName: \"kubernetes.io/projected/577e560b-e6a5-4f91-a90b-0e29341ea110-kube-api-access-9hdpk\") pod \"crc-debug-swjgz\" (UID: \"577e560b-e6a5-4f91-a90b-0e29341ea110\") " pod="openshift-must-gather-cpgxf/crc-debug-swjgz" Nov 29 09:36:55 crc kubenswrapper[4795]: I1129 09:36:55.932132 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/577e560b-e6a5-4f91-a90b-0e29341ea110-host\") pod \"crc-debug-swjgz\" (UID: \"577e560b-e6a5-4f91-a90b-0e29341ea110\") " pod="openshift-must-gather-cpgxf/crc-debug-swjgz" Nov 29 09:36:55 crc kubenswrapper[4795]: I1129 09:36:55.932244 4795 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/577e560b-e6a5-4f91-a90b-0e29341ea110-host\") pod \"crc-debug-swjgz\" (UID: \"577e560b-e6a5-4f91-a90b-0e29341ea110\") " pod="openshift-must-gather-cpgxf/crc-debug-swjgz" Nov 29 09:36:55 crc kubenswrapper[4795]: I1129 09:36:55.958288 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hdpk\" (UniqueName: \"kubernetes.io/projected/577e560b-e6a5-4f91-a90b-0e29341ea110-kube-api-access-9hdpk\") pod \"crc-debug-swjgz\" (UID: \"577e560b-e6a5-4f91-a90b-0e29341ea110\") " pod="openshift-must-gather-cpgxf/crc-debug-swjgz" Nov 29 09:36:56 crc kubenswrapper[4795]: I1129 09:36:56.092938 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cpgxf/crc-debug-swjgz" Nov 29 09:36:56 crc kubenswrapper[4795]: W1129 09:36:56.125216 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod577e560b_e6a5_4f91_a90b_0e29341ea110.slice/crio-caed0469e62d67cadaa5e9f9784dac32883fc8bd7f0a9c582b780baafeb952f1 WatchSource:0}: Error finding container caed0469e62d67cadaa5e9f9784dac32883fc8bd7f0a9c582b780baafeb952f1: Status 404 returned error can't find the container with id caed0469e62d67cadaa5e9f9784dac32883fc8bd7f0a9c582b780baafeb952f1 Nov 29 09:36:56 crc kubenswrapper[4795]: I1129 09:36:56.516450 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cpgxf/crc-debug-swjgz" event={"ID":"577e560b-e6a5-4f91-a90b-0e29341ea110","Type":"ContainerStarted","Data":"59d6e219b8000eb0dff07fab472dbca3f018d457ac1d7e054ead4e879ff86496"} Nov 29 09:36:56 crc kubenswrapper[4795]: I1129 09:36:56.516732 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cpgxf/crc-debug-swjgz" event={"ID":"577e560b-e6a5-4f91-a90b-0e29341ea110","Type":"ContainerStarted","Data":"caed0469e62d67cadaa5e9f9784dac32883fc8bd7f0a9c582b780baafeb952f1"} Nov 
29 09:36:56 crc kubenswrapper[4795]: I1129 09:36:56.546386 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-cpgxf/crc-debug-swjgz" podStartSLOduration=1.546370069 podStartE2EDuration="1.546370069s" podCreationTimestamp="2025-11-29 09:36:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:36:56.540773961 +0000 UTC m=+7062.516349751" watchObservedRunningTime="2025-11-29 09:36:56.546370069 +0000 UTC m=+7062.521945859" Nov 29 09:36:57 crc kubenswrapper[4795]: I1129 09:36:57.277053 4795 scope.go:117] "RemoveContainer" containerID="a0961216ed7f33ef63b3697986049c3d884283e351de87a69a04219f3b1944a4" Nov 29 09:36:57 crc kubenswrapper[4795]: E1129 09:36:57.278855 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:37:09 crc kubenswrapper[4795]: I1129 09:37:09.276823 4795 scope.go:117] "RemoveContainer" containerID="a0961216ed7f33ef63b3697986049c3d884283e351de87a69a04219f3b1944a4" Nov 29 09:37:09 crc kubenswrapper[4795]: E1129 09:37:09.278568 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:37:20 crc kubenswrapper[4795]: I1129 09:37:20.276312 4795 scope.go:117] 
"RemoveContainer" containerID="a0961216ed7f33ef63b3697986049c3d884283e351de87a69a04219f3b1944a4" Nov 29 09:37:20 crc kubenswrapper[4795]: E1129 09:37:20.277001 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:37:35 crc kubenswrapper[4795]: I1129 09:37:35.276731 4795 scope.go:117] "RemoveContainer" containerID="a0961216ed7f33ef63b3697986049c3d884283e351de87a69a04219f3b1944a4" Nov 29 09:37:35 crc kubenswrapper[4795]: E1129 09:37:35.277685 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:37:39 crc kubenswrapper[4795]: I1129 09:37:39.998576 4795 generic.go:334] "Generic (PLEG): container finished" podID="577e560b-e6a5-4f91-a90b-0e29341ea110" containerID="59d6e219b8000eb0dff07fab472dbca3f018d457ac1d7e054ead4e879ff86496" exitCode=0 Nov 29 09:37:39 crc kubenswrapper[4795]: I1129 09:37:39.999283 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cpgxf/crc-debug-swjgz" event={"ID":"577e560b-e6a5-4f91-a90b-0e29341ea110","Type":"ContainerDied","Data":"59d6e219b8000eb0dff07fab472dbca3f018d457ac1d7e054ead4e879ff86496"} Nov 29 09:37:41 crc kubenswrapper[4795]: I1129 09:37:41.140354 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cpgxf/crc-debug-swjgz" Nov 29 09:37:41 crc kubenswrapper[4795]: I1129 09:37:41.179515 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cpgxf/crc-debug-swjgz"] Nov 29 09:37:41 crc kubenswrapper[4795]: I1129 09:37:41.191275 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cpgxf/crc-debug-swjgz"] Nov 29 09:37:41 crc kubenswrapper[4795]: I1129 09:37:41.225380 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hdpk\" (UniqueName: \"kubernetes.io/projected/577e560b-e6a5-4f91-a90b-0e29341ea110-kube-api-access-9hdpk\") pod \"577e560b-e6a5-4f91-a90b-0e29341ea110\" (UID: \"577e560b-e6a5-4f91-a90b-0e29341ea110\") " Nov 29 09:37:41 crc kubenswrapper[4795]: I1129 09:37:41.225608 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/577e560b-e6a5-4f91-a90b-0e29341ea110-host\") pod \"577e560b-e6a5-4f91-a90b-0e29341ea110\" (UID: \"577e560b-e6a5-4f91-a90b-0e29341ea110\") " Nov 29 09:37:41 crc kubenswrapper[4795]: I1129 09:37:41.225746 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/577e560b-e6a5-4f91-a90b-0e29341ea110-host" (OuterVolumeSpecName: "host") pod "577e560b-e6a5-4f91-a90b-0e29341ea110" (UID: "577e560b-e6a5-4f91-a90b-0e29341ea110"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 09:37:41 crc kubenswrapper[4795]: I1129 09:37:41.226153 4795 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/577e560b-e6a5-4f91-a90b-0e29341ea110-host\") on node \"crc\" DevicePath \"\"" Nov 29 09:37:41 crc kubenswrapper[4795]: I1129 09:37:41.238784 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/577e560b-e6a5-4f91-a90b-0e29341ea110-kube-api-access-9hdpk" (OuterVolumeSpecName: "kube-api-access-9hdpk") pod "577e560b-e6a5-4f91-a90b-0e29341ea110" (UID: "577e560b-e6a5-4f91-a90b-0e29341ea110"). InnerVolumeSpecName "kube-api-access-9hdpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 09:37:41 crc kubenswrapper[4795]: I1129 09:37:41.329018 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hdpk\" (UniqueName: \"kubernetes.io/projected/577e560b-e6a5-4f91-a90b-0e29341ea110-kube-api-access-9hdpk\") on node \"crc\" DevicePath \"\"" Nov 29 09:37:42 crc kubenswrapper[4795]: I1129 09:37:42.021233 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="caed0469e62d67cadaa5e9f9784dac32883fc8bd7f0a9c582b780baafeb952f1" Nov 29 09:37:42 crc kubenswrapper[4795]: I1129 09:37:42.021690 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cpgxf/crc-debug-swjgz" Nov 29 09:37:42 crc kubenswrapper[4795]: I1129 09:37:42.288269 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="577e560b-e6a5-4f91-a90b-0e29341ea110" path="/var/lib/kubelet/pods/577e560b-e6a5-4f91-a90b-0e29341ea110/volumes" Nov 29 09:37:42 crc kubenswrapper[4795]: I1129 09:37:42.377962 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cpgxf/crc-debug-n22dv"] Nov 29 09:37:42 crc kubenswrapper[4795]: E1129 09:37:42.378468 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="577e560b-e6a5-4f91-a90b-0e29341ea110" containerName="container-00" Nov 29 09:37:42 crc kubenswrapper[4795]: I1129 09:37:42.378485 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="577e560b-e6a5-4f91-a90b-0e29341ea110" containerName="container-00" Nov 29 09:37:42 crc kubenswrapper[4795]: I1129 09:37:42.378882 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="577e560b-e6a5-4f91-a90b-0e29341ea110" containerName="container-00" Nov 29 09:37:42 crc kubenswrapper[4795]: I1129 09:37:42.379981 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cpgxf/crc-debug-n22dv" Nov 29 09:37:42 crc kubenswrapper[4795]: I1129 09:37:42.386378 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-cpgxf"/"default-dockercfg-4gtp6" Nov 29 09:37:42 crc kubenswrapper[4795]: I1129 09:37:42.453071 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/86f7a0a2-336d-4525-b877-0257fed27111-host\") pod \"crc-debug-n22dv\" (UID: \"86f7a0a2-336d-4525-b877-0257fed27111\") " pod="openshift-must-gather-cpgxf/crc-debug-n22dv" Nov 29 09:37:42 crc kubenswrapper[4795]: I1129 09:37:42.453241 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkcxb\" (UniqueName: \"kubernetes.io/projected/86f7a0a2-336d-4525-b877-0257fed27111-kube-api-access-wkcxb\") pod \"crc-debug-n22dv\" (UID: \"86f7a0a2-336d-4525-b877-0257fed27111\") " pod="openshift-must-gather-cpgxf/crc-debug-n22dv" Nov 29 09:37:42 crc kubenswrapper[4795]: I1129 09:37:42.555100 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkcxb\" (UniqueName: \"kubernetes.io/projected/86f7a0a2-336d-4525-b877-0257fed27111-kube-api-access-wkcxb\") pod \"crc-debug-n22dv\" (UID: \"86f7a0a2-336d-4525-b877-0257fed27111\") " pod="openshift-must-gather-cpgxf/crc-debug-n22dv" Nov 29 09:37:42 crc kubenswrapper[4795]: I1129 09:37:42.555248 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/86f7a0a2-336d-4525-b877-0257fed27111-host\") pod \"crc-debug-n22dv\" (UID: \"86f7a0a2-336d-4525-b877-0257fed27111\") " pod="openshift-must-gather-cpgxf/crc-debug-n22dv" Nov 29 09:37:42 crc kubenswrapper[4795]: I1129 09:37:42.555415 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/86f7a0a2-336d-4525-b877-0257fed27111-host\") pod \"crc-debug-n22dv\" (UID: \"86f7a0a2-336d-4525-b877-0257fed27111\") " pod="openshift-must-gather-cpgxf/crc-debug-n22dv" Nov 29 09:37:42 crc kubenswrapper[4795]: I1129 09:37:42.583371 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkcxb\" (UniqueName: \"kubernetes.io/projected/86f7a0a2-336d-4525-b877-0257fed27111-kube-api-access-wkcxb\") pod \"crc-debug-n22dv\" (UID: \"86f7a0a2-336d-4525-b877-0257fed27111\") " pod="openshift-must-gather-cpgxf/crc-debug-n22dv" Nov 29 09:37:42 crc kubenswrapper[4795]: I1129 09:37:42.701794 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cpgxf/crc-debug-n22dv" Nov 29 09:37:43 crc kubenswrapper[4795]: I1129 09:37:43.087387 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cpgxf/crc-debug-n22dv" event={"ID":"86f7a0a2-336d-4525-b877-0257fed27111","Type":"ContainerStarted","Data":"80317178d18194bfda19efae247fcb66905a39e91f0dce1c193d546143a27fac"} Nov 29 09:37:44 crc kubenswrapper[4795]: I1129 09:37:44.097986 4795 generic.go:334] "Generic (PLEG): container finished" podID="86f7a0a2-336d-4525-b877-0257fed27111" containerID="aa64b7eeb4cfd3796dba387209ec8e80d5e5b9d929812371c652bdab794ea5eb" exitCode=0 Nov 29 09:37:44 crc kubenswrapper[4795]: I1129 09:37:44.098061 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cpgxf/crc-debug-n22dv" event={"ID":"86f7a0a2-336d-4525-b877-0257fed27111","Type":"ContainerDied","Data":"aa64b7eeb4cfd3796dba387209ec8e80d5e5b9d929812371c652bdab794ea5eb"} Nov 29 09:37:45 crc kubenswrapper[4795]: I1129 09:37:45.238310 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cpgxf/crc-debug-n22dv" Nov 29 09:37:45 crc kubenswrapper[4795]: I1129 09:37:45.334000 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkcxb\" (UniqueName: \"kubernetes.io/projected/86f7a0a2-336d-4525-b877-0257fed27111-kube-api-access-wkcxb\") pod \"86f7a0a2-336d-4525-b877-0257fed27111\" (UID: \"86f7a0a2-336d-4525-b877-0257fed27111\") " Nov 29 09:37:45 crc kubenswrapper[4795]: I1129 09:37:45.334264 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/86f7a0a2-336d-4525-b877-0257fed27111-host\") pod \"86f7a0a2-336d-4525-b877-0257fed27111\" (UID: \"86f7a0a2-336d-4525-b877-0257fed27111\") " Nov 29 09:37:45 crc kubenswrapper[4795]: I1129 09:37:45.334885 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86f7a0a2-336d-4525-b877-0257fed27111-host" (OuterVolumeSpecName: "host") pod "86f7a0a2-336d-4525-b877-0257fed27111" (UID: "86f7a0a2-336d-4525-b877-0257fed27111"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 09:37:45 crc kubenswrapper[4795]: I1129 09:37:45.340793 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86f7a0a2-336d-4525-b877-0257fed27111-kube-api-access-wkcxb" (OuterVolumeSpecName: "kube-api-access-wkcxb") pod "86f7a0a2-336d-4525-b877-0257fed27111" (UID: "86f7a0a2-336d-4525-b877-0257fed27111"). InnerVolumeSpecName "kube-api-access-wkcxb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 09:37:45 crc kubenswrapper[4795]: I1129 09:37:45.436732 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkcxb\" (UniqueName: \"kubernetes.io/projected/86f7a0a2-336d-4525-b877-0257fed27111-kube-api-access-wkcxb\") on node \"crc\" DevicePath \"\"" Nov 29 09:37:45 crc kubenswrapper[4795]: I1129 09:37:45.437108 4795 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/86f7a0a2-336d-4525-b877-0257fed27111-host\") on node \"crc\" DevicePath \"\"" Nov 29 09:37:46 crc kubenswrapper[4795]: I1129 09:37:46.117618 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cpgxf/crc-debug-n22dv" event={"ID":"86f7a0a2-336d-4525-b877-0257fed27111","Type":"ContainerDied","Data":"80317178d18194bfda19efae247fcb66905a39e91f0dce1c193d546143a27fac"} Nov 29 09:37:46 crc kubenswrapper[4795]: I1129 09:37:46.117657 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80317178d18194bfda19efae247fcb66905a39e91f0dce1c193d546143a27fac" Nov 29 09:37:46 crc kubenswrapper[4795]: I1129 09:37:46.117674 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cpgxf/crc-debug-n22dv" Nov 29 09:37:46 crc kubenswrapper[4795]: I1129 09:37:46.126895 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cpgxf/crc-debug-n22dv"] Nov 29 09:37:46 crc kubenswrapper[4795]: I1129 09:37:46.138107 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cpgxf/crc-debug-n22dv"] Nov 29 09:37:46 crc kubenswrapper[4795]: I1129 09:37:46.288923 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86f7a0a2-336d-4525-b877-0257fed27111" path="/var/lib/kubelet/pods/86f7a0a2-336d-4525-b877-0257fed27111/volumes" Nov 29 09:37:47 crc kubenswrapper[4795]: I1129 09:37:47.560321 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cpgxf/crc-debug-pnsm6"] Nov 29 09:37:47 crc kubenswrapper[4795]: E1129 09:37:47.561306 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86f7a0a2-336d-4525-b877-0257fed27111" containerName="container-00" Nov 29 09:37:47 crc kubenswrapper[4795]: I1129 09:37:47.561320 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="86f7a0a2-336d-4525-b877-0257fed27111" containerName="container-00" Nov 29 09:37:47 crc kubenswrapper[4795]: I1129 09:37:47.561558 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="86f7a0a2-336d-4525-b877-0257fed27111" containerName="container-00" Nov 29 09:37:47 crc kubenswrapper[4795]: I1129 09:37:47.562414 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cpgxf/crc-debug-pnsm6" Nov 29 09:37:47 crc kubenswrapper[4795]: I1129 09:37:47.564665 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-cpgxf"/"default-dockercfg-4gtp6" Nov 29 09:37:47 crc kubenswrapper[4795]: I1129 09:37:47.681468 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgr5g\" (UniqueName: \"kubernetes.io/projected/8c3b78b5-7ffd-40a4-a1f8-b2c2d672245e-kube-api-access-sgr5g\") pod \"crc-debug-pnsm6\" (UID: \"8c3b78b5-7ffd-40a4-a1f8-b2c2d672245e\") " pod="openshift-must-gather-cpgxf/crc-debug-pnsm6" Nov 29 09:37:47 crc kubenswrapper[4795]: I1129 09:37:47.681810 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8c3b78b5-7ffd-40a4-a1f8-b2c2d672245e-host\") pod \"crc-debug-pnsm6\" (UID: \"8c3b78b5-7ffd-40a4-a1f8-b2c2d672245e\") " pod="openshift-must-gather-cpgxf/crc-debug-pnsm6" Nov 29 09:37:47 crc kubenswrapper[4795]: I1129 09:37:47.783986 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgr5g\" (UniqueName: \"kubernetes.io/projected/8c3b78b5-7ffd-40a4-a1f8-b2c2d672245e-kube-api-access-sgr5g\") pod \"crc-debug-pnsm6\" (UID: \"8c3b78b5-7ffd-40a4-a1f8-b2c2d672245e\") " pod="openshift-must-gather-cpgxf/crc-debug-pnsm6" Nov 29 09:37:47 crc kubenswrapper[4795]: I1129 09:37:47.784031 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8c3b78b5-7ffd-40a4-a1f8-b2c2d672245e-host\") pod \"crc-debug-pnsm6\" (UID: \"8c3b78b5-7ffd-40a4-a1f8-b2c2d672245e\") " pod="openshift-must-gather-cpgxf/crc-debug-pnsm6" Nov 29 09:37:47 crc kubenswrapper[4795]: I1129 09:37:47.784257 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/8c3b78b5-7ffd-40a4-a1f8-b2c2d672245e-host\") pod \"crc-debug-pnsm6\" (UID: \"8c3b78b5-7ffd-40a4-a1f8-b2c2d672245e\") " pod="openshift-must-gather-cpgxf/crc-debug-pnsm6" Nov 29 09:37:47 crc kubenswrapper[4795]: I1129 09:37:47.801825 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgr5g\" (UniqueName: \"kubernetes.io/projected/8c3b78b5-7ffd-40a4-a1f8-b2c2d672245e-kube-api-access-sgr5g\") pod \"crc-debug-pnsm6\" (UID: \"8c3b78b5-7ffd-40a4-a1f8-b2c2d672245e\") " pod="openshift-must-gather-cpgxf/crc-debug-pnsm6" Nov 29 09:37:47 crc kubenswrapper[4795]: I1129 09:37:47.880547 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cpgxf/crc-debug-pnsm6" Nov 29 09:37:47 crc kubenswrapper[4795]: W1129 09:37:47.946494 4795 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c3b78b5_7ffd_40a4_a1f8_b2c2d672245e.slice/crio-41667dc82e6692a49586f1980b52df8b153453660e31117dbdfb64cf2f3463b6 WatchSource:0}: Error finding container 41667dc82e6692a49586f1980b52df8b153453660e31117dbdfb64cf2f3463b6: Status 404 returned error can't find the container with id 41667dc82e6692a49586f1980b52df8b153453660e31117dbdfb64cf2f3463b6 Nov 29 09:37:48 crc kubenswrapper[4795]: I1129 09:37:48.142936 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cpgxf/crc-debug-pnsm6" event={"ID":"8c3b78b5-7ffd-40a4-a1f8-b2c2d672245e","Type":"ContainerStarted","Data":"41667dc82e6692a49586f1980b52df8b153453660e31117dbdfb64cf2f3463b6"} Nov 29 09:37:48 crc kubenswrapper[4795]: I1129 09:37:48.275852 4795 scope.go:117] "RemoveContainer" containerID="a0961216ed7f33ef63b3697986049c3d884283e351de87a69a04219f3b1944a4" Nov 29 09:37:49 crc kubenswrapper[4795]: I1129 09:37:49.157851 4795 generic.go:334] "Generic (PLEG): container finished" podID="8c3b78b5-7ffd-40a4-a1f8-b2c2d672245e" 
containerID="a6800522b944f906e8821f45e888c5b129a5fa27c312ff873021534d4ae34d75" exitCode=0 Nov 29 09:37:49 crc kubenswrapper[4795]: I1129 09:37:49.158519 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cpgxf/crc-debug-pnsm6" event={"ID":"8c3b78b5-7ffd-40a4-a1f8-b2c2d672245e","Type":"ContainerDied","Data":"a6800522b944f906e8821f45e888c5b129a5fa27c312ff873021534d4ae34d75"} Nov 29 09:37:49 crc kubenswrapper[4795]: I1129 09:37:49.165136 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerStarted","Data":"216a6fd5200a9fb496152b641226ae7296ea85812450cb9b81cd8b2ac3b4a1d5"} Nov 29 09:37:49 crc kubenswrapper[4795]: I1129 09:37:49.253663 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cpgxf/crc-debug-pnsm6"] Nov 29 09:37:49 crc kubenswrapper[4795]: I1129 09:37:49.417071 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cpgxf/crc-debug-pnsm6"] Nov 29 09:37:50 crc kubenswrapper[4795]: I1129 09:37:50.294532 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cpgxf/crc-debug-pnsm6" Nov 29 09:37:50 crc kubenswrapper[4795]: I1129 09:37:50.455783 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8c3b78b5-7ffd-40a4-a1f8-b2c2d672245e-host\") pod \"8c3b78b5-7ffd-40a4-a1f8-b2c2d672245e\" (UID: \"8c3b78b5-7ffd-40a4-a1f8-b2c2d672245e\") " Nov 29 09:37:50 crc kubenswrapper[4795]: I1129 09:37:50.455851 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgr5g\" (UniqueName: \"kubernetes.io/projected/8c3b78b5-7ffd-40a4-a1f8-b2c2d672245e-kube-api-access-sgr5g\") pod \"8c3b78b5-7ffd-40a4-a1f8-b2c2d672245e\" (UID: \"8c3b78b5-7ffd-40a4-a1f8-b2c2d672245e\") " Nov 29 09:37:50 crc kubenswrapper[4795]: I1129 09:37:50.455953 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c3b78b5-7ffd-40a4-a1f8-b2c2d672245e-host" (OuterVolumeSpecName: "host") pod "8c3b78b5-7ffd-40a4-a1f8-b2c2d672245e" (UID: "8c3b78b5-7ffd-40a4-a1f8-b2c2d672245e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 09:37:50 crc kubenswrapper[4795]: I1129 09:37:50.456473 4795 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8c3b78b5-7ffd-40a4-a1f8-b2c2d672245e-host\") on node \"crc\" DevicePath \"\"" Nov 29 09:37:50 crc kubenswrapper[4795]: I1129 09:37:50.460964 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c3b78b5-7ffd-40a4-a1f8-b2c2d672245e-kube-api-access-sgr5g" (OuterVolumeSpecName: "kube-api-access-sgr5g") pod "8c3b78b5-7ffd-40a4-a1f8-b2c2d672245e" (UID: "8c3b78b5-7ffd-40a4-a1f8-b2c2d672245e"). InnerVolumeSpecName "kube-api-access-sgr5g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 09:37:50 crc kubenswrapper[4795]: I1129 09:37:50.558914 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgr5g\" (UniqueName: \"kubernetes.io/projected/8c3b78b5-7ffd-40a4-a1f8-b2c2d672245e-kube-api-access-sgr5g\") on node \"crc\" DevicePath \"\"" Nov 29 09:37:51 crc kubenswrapper[4795]: I1129 09:37:51.185735 4795 scope.go:117] "RemoveContainer" containerID="a6800522b944f906e8821f45e888c5b129a5fa27c312ff873021534d4ae34d75" Nov 29 09:37:51 crc kubenswrapper[4795]: I1129 09:37:51.185792 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cpgxf/crc-debug-pnsm6" Nov 29 09:37:52 crc kubenswrapper[4795]: I1129 09:37:52.292899 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c3b78b5-7ffd-40a4-a1f8-b2c2d672245e" path="/var/lib/kubelet/pods/8c3b78b5-7ffd-40a4-a1f8-b2c2d672245e/volumes" Nov 29 09:38:06 crc kubenswrapper[4795]: I1129 09:38:06.290131 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fwgs5"] Nov 29 09:38:06 crc kubenswrapper[4795]: E1129 09:38:06.291140 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c3b78b5-7ffd-40a4-a1f8-b2c2d672245e" containerName="container-00" Nov 29 09:38:06 crc kubenswrapper[4795]: I1129 09:38:06.291157 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c3b78b5-7ffd-40a4-a1f8-b2c2d672245e" containerName="container-00" Nov 29 09:38:06 crc kubenswrapper[4795]: I1129 09:38:06.291432 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c3b78b5-7ffd-40a4-a1f8-b2c2d672245e" containerName="container-00" Nov 29 09:38:06 crc kubenswrapper[4795]: I1129 09:38:06.293820 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fwgs5" Nov 29 09:38:06 crc kubenswrapper[4795]: I1129 09:38:06.295768 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fwgs5"] Nov 29 09:38:06 crc kubenswrapper[4795]: I1129 09:38:06.349783 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/497e8281-9e64-4922-a7c1-6e20a0fcf8bf-catalog-content\") pod \"redhat-operators-fwgs5\" (UID: \"497e8281-9e64-4922-a7c1-6e20a0fcf8bf\") " pod="openshift-marketplace/redhat-operators-fwgs5" Nov 29 09:38:06 crc kubenswrapper[4795]: I1129 09:38:06.349850 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx52q\" (UniqueName: \"kubernetes.io/projected/497e8281-9e64-4922-a7c1-6e20a0fcf8bf-kube-api-access-bx52q\") pod \"redhat-operators-fwgs5\" (UID: \"497e8281-9e64-4922-a7c1-6e20a0fcf8bf\") " pod="openshift-marketplace/redhat-operators-fwgs5" Nov 29 09:38:06 crc kubenswrapper[4795]: I1129 09:38:06.349880 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/497e8281-9e64-4922-a7c1-6e20a0fcf8bf-utilities\") pod \"redhat-operators-fwgs5\" (UID: \"497e8281-9e64-4922-a7c1-6e20a0fcf8bf\") " pod="openshift-marketplace/redhat-operators-fwgs5" Nov 29 09:38:06 crc kubenswrapper[4795]: I1129 09:38:06.452695 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/497e8281-9e64-4922-a7c1-6e20a0fcf8bf-catalog-content\") pod \"redhat-operators-fwgs5\" (UID: \"497e8281-9e64-4922-a7c1-6e20a0fcf8bf\") " pod="openshift-marketplace/redhat-operators-fwgs5" Nov 29 09:38:06 crc kubenswrapper[4795]: I1129 09:38:06.452774 4795 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-bx52q\" (UniqueName: \"kubernetes.io/projected/497e8281-9e64-4922-a7c1-6e20a0fcf8bf-kube-api-access-bx52q\") pod \"redhat-operators-fwgs5\" (UID: \"497e8281-9e64-4922-a7c1-6e20a0fcf8bf\") " pod="openshift-marketplace/redhat-operators-fwgs5" Nov 29 09:38:06 crc kubenswrapper[4795]: I1129 09:38:06.452797 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/497e8281-9e64-4922-a7c1-6e20a0fcf8bf-utilities\") pod \"redhat-operators-fwgs5\" (UID: \"497e8281-9e64-4922-a7c1-6e20a0fcf8bf\") " pod="openshift-marketplace/redhat-operators-fwgs5" Nov 29 09:38:06 crc kubenswrapper[4795]: I1129 09:38:06.453304 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/497e8281-9e64-4922-a7c1-6e20a0fcf8bf-catalog-content\") pod \"redhat-operators-fwgs5\" (UID: \"497e8281-9e64-4922-a7c1-6e20a0fcf8bf\") " pod="openshift-marketplace/redhat-operators-fwgs5" Nov 29 09:38:06 crc kubenswrapper[4795]: I1129 09:38:06.453357 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/497e8281-9e64-4922-a7c1-6e20a0fcf8bf-utilities\") pod \"redhat-operators-fwgs5\" (UID: \"497e8281-9e64-4922-a7c1-6e20a0fcf8bf\") " pod="openshift-marketplace/redhat-operators-fwgs5" Nov 29 09:38:06 crc kubenswrapper[4795]: I1129 09:38:06.478503 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx52q\" (UniqueName: \"kubernetes.io/projected/497e8281-9e64-4922-a7c1-6e20a0fcf8bf-kube-api-access-bx52q\") pod \"redhat-operators-fwgs5\" (UID: \"497e8281-9e64-4922-a7c1-6e20a0fcf8bf\") " pod="openshift-marketplace/redhat-operators-fwgs5" Nov 29 09:38:06 crc kubenswrapper[4795]: I1129 09:38:06.629214 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fwgs5" Nov 29 09:38:07 crc kubenswrapper[4795]: I1129 09:38:07.277271 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fwgs5"] Nov 29 09:38:07 crc kubenswrapper[4795]: I1129 09:38:07.384046 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fwgs5" event={"ID":"497e8281-9e64-4922-a7c1-6e20a0fcf8bf","Type":"ContainerStarted","Data":"ddfed506df8d1ae83d17a06aae22d1bff4ca5f0e46c0638e3c27a4423487127e"} Nov 29 09:38:08 crc kubenswrapper[4795]: I1129 09:38:08.405734 4795 generic.go:334] "Generic (PLEG): container finished" podID="497e8281-9e64-4922-a7c1-6e20a0fcf8bf" containerID="67b434e3d39dfdb1eac5ca260d7b47f920e0357c3c56c1336913aa1bd35bca23" exitCode=0 Nov 29 09:38:08 crc kubenswrapper[4795]: I1129 09:38:08.405823 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fwgs5" event={"ID":"497e8281-9e64-4922-a7c1-6e20a0fcf8bf","Type":"ContainerDied","Data":"67b434e3d39dfdb1eac5ca260d7b47f920e0357c3c56c1336913aa1bd35bca23"} Nov 29 09:38:09 crc kubenswrapper[4795]: I1129 09:38:09.431351 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fwgs5" event={"ID":"497e8281-9e64-4922-a7c1-6e20a0fcf8bf","Type":"ContainerStarted","Data":"f4c236af9ccb9371df28eb9f97169e8c9c5037758a02312667ff30c4dce3303e"} Nov 29 09:38:13 crc kubenswrapper[4795]: I1129 09:38:13.500309 4795 generic.go:334] "Generic (PLEG): container finished" podID="497e8281-9e64-4922-a7c1-6e20a0fcf8bf" containerID="f4c236af9ccb9371df28eb9f97169e8c9c5037758a02312667ff30c4dce3303e" exitCode=0 Nov 29 09:38:13 crc kubenswrapper[4795]: I1129 09:38:13.500367 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fwgs5" 
event={"ID":"497e8281-9e64-4922-a7c1-6e20a0fcf8bf","Type":"ContainerDied","Data":"f4c236af9ccb9371df28eb9f97169e8c9c5037758a02312667ff30c4dce3303e"} Nov 29 09:38:14 crc kubenswrapper[4795]: I1129 09:38:14.513243 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fwgs5" event={"ID":"497e8281-9e64-4922-a7c1-6e20a0fcf8bf","Type":"ContainerStarted","Data":"a1693775c823fbd9f336b0c9293f9c2f04f7ef4fac439147d513acfbd4e14c8c"} Nov 29 09:38:14 crc kubenswrapper[4795]: I1129 09:38:14.544079 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fwgs5" podStartSLOduration=2.939667103 podStartE2EDuration="8.544062409s" podCreationTimestamp="2025-11-29 09:38:06 +0000 UTC" firstStartedPulling="2025-11-29 09:38:08.407956374 +0000 UTC m=+7134.383532164" lastFinishedPulling="2025-11-29 09:38:14.01235164 +0000 UTC m=+7139.987927470" observedRunningTime="2025-11-29 09:38:14.536476074 +0000 UTC m=+7140.512051864" watchObservedRunningTime="2025-11-29 09:38:14.544062409 +0000 UTC m=+7140.519638199" Nov 29 09:38:16 crc kubenswrapper[4795]: I1129 09:38:16.630745 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fwgs5" Nov 29 09:38:16 crc kubenswrapper[4795]: I1129 09:38:16.631176 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fwgs5" Nov 29 09:38:17 crc kubenswrapper[4795]: I1129 09:38:17.686156 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fwgs5" podUID="497e8281-9e64-4922-a7c1-6e20a0fcf8bf" containerName="registry-server" probeResult="failure" output=< Nov 29 09:38:17 crc kubenswrapper[4795]: timeout: failed to connect service ":50051" within 1s Nov 29 09:38:17 crc kubenswrapper[4795]: > Nov 29 09:38:25 crc kubenswrapper[4795]: I1129 09:38:25.714690 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_aodh-0_5e8728db-5fe3-46f5-a628-2b3a0f708438/aodh-api/0.log" Nov 29 09:38:25 crc kubenswrapper[4795]: I1129 09:38:25.920271 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_5e8728db-5fe3-46f5-a628-2b3a0f708438/aodh-notifier/0.log" Nov 29 09:38:25 crc kubenswrapper[4795]: I1129 09:38:25.926475 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_5e8728db-5fe3-46f5-a628-2b3a0f708438/aodh-evaluator/0.log" Nov 29 09:38:25 crc kubenswrapper[4795]: I1129 09:38:25.972947 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_5e8728db-5fe3-46f5-a628-2b3a0f708438/aodh-listener/0.log" Nov 29 09:38:26 crc kubenswrapper[4795]: I1129 09:38:26.157156 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-dc94996-zmk9m_6613a7a2-0f90-4a83-80ea-18e316d6338d/barbican-api-log/0.log" Nov 29 09:38:26 crc kubenswrapper[4795]: I1129 09:38:26.174535 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-dc94996-zmk9m_6613a7a2-0f90-4a83-80ea-18e316d6338d/barbican-api/0.log" Nov 29 09:38:26 crc kubenswrapper[4795]: I1129 09:38:26.201464 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-65899db58b-mz594_0c64caf8-e57d-495f-985c-844edea0d146/barbican-keystone-listener/0.log" Nov 29 09:38:26 crc kubenswrapper[4795]: I1129 09:38:26.388122 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6875d4f667-x5hjc_1bca86ff-c24c-4d08-b7ed-be2433fe9735/barbican-worker/0.log" Nov 29 09:38:26 crc kubenswrapper[4795]: I1129 09:38:26.475976 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6875d4f667-x5hjc_1bca86ff-c24c-4d08-b7ed-be2433fe9735/barbican-worker-log/0.log" Nov 29 09:38:26 crc kubenswrapper[4795]: I1129 09:38:26.566300 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-65899db58b-mz594_0c64caf8-e57d-495f-985c-844edea0d146/barbican-keystone-listener-log/0.log" Nov 29 09:38:26 crc kubenswrapper[4795]: I1129 09:38:26.863985 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-glkt9_5db55adf-c067-44de-ad20-4b8a138e2576/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 09:38:26 crc kubenswrapper[4795]: I1129 09:38:26.970280 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1187372d-c24e-4ca1-b985-64ba7bb4df2b/ceilometer-central-agent/0.log" Nov 29 09:38:27 crc kubenswrapper[4795]: I1129 09:38:27.035700 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1187372d-c24e-4ca1-b985-64ba7bb4df2b/ceilometer-notification-agent/0.log" Nov 29 09:38:27 crc kubenswrapper[4795]: I1129 09:38:27.039202 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1187372d-c24e-4ca1-b985-64ba7bb4df2b/proxy-httpd/0.log" Nov 29 09:38:27 crc kubenswrapper[4795]: I1129 09:38:27.150995 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1187372d-c24e-4ca1-b985-64ba7bb4df2b/sg-core/0.log" Nov 29 09:38:27 crc kubenswrapper[4795]: I1129 09:38:27.239189 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_477216f2-bd7e-4768-9a1f-53915135fbc3/cinder-api-log/0.log" Nov 29 09:38:27 crc kubenswrapper[4795]: I1129 09:38:27.319539 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_477216f2-bd7e-4768-9a1f-53915135fbc3/cinder-api/0.log" Nov 29 09:38:27 crc kubenswrapper[4795]: I1129 09:38:27.465829 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_945f619e-60af-4c36-8ec9-a98d54c15276/cinder-scheduler/0.log" Nov 29 09:38:27 crc kubenswrapper[4795]: I1129 09:38:27.518473 4795 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_945f619e-60af-4c36-8ec9-a98d54c15276/probe/0.log" Nov 29 09:38:27 crc kubenswrapper[4795]: I1129 09:38:27.621060 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-5c5jq_ccffb059-764c-49a4-afd1-356ba3189628/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 09:38:27 crc kubenswrapper[4795]: I1129 09:38:27.723263 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-9p64t_c19e0492-7b5e-4a23-a1aa-f09ea195448d/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 09:38:27 crc kubenswrapper[4795]: I1129 09:38:27.754321 4795 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fwgs5" podUID="497e8281-9e64-4922-a7c1-6e20a0fcf8bf" containerName="registry-server" probeResult="failure" output=< Nov 29 09:38:27 crc kubenswrapper[4795]: timeout: failed to connect service ":50051" within 1s Nov 29 09:38:27 crc kubenswrapper[4795]: > Nov 29 09:38:27 crc kubenswrapper[4795]: I1129 09:38:27.883041 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-wq44q_2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1/init/0.log" Nov 29 09:38:28 crc kubenswrapper[4795]: I1129 09:38:28.065391 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-wq44q_2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1/init/0.log" Nov 29 09:38:28 crc kubenswrapper[4795]: I1129 09:38:28.139476 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-lvqcr_ce2a356f-1605-4fb0-ae3c-a40094296d8f/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 09:38:28 crc kubenswrapper[4795]: I1129 09:38:28.143828 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-wq44q_2ad1010b-e9f3-4cbf-aefe-4ad2bed529e1/dnsmasq-dns/0.log" Nov 29 09:38:28 crc kubenswrapper[4795]: I1129 09:38:28.355279 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_3625e087-5469-4cc2-b580-13d7201ff475/glance-httpd/0.log" Nov 29 09:38:28 crc kubenswrapper[4795]: I1129 09:38:28.422094 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_3625e087-5469-4cc2-b580-13d7201ff475/glance-log/0.log" Nov 29 09:38:28 crc kubenswrapper[4795]: I1129 09:38:28.620423 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_6199696f-3f60-4893-8029-6e62879319f9/glance-log/0.log" Nov 29 09:38:28 crc kubenswrapper[4795]: I1129 09:38:28.645611 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_6199696f-3f60-4893-8029-6e62879319f9/glance-httpd/0.log" Nov 29 09:38:29 crc kubenswrapper[4795]: I1129 09:38:29.302788 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-9d7d54f9b-6mtps_2d909210-4168-4e0a-967e-dfde70b1762b/heat-engine/0.log" Nov 29 09:38:29 crc kubenswrapper[4795]: I1129 09:38:29.543674 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-ckcwr_96e2c266-8570-40c8-adbc-d4939bde4ad9/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 09:38:29 crc kubenswrapper[4795]: I1129 09:38:29.696115 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-tppn7_5e0dffd4-e9e3-434d-b842-5b5849bf2fa9/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 09:38:29 crc kubenswrapper[4795]: I1129 09:38:29.802902 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_heat-api-94c46bb5b-pj8dm_59489eb7-639e-4155-b88d-45aee638fbaa/heat-api/0.log" Nov 29 09:38:29 crc kubenswrapper[4795]: I1129 09:38:29.857717 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-6f66c97b48-7q9bs_d6b4f039-61d1-4b2c-b912-69c1bde3e4a6/heat-cfnapi/0.log" Nov 29 09:38:30 crc kubenswrapper[4795]: I1129 09:38:30.046499 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29406781-9xlk7_3eaf8e63-a9e1-47a4-b093-d1f65a80c4db/keystone-cron/0.log" Nov 29 09:38:30 crc kubenswrapper[4795]: I1129 09:38:30.076162 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_259b25c2-59e5-4ee8-bedc-23b7423bfae6/kube-state-metrics/0.log" Nov 29 09:38:30 crc kubenswrapper[4795]: I1129 09:38:30.369485 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-mdl5h_8223751f-5de9-4d5c-a9b2-200cf9c164ee/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 09:38:30 crc kubenswrapper[4795]: I1129 09:38:30.389830 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-g2krp_4700d212-5bd7-4b67-a36a-ae486608b8a8/logging-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 09:38:30 crc kubenswrapper[4795]: I1129 09:38:30.411478 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7db455fcf4-bs6l9_f0a7d947-7e48-449a-a691-63de87afc9c4/keystone-api/0.log" Nov 29 09:38:30 crc kubenswrapper[4795]: I1129 09:38:30.636101 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_1f634ca3-f70b-49c4-9cc4-54fc9a0f4cde/mysqld-exporter/0.log" Nov 29 09:38:31 crc kubenswrapper[4795]: I1129 09:38:31.101621 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-bcc9b4b57-btgmc_25dbe68e-5c4f-4d79-afb2-a0ac640aa889/neutron-httpd/0.log" Nov 29 09:38:31 crc 
kubenswrapper[4795]: I1129 09:38:31.136833 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2k97_b55cbe5f-b90e-47f3-a446-82d1577ac07d/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 09:38:31 crc kubenswrapper[4795]: I1129 09:38:31.179898 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-bcc9b4b57-btgmc_25dbe68e-5c4f-4d79-afb2-a0ac640aa889/neutron-api/0.log" Nov 29 09:38:31 crc kubenswrapper[4795]: I1129 09:38:31.862067 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_e29df3aa-49f9-4776-8b5d-6448d3032696/nova-cell0-conductor-conductor/0.log" Nov 29 09:38:32 crc kubenswrapper[4795]: I1129 09:38:32.026113 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f574bcc1-8e96-4c98-a600-1fcd846864d9/nova-api-log/0.log" Nov 29 09:38:32 crc kubenswrapper[4795]: I1129 09:38:32.060772 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_a16ef858-2118-49ef-be27-4389ab4c34dc/nova-cell1-conductor-conductor/0.log" Nov 29 09:38:32 crc kubenswrapper[4795]: I1129 09:38:32.351158 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-ss9hv_51f82fb3-fb43-4802-9ce6-46930382229b/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 09:38:32 crc kubenswrapper[4795]: I1129 09:38:32.510387 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_e9e160dc-75ec-49d4-8145-76df59c61dda/nova-cell1-novncproxy-novncproxy/0.log" Nov 29 09:38:32 crc kubenswrapper[4795]: I1129 09:38:32.618025 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_c23a0993-0a7b-4452-bdcc-a199abf1de88/nova-metadata-log/0.log" Nov 29 09:38:32 crc kubenswrapper[4795]: I1129 09:38:32.667943 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-api-0_f574bcc1-8e96-4c98-a600-1fcd846864d9/nova-api-api/0.log" Nov 29 09:38:33 crc kubenswrapper[4795]: I1129 09:38:33.421472 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a9c19857-e09b-4c26-bf5a-a64655eaa024/mysql-bootstrap/0.log" Nov 29 09:38:33 crc kubenswrapper[4795]: I1129 09:38:33.647181 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a9c19857-e09b-4c26-bf5a-a64655eaa024/mysql-bootstrap/0.log" Nov 29 09:38:33 crc kubenswrapper[4795]: I1129 09:38:33.661606 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a9c19857-e09b-4c26-bf5a-a64655eaa024/galera/0.log" Nov 29 09:38:33 crc kubenswrapper[4795]: I1129 09:38:33.686257 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_7e58a5e7-cc35-47ee-af21-e80500efd523/nova-scheduler-scheduler/0.log" Nov 29 09:38:33 crc kubenswrapper[4795]: I1129 09:38:33.915438 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a2fd879b-6f46-437e-acf0-c60e879af239/mysql-bootstrap/0.log" Nov 29 09:38:34 crc kubenswrapper[4795]: I1129 09:38:34.109427 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a2fd879b-6f46-437e-acf0-c60e879af239/galera/0.log" Nov 29 09:38:34 crc kubenswrapper[4795]: I1129 09:38:34.145532 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a2fd879b-6f46-437e-acf0-c60e879af239/mysql-bootstrap/0.log" Nov 29 09:38:34 crc kubenswrapper[4795]: I1129 09:38:34.298170 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_de5e89b3-d4a1-4ef0-bf3a-814cb09d5ade/openstackclient/0.log" Nov 29 09:38:34 crc kubenswrapper[4795]: I1129 09:38:34.417100 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-metrics-j2gnj_fb13c276-73ed-4b9b-90fd-58d6ae6e4169/openstack-network-exporter/0.log" Nov 29 09:38:34 crc kubenswrapper[4795]: I1129 09:38:34.643283 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-fjltq_e351bdf6-6e04-4bbd-bfae-e28c7bf2179f/ovsdb-server-init/0.log" Nov 29 09:38:34 crc kubenswrapper[4795]: I1129 09:38:34.813571 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-fjltq_e351bdf6-6e04-4bbd-bfae-e28c7bf2179f/ovsdb-server-init/0.log" Nov 29 09:38:34 crc kubenswrapper[4795]: I1129 09:38:34.816406 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-fjltq_e351bdf6-6e04-4bbd-bfae-e28c7bf2179f/ovs-vswitchd/0.log" Nov 29 09:38:34 crc kubenswrapper[4795]: I1129 09:38:34.866261 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-fjltq_e351bdf6-6e04-4bbd-bfae-e28c7bf2179f/ovsdb-server/0.log" Nov 29 09:38:34 crc kubenswrapper[4795]: I1129 09:38:34.991656 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-r5w67_77e980be-cb41-448f-96d7-0c99fec4d400/ovn-controller/0.log" Nov 29 09:38:35 crc kubenswrapper[4795]: I1129 09:38:35.206408 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-mrqcm_1a1dfc06-678d-4418-a57f-7a9a2ba2c441/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 09:38:35 crc kubenswrapper[4795]: I1129 09:38:35.337896 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3f2bc988-6251-4a6d-95ab-8610dc2a2650/openstack-network-exporter/0.log" Nov 29 09:38:35 crc kubenswrapper[4795]: I1129 09:38:35.423469 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3f2bc988-6251-4a6d-95ab-8610dc2a2650/ovn-northd/0.log" Nov 29 09:38:35 crc kubenswrapper[4795]: I1129 09:38:35.623661 4795 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_acd08e2d-0e1b-473c-ae31-d63d742d2061/ovsdbserver-nb/0.log" Nov 29 09:38:35 crc kubenswrapper[4795]: I1129 09:38:35.636369 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_acd08e2d-0e1b-473c-ae31-d63d742d2061/openstack-network-exporter/0.log" Nov 29 09:38:35 crc kubenswrapper[4795]: I1129 09:38:35.773712 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_75918c04-f960-4321-8894-582921ced50d/openstack-network-exporter/0.log" Nov 29 09:38:35 crc kubenswrapper[4795]: I1129 09:38:35.856636 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_75918c04-f960-4321-8894-582921ced50d/ovsdbserver-sb/0.log" Nov 29 09:38:35 crc kubenswrapper[4795]: I1129 09:38:35.887470 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_c23a0993-0a7b-4452-bdcc-a199abf1de88/nova-metadata-metadata/0.log" Nov 29 09:38:36 crc kubenswrapper[4795]: I1129 09:38:36.257409 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-8686d8994d-2mhmq_e3c503b5-e625-4ba5-af4d-9ff304b3f371/placement-api/0.log" Nov 29 09:38:36 crc kubenswrapper[4795]: I1129 09:38:36.278277 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_9301745a-4dd1-469a-a37f-465f65a063e4/init-config-reloader/0.log" Nov 29 09:38:36 crc kubenswrapper[4795]: I1129 09:38:36.289042 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-8686d8994d-2mhmq_e3c503b5-e625-4ba5-af4d-9ff304b3f371/placement-log/0.log" Nov 29 09:38:36 crc kubenswrapper[4795]: I1129 09:38:36.567957 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_9301745a-4dd1-469a-a37f-465f65a063e4/init-config-reloader/0.log" Nov 29 09:38:36 crc kubenswrapper[4795]: I1129 09:38:36.570264 4795 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_9301745a-4dd1-469a-a37f-465f65a063e4/config-reloader/0.log" Nov 29 09:38:36 crc kubenswrapper[4795]: I1129 09:38:36.586809 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_9301745a-4dd1-469a-a37f-465f65a063e4/thanos-sidecar/0.log" Nov 29 09:38:36 crc kubenswrapper[4795]: I1129 09:38:36.606436 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_9301745a-4dd1-469a-a37f-465f65a063e4/prometheus/0.log" Nov 29 09:38:36 crc kubenswrapper[4795]: I1129 09:38:36.704914 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fwgs5" Nov 29 09:38:36 crc kubenswrapper[4795]: I1129 09:38:36.795696 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fwgs5" Nov 29 09:38:36 crc kubenswrapper[4795]: I1129 09:38:36.884862 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_85a82139-8137-40d2-a6e9-b384592f9919/setup-container/0.log" Nov 29 09:38:36 crc kubenswrapper[4795]: I1129 09:38:36.999124 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_85a82139-8137-40d2-a6e9-b384592f9919/setup-container/0.log" Nov 29 09:38:37 crc kubenswrapper[4795]: I1129 09:38:37.051071 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_85a82139-8137-40d2-a6e9-b384592f9919/rabbitmq/0.log" Nov 29 09:38:37 crc kubenswrapper[4795]: I1129 09:38:37.107447 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d1c7dfa2-1b2a-438d-9378-fd998f873999/setup-container/0.log" Nov 29 09:38:37 crc kubenswrapper[4795]: I1129 09:38:37.309287 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-0_d1c7dfa2-1b2a-438d-9378-fd998f873999/setup-container/0.log" Nov 29 09:38:37 crc kubenswrapper[4795]: I1129 09:38:37.340458 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d1c7dfa2-1b2a-438d-9378-fd998f873999/rabbitmq/0.log" Nov 29 09:38:37 crc kubenswrapper[4795]: I1129 09:38:37.374429 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-mcx48_7bdb8420-3c48-48f2-977d-f163da761f04/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 09:38:37 crc kubenswrapper[4795]: I1129 09:38:37.475353 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fwgs5"] Nov 29 09:38:37 crc kubenswrapper[4795]: I1129 09:38:37.582958 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-vsfsq_52ed8cc8-9050-49af-ad5b-b48bc27eeb12/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 09:38:37 crc kubenswrapper[4795]: I1129 09:38:37.663588 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-r5c5p_fd6f9117-b1f3-4533-b3f6-3b614a790521/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 09:38:37 crc kubenswrapper[4795]: I1129 09:38:37.760148 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fwgs5" podUID="497e8281-9e64-4922-a7c1-6e20a0fcf8bf" containerName="registry-server" containerID="cri-o://a1693775c823fbd9f336b0c9293f9c2f04f7ef4fac439147d513acfbd4e14c8c" gracePeriod=2 Nov 29 09:38:37 crc kubenswrapper[4795]: I1129 09:38:37.972527 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-9nkc7_2e79257d-05f2-41d6-97cb-0872075ec6bf/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 09:38:38 crc 
kubenswrapper[4795]: I1129 09:38:38.006005 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-mmx5g_a8a9108e-9590-423a-819e-9b009a41e91a/ssh-known-hosts-edpm-deployment/0.log" Nov 29 09:38:38 crc kubenswrapper[4795]: I1129 09:38:38.324732 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7ff5f779bc-nzx8l_dd1d8c65-0785-455c-9991-e32eea8a9b83/proxy-server/0.log" Nov 29 09:38:38 crc kubenswrapper[4795]: I1129 09:38:38.440643 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fwgs5" Nov 29 09:38:38 crc kubenswrapper[4795]: I1129 09:38:38.493662 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-ww42q_60a004a5-f226-49aa-b9e7-12a384ddece6/swift-ring-rebalance/0.log" Nov 29 09:38:38 crc kubenswrapper[4795]: I1129 09:38:38.588429 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/497e8281-9e64-4922-a7c1-6e20a0fcf8bf-utilities\") pod \"497e8281-9e64-4922-a7c1-6e20a0fcf8bf\" (UID: \"497e8281-9e64-4922-a7c1-6e20a0fcf8bf\") " Nov 29 09:38:38 crc kubenswrapper[4795]: I1129 09:38:38.588529 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/497e8281-9e64-4922-a7c1-6e20a0fcf8bf-catalog-content\") pod \"497e8281-9e64-4922-a7c1-6e20a0fcf8bf\" (UID: \"497e8281-9e64-4922-a7c1-6e20a0fcf8bf\") " Nov 29 09:38:38 crc kubenswrapper[4795]: I1129 09:38:38.588788 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bx52q\" (UniqueName: \"kubernetes.io/projected/497e8281-9e64-4922-a7c1-6e20a0fcf8bf-kube-api-access-bx52q\") pod \"497e8281-9e64-4922-a7c1-6e20a0fcf8bf\" (UID: \"497e8281-9e64-4922-a7c1-6e20a0fcf8bf\") " Nov 29 09:38:38 crc kubenswrapper[4795]: I1129 
09:38:38.590074 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/497e8281-9e64-4922-a7c1-6e20a0fcf8bf-utilities" (OuterVolumeSpecName: "utilities") pod "497e8281-9e64-4922-a7c1-6e20a0fcf8bf" (UID: "497e8281-9e64-4922-a7c1-6e20a0fcf8bf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 09:38:38 crc kubenswrapper[4795]: I1129 09:38:38.593896 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7ff5f779bc-nzx8l_dd1d8c65-0785-455c-9991-e32eea8a9b83/proxy-httpd/0.log" Nov 29 09:38:38 crc kubenswrapper[4795]: I1129 09:38:38.605526 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/497e8281-9e64-4922-a7c1-6e20a0fcf8bf-kube-api-access-bx52q" (OuterVolumeSpecName: "kube-api-access-bx52q") pod "497e8281-9e64-4922-a7c1-6e20a0fcf8bf" (UID: "497e8281-9e64-4922-a7c1-6e20a0fcf8bf"). InnerVolumeSpecName "kube-api-access-bx52q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 09:38:38 crc kubenswrapper[4795]: I1129 09:38:38.624195 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_28437f9f-e92e-46d7-9ffb-fcdda5dea25e/account-auditor/0.log" Nov 29 09:38:38 crc kubenswrapper[4795]: I1129 09:38:38.695015 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/497e8281-9e64-4922-a7c1-6e20a0fcf8bf-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 09:38:38 crc kubenswrapper[4795]: I1129 09:38:38.697669 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bx52q\" (UniqueName: \"kubernetes.io/projected/497e8281-9e64-4922-a7c1-6e20a0fcf8bf-kube-api-access-bx52q\") on node \"crc\" DevicePath \"\"" Nov 29 09:38:38 crc kubenswrapper[4795]: I1129 09:38:38.724817 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/497e8281-9e64-4922-a7c1-6e20a0fcf8bf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "497e8281-9e64-4922-a7c1-6e20a0fcf8bf" (UID: "497e8281-9e64-4922-a7c1-6e20a0fcf8bf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 09:38:38 crc kubenswrapper[4795]: I1129 09:38:38.738176 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_28437f9f-e92e-46d7-9ffb-fcdda5dea25e/account-reaper/0.log" Nov 29 09:38:38 crc kubenswrapper[4795]: I1129 09:38:38.784341 4795 generic.go:334] "Generic (PLEG): container finished" podID="497e8281-9e64-4922-a7c1-6e20a0fcf8bf" containerID="a1693775c823fbd9f336b0c9293f9c2f04f7ef4fac439147d513acfbd4e14c8c" exitCode=0 Nov 29 09:38:38 crc kubenswrapper[4795]: I1129 09:38:38.784383 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fwgs5" event={"ID":"497e8281-9e64-4922-a7c1-6e20a0fcf8bf","Type":"ContainerDied","Data":"a1693775c823fbd9f336b0c9293f9c2f04f7ef4fac439147d513acfbd4e14c8c"} Nov 29 09:38:38 crc kubenswrapper[4795]: I1129 09:38:38.784409 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fwgs5" event={"ID":"497e8281-9e64-4922-a7c1-6e20a0fcf8bf","Type":"ContainerDied","Data":"ddfed506df8d1ae83d17a06aae22d1bff4ca5f0e46c0638e3c27a4423487127e"} Nov 29 09:38:38 crc kubenswrapper[4795]: I1129 09:38:38.784426 4795 scope.go:117] "RemoveContainer" containerID="a1693775c823fbd9f336b0c9293f9c2f04f7ef4fac439147d513acfbd4e14c8c" Nov 29 09:38:38 crc kubenswrapper[4795]: I1129 09:38:38.784571 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fwgs5" Nov 29 09:38:38 crc kubenswrapper[4795]: I1129 09:38:38.800413 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/497e8281-9e64-4922-a7c1-6e20a0fcf8bf-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 09:38:38 crc kubenswrapper[4795]: I1129 09:38:38.811181 4795 scope.go:117] "RemoveContainer" containerID="f4c236af9ccb9371df28eb9f97169e8c9c5037758a02312667ff30c4dce3303e" Nov 29 09:38:38 crc kubenswrapper[4795]: I1129 09:38:38.839863 4795 scope.go:117] "RemoveContainer" containerID="67b434e3d39dfdb1eac5ca260d7b47f920e0357c3c56c1336913aa1bd35bca23" Nov 29 09:38:38 crc kubenswrapper[4795]: I1129 09:38:38.851468 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fwgs5"] Nov 29 09:38:38 crc kubenswrapper[4795]: I1129 09:38:38.867966 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_28437f9f-e92e-46d7-9ffb-fcdda5dea25e/account-server/0.log" Nov 29 09:38:38 crc kubenswrapper[4795]: I1129 09:38:38.870693 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_28437f9f-e92e-46d7-9ffb-fcdda5dea25e/container-auditor/0.log" Nov 29 09:38:38 crc kubenswrapper[4795]: I1129 09:38:38.872460 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_28437f9f-e92e-46d7-9ffb-fcdda5dea25e/account-replicator/0.log" Nov 29 09:38:38 crc kubenswrapper[4795]: I1129 09:38:38.895372 4795 scope.go:117] "RemoveContainer" containerID="a1693775c823fbd9f336b0c9293f9c2f04f7ef4fac439147d513acfbd4e14c8c" Nov 29 09:38:38 crc kubenswrapper[4795]: E1129 09:38:38.896424 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1693775c823fbd9f336b0c9293f9c2f04f7ef4fac439147d513acfbd4e14c8c\": container with ID starting with 
a1693775c823fbd9f336b0c9293f9c2f04f7ef4fac439147d513acfbd4e14c8c not found: ID does not exist" containerID="a1693775c823fbd9f336b0c9293f9c2f04f7ef4fac439147d513acfbd4e14c8c" Nov 29 09:38:38 crc kubenswrapper[4795]: I1129 09:38:38.896484 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1693775c823fbd9f336b0c9293f9c2f04f7ef4fac439147d513acfbd4e14c8c"} err="failed to get container status \"a1693775c823fbd9f336b0c9293f9c2f04f7ef4fac439147d513acfbd4e14c8c\": rpc error: code = NotFound desc = could not find container \"a1693775c823fbd9f336b0c9293f9c2f04f7ef4fac439147d513acfbd4e14c8c\": container with ID starting with a1693775c823fbd9f336b0c9293f9c2f04f7ef4fac439147d513acfbd4e14c8c not found: ID does not exist" Nov 29 09:38:38 crc kubenswrapper[4795]: I1129 09:38:38.896510 4795 scope.go:117] "RemoveContainer" containerID="f4c236af9ccb9371df28eb9f97169e8c9c5037758a02312667ff30c4dce3303e" Nov 29 09:38:38 crc kubenswrapper[4795]: I1129 09:38:38.896528 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fwgs5"] Nov 29 09:38:38 crc kubenswrapper[4795]: E1129 09:38:38.899299 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4c236af9ccb9371df28eb9f97169e8c9c5037758a02312667ff30c4dce3303e\": container with ID starting with f4c236af9ccb9371df28eb9f97169e8c9c5037758a02312667ff30c4dce3303e not found: ID does not exist" containerID="f4c236af9ccb9371df28eb9f97169e8c9c5037758a02312667ff30c4dce3303e" Nov 29 09:38:38 crc kubenswrapper[4795]: I1129 09:38:38.899334 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4c236af9ccb9371df28eb9f97169e8c9c5037758a02312667ff30c4dce3303e"} err="failed to get container status \"f4c236af9ccb9371df28eb9f97169e8c9c5037758a02312667ff30c4dce3303e\": rpc error: code = NotFound desc = could not find container 
\"f4c236af9ccb9371df28eb9f97169e8c9c5037758a02312667ff30c4dce3303e\": container with ID starting with f4c236af9ccb9371df28eb9f97169e8c9c5037758a02312667ff30c4dce3303e not found: ID does not exist" Nov 29 09:38:38 crc kubenswrapper[4795]: I1129 09:38:38.899360 4795 scope.go:117] "RemoveContainer" containerID="67b434e3d39dfdb1eac5ca260d7b47f920e0357c3c56c1336913aa1bd35bca23" Nov 29 09:38:38 crc kubenswrapper[4795]: E1129 09:38:38.904128 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67b434e3d39dfdb1eac5ca260d7b47f920e0357c3c56c1336913aa1bd35bca23\": container with ID starting with 67b434e3d39dfdb1eac5ca260d7b47f920e0357c3c56c1336913aa1bd35bca23 not found: ID does not exist" containerID="67b434e3d39dfdb1eac5ca260d7b47f920e0357c3c56c1336913aa1bd35bca23" Nov 29 09:38:38 crc kubenswrapper[4795]: I1129 09:38:38.904199 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67b434e3d39dfdb1eac5ca260d7b47f920e0357c3c56c1336913aa1bd35bca23"} err="failed to get container status \"67b434e3d39dfdb1eac5ca260d7b47f920e0357c3c56c1336913aa1bd35bca23\": rpc error: code = NotFound desc = could not find container \"67b434e3d39dfdb1eac5ca260d7b47f920e0357c3c56c1336913aa1bd35bca23\": container with ID starting with 67b434e3d39dfdb1eac5ca260d7b47f920e0357c3c56c1336913aa1bd35bca23 not found: ID does not exist" Nov 29 09:38:39 crc kubenswrapper[4795]: I1129 09:38:39.093760 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_28437f9f-e92e-46d7-9ffb-fcdda5dea25e/container-replicator/0.log" Nov 29 09:38:39 crc kubenswrapper[4795]: I1129 09:38:39.102086 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_28437f9f-e92e-46d7-9ffb-fcdda5dea25e/container-server/0.log" Nov 29 09:38:39 crc kubenswrapper[4795]: I1129 09:38:39.140757 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_28437f9f-e92e-46d7-9ffb-fcdda5dea25e/object-auditor/0.log" Nov 29 09:38:39 crc kubenswrapper[4795]: I1129 09:38:39.142103 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_28437f9f-e92e-46d7-9ffb-fcdda5dea25e/container-updater/0.log" Nov 29 09:38:39 crc kubenswrapper[4795]: I1129 09:38:39.401423 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_28437f9f-e92e-46d7-9ffb-fcdda5dea25e/object-server/0.log" Nov 29 09:38:39 crc kubenswrapper[4795]: I1129 09:38:39.401749 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_28437f9f-e92e-46d7-9ffb-fcdda5dea25e/object-updater/0.log" Nov 29 09:38:39 crc kubenswrapper[4795]: I1129 09:38:39.429762 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_28437f9f-e92e-46d7-9ffb-fcdda5dea25e/object-expirer/0.log" Nov 29 09:38:39 crc kubenswrapper[4795]: I1129 09:38:39.501571 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_28437f9f-e92e-46d7-9ffb-fcdda5dea25e/object-replicator/0.log" Nov 29 09:38:39 crc kubenswrapper[4795]: I1129 09:38:39.604735 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_28437f9f-e92e-46d7-9ffb-fcdda5dea25e/rsync/0.log" Nov 29 09:38:39 crc kubenswrapper[4795]: I1129 09:38:39.704829 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_28437f9f-e92e-46d7-9ffb-fcdda5dea25e/swift-recon-cron/0.log" Nov 29 09:38:39 crc kubenswrapper[4795]: I1129 09:38:39.744160 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-z67pl_de36795d-fe29-4964-bcc6-c63bf2eda290/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 09:38:39 crc kubenswrapper[4795]: I1129 09:38:39.934654 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-fjmtk_46efc96a-a270-4709-a9a1-cf8d60484215/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 09:38:40 crc kubenswrapper[4795]: I1129 09:38:40.211213 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_04e8ed70-5207-4d59-8de4-b96ad0270b54/test-operator-logs-container/0.log" Nov 29 09:38:40 crc kubenswrapper[4795]: I1129 09:38:40.300408 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="497e8281-9e64-4922-a7c1-6e20a0fcf8bf" path="/var/lib/kubelet/pods/497e8281-9e64-4922-a7c1-6e20a0fcf8bf/volumes" Nov 29 09:38:40 crc kubenswrapper[4795]: I1129 09:38:40.358404 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-k58jj_b8224f42-d933-4b1a-bab0-8f79fa3a5369/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 09:38:40 crc kubenswrapper[4795]: I1129 09:38:40.930879 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_95552069-4919-43f3-88d5-2c40ff4c0836/tempest-tests-tempest-tests-runner/0.log" Nov 29 09:38:42 crc kubenswrapper[4795]: I1129 09:38:42.118714 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8swqp"] Nov 29 09:38:42 crc kubenswrapper[4795]: E1129 09:38:42.121766 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="497e8281-9e64-4922-a7c1-6e20a0fcf8bf" containerName="extract-utilities" Nov 29 09:38:42 crc kubenswrapper[4795]: I1129 09:38:42.121947 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="497e8281-9e64-4922-a7c1-6e20a0fcf8bf" containerName="extract-utilities" Nov 29 09:38:42 crc kubenswrapper[4795]: E1129 09:38:42.121968 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="497e8281-9e64-4922-a7c1-6e20a0fcf8bf" 
containerName="registry-server" Nov 29 09:38:42 crc kubenswrapper[4795]: I1129 09:38:42.121974 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="497e8281-9e64-4922-a7c1-6e20a0fcf8bf" containerName="registry-server" Nov 29 09:38:42 crc kubenswrapper[4795]: E1129 09:38:42.122008 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="497e8281-9e64-4922-a7c1-6e20a0fcf8bf" containerName="extract-content" Nov 29 09:38:42 crc kubenswrapper[4795]: I1129 09:38:42.122014 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="497e8281-9e64-4922-a7c1-6e20a0fcf8bf" containerName="extract-content" Nov 29 09:38:42 crc kubenswrapper[4795]: I1129 09:38:42.122259 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="497e8281-9e64-4922-a7c1-6e20a0fcf8bf" containerName="registry-server" Nov 29 09:38:42 crc kubenswrapper[4795]: I1129 09:38:42.127078 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8swqp" Nov 29 09:38:42 crc kubenswrapper[4795]: I1129 09:38:42.133380 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8swqp"] Nov 29 09:38:42 crc kubenswrapper[4795]: I1129 09:38:42.301645 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trkk5\" (UniqueName: \"kubernetes.io/projected/cae3ce7b-e0ae-49fb-8d06-839566da3354-kube-api-access-trkk5\") pod \"certified-operators-8swqp\" (UID: \"cae3ce7b-e0ae-49fb-8d06-839566da3354\") " pod="openshift-marketplace/certified-operators-8swqp" Nov 29 09:38:42 crc kubenswrapper[4795]: I1129 09:38:42.301752 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cae3ce7b-e0ae-49fb-8d06-839566da3354-catalog-content\") pod \"certified-operators-8swqp\" (UID: \"cae3ce7b-e0ae-49fb-8d06-839566da3354\") " 
pod="openshift-marketplace/certified-operators-8swqp" Nov 29 09:38:42 crc kubenswrapper[4795]: I1129 09:38:42.301838 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cae3ce7b-e0ae-49fb-8d06-839566da3354-utilities\") pod \"certified-operators-8swqp\" (UID: \"cae3ce7b-e0ae-49fb-8d06-839566da3354\") " pod="openshift-marketplace/certified-operators-8swqp" Nov 29 09:38:42 crc kubenswrapper[4795]: I1129 09:38:42.404718 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trkk5\" (UniqueName: \"kubernetes.io/projected/cae3ce7b-e0ae-49fb-8d06-839566da3354-kube-api-access-trkk5\") pod \"certified-operators-8swqp\" (UID: \"cae3ce7b-e0ae-49fb-8d06-839566da3354\") " pod="openshift-marketplace/certified-operators-8swqp" Nov 29 09:38:42 crc kubenswrapper[4795]: I1129 09:38:42.404838 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cae3ce7b-e0ae-49fb-8d06-839566da3354-catalog-content\") pod \"certified-operators-8swqp\" (UID: \"cae3ce7b-e0ae-49fb-8d06-839566da3354\") " pod="openshift-marketplace/certified-operators-8swqp" Nov 29 09:38:42 crc kubenswrapper[4795]: I1129 09:38:42.404959 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cae3ce7b-e0ae-49fb-8d06-839566da3354-utilities\") pod \"certified-operators-8swqp\" (UID: \"cae3ce7b-e0ae-49fb-8d06-839566da3354\") " pod="openshift-marketplace/certified-operators-8swqp" Nov 29 09:38:42 crc kubenswrapper[4795]: I1129 09:38:42.405832 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cae3ce7b-e0ae-49fb-8d06-839566da3354-utilities\") pod \"certified-operators-8swqp\" (UID: \"cae3ce7b-e0ae-49fb-8d06-839566da3354\") " 
pod="openshift-marketplace/certified-operators-8swqp" Nov 29 09:38:42 crc kubenswrapper[4795]: I1129 09:38:42.406168 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cae3ce7b-e0ae-49fb-8d06-839566da3354-catalog-content\") pod \"certified-operators-8swqp\" (UID: \"cae3ce7b-e0ae-49fb-8d06-839566da3354\") " pod="openshift-marketplace/certified-operators-8swqp" Nov 29 09:38:42 crc kubenswrapper[4795]: I1129 09:38:42.440670 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trkk5\" (UniqueName: \"kubernetes.io/projected/cae3ce7b-e0ae-49fb-8d06-839566da3354-kube-api-access-trkk5\") pod \"certified-operators-8swqp\" (UID: \"cae3ce7b-e0ae-49fb-8d06-839566da3354\") " pod="openshift-marketplace/certified-operators-8swqp" Nov 29 09:38:42 crc kubenswrapper[4795]: I1129 09:38:42.460927 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8swqp" Nov 29 09:38:43 crc kubenswrapper[4795]: I1129 09:38:43.000797 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8swqp"] Nov 29 09:38:43 crc kubenswrapper[4795]: I1129 09:38:43.877783 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8swqp" event={"ID":"cae3ce7b-e0ae-49fb-8d06-839566da3354","Type":"ContainerDied","Data":"a13805fedb246ab29313e7bf96ed7b6e8601ecfbca629e0ed8073fbfd90afff0"} Nov 29 09:38:43 crc kubenswrapper[4795]: I1129 09:38:43.881608 4795 generic.go:334] "Generic (PLEG): container finished" podID="cae3ce7b-e0ae-49fb-8d06-839566da3354" containerID="a13805fedb246ab29313e7bf96ed7b6e8601ecfbca629e0ed8073fbfd90afff0" exitCode=0 Nov 29 09:38:43 crc kubenswrapper[4795]: I1129 09:38:43.881670 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8swqp" 
event={"ID":"cae3ce7b-e0ae-49fb-8d06-839566da3354","Type":"ContainerStarted","Data":"ec498be89e8a2f4d9da65a68a9f92829fde84c938821ad60be4462474bc8d85c"} Nov 29 09:38:44 crc kubenswrapper[4795]: I1129 09:38:44.893985 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8swqp" event={"ID":"cae3ce7b-e0ae-49fb-8d06-839566da3354","Type":"ContainerStarted","Data":"fc8a06950ba38905d374d2ca28506d6201138582384541f14a299994651a1d17"} Nov 29 09:38:45 crc kubenswrapper[4795]: I1129 09:38:45.910827 4795 generic.go:334] "Generic (PLEG): container finished" podID="cae3ce7b-e0ae-49fb-8d06-839566da3354" containerID="fc8a06950ba38905d374d2ca28506d6201138582384541f14a299994651a1d17" exitCode=0 Nov 29 09:38:45 crc kubenswrapper[4795]: I1129 09:38:45.911028 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8swqp" event={"ID":"cae3ce7b-e0ae-49fb-8d06-839566da3354","Type":"ContainerDied","Data":"fc8a06950ba38905d374d2ca28506d6201138582384541f14a299994651a1d17"} Nov 29 09:38:46 crc kubenswrapper[4795]: I1129 09:38:46.924656 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8swqp" event={"ID":"cae3ce7b-e0ae-49fb-8d06-839566da3354","Type":"ContainerStarted","Data":"cde8922eb55fdb480473e577754bda8842717fed009023288484430b49406756"} Nov 29 09:38:46 crc kubenswrapper[4795]: I1129 09:38:46.962219 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8swqp" podStartSLOduration=2.471376098 podStartE2EDuration="4.962199144s" podCreationTimestamp="2025-11-29 09:38:42 +0000 UTC" firstStartedPulling="2025-11-29 09:38:43.882995576 +0000 UTC m=+7169.858571366" lastFinishedPulling="2025-11-29 09:38:46.373818632 +0000 UTC m=+7172.349394412" observedRunningTime="2025-11-29 09:38:46.956000919 +0000 UTC m=+7172.931576709" watchObservedRunningTime="2025-11-29 09:38:46.962199144 +0000 UTC 
m=+7172.937774934" Nov 29 09:38:49 crc kubenswrapper[4795]: I1129 09:38:49.667279 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_ab20548b-7f96-4eb8-aa44-80425459c0ed/memcached/0.log" Nov 29 09:38:52 crc kubenswrapper[4795]: I1129 09:38:52.461846 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8swqp" Nov 29 09:38:52 crc kubenswrapper[4795]: I1129 09:38:52.463891 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8swqp" Nov 29 09:38:52 crc kubenswrapper[4795]: I1129 09:38:52.510882 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8swqp" Nov 29 09:38:53 crc kubenswrapper[4795]: I1129 09:38:53.040732 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8swqp" Nov 29 09:38:53 crc kubenswrapper[4795]: I1129 09:38:53.090805 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8swqp"] Nov 29 09:38:55 crc kubenswrapper[4795]: I1129 09:38:55.002833 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8swqp" podUID="cae3ce7b-e0ae-49fb-8d06-839566da3354" containerName="registry-server" containerID="cri-o://cde8922eb55fdb480473e577754bda8842717fed009023288484430b49406756" gracePeriod=2 Nov 29 09:38:55 crc kubenswrapper[4795]: I1129 09:38:55.808915 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8swqp" Nov 29 09:38:55 crc kubenswrapper[4795]: I1129 09:38:55.849001 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trkk5\" (UniqueName: \"kubernetes.io/projected/cae3ce7b-e0ae-49fb-8d06-839566da3354-kube-api-access-trkk5\") pod \"cae3ce7b-e0ae-49fb-8d06-839566da3354\" (UID: \"cae3ce7b-e0ae-49fb-8d06-839566da3354\") " Nov 29 09:38:55 crc kubenswrapper[4795]: I1129 09:38:55.849480 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cae3ce7b-e0ae-49fb-8d06-839566da3354-utilities\") pod \"cae3ce7b-e0ae-49fb-8d06-839566da3354\" (UID: \"cae3ce7b-e0ae-49fb-8d06-839566da3354\") " Nov 29 09:38:55 crc kubenswrapper[4795]: I1129 09:38:55.849560 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cae3ce7b-e0ae-49fb-8d06-839566da3354-catalog-content\") pod \"cae3ce7b-e0ae-49fb-8d06-839566da3354\" (UID: \"cae3ce7b-e0ae-49fb-8d06-839566da3354\") " Nov 29 09:38:55 crc kubenswrapper[4795]: I1129 09:38:55.850452 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cae3ce7b-e0ae-49fb-8d06-839566da3354-utilities" (OuterVolumeSpecName: "utilities") pod "cae3ce7b-e0ae-49fb-8d06-839566da3354" (UID: "cae3ce7b-e0ae-49fb-8d06-839566da3354"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 09:38:55 crc kubenswrapper[4795]: I1129 09:38:55.851070 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cae3ce7b-e0ae-49fb-8d06-839566da3354-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 09:38:55 crc kubenswrapper[4795]: I1129 09:38:55.862386 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cae3ce7b-e0ae-49fb-8d06-839566da3354-kube-api-access-trkk5" (OuterVolumeSpecName: "kube-api-access-trkk5") pod "cae3ce7b-e0ae-49fb-8d06-839566da3354" (UID: "cae3ce7b-e0ae-49fb-8d06-839566da3354"). InnerVolumeSpecName "kube-api-access-trkk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 09:38:55 crc kubenswrapper[4795]: I1129 09:38:55.901852 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cae3ce7b-e0ae-49fb-8d06-839566da3354-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cae3ce7b-e0ae-49fb-8d06-839566da3354" (UID: "cae3ce7b-e0ae-49fb-8d06-839566da3354"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 09:38:55 crc kubenswrapper[4795]: I1129 09:38:55.953420 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trkk5\" (UniqueName: \"kubernetes.io/projected/cae3ce7b-e0ae-49fb-8d06-839566da3354-kube-api-access-trkk5\") on node \"crc\" DevicePath \"\"" Nov 29 09:38:55 crc kubenswrapper[4795]: I1129 09:38:55.953445 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cae3ce7b-e0ae-49fb-8d06-839566da3354-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 09:38:56 crc kubenswrapper[4795]: I1129 09:38:56.016325 4795 generic.go:334] "Generic (PLEG): container finished" podID="cae3ce7b-e0ae-49fb-8d06-839566da3354" containerID="cde8922eb55fdb480473e577754bda8842717fed009023288484430b49406756" exitCode=0 Nov 29 09:38:56 crc kubenswrapper[4795]: I1129 09:38:56.016372 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8swqp" event={"ID":"cae3ce7b-e0ae-49fb-8d06-839566da3354","Type":"ContainerDied","Data":"cde8922eb55fdb480473e577754bda8842717fed009023288484430b49406756"} Nov 29 09:38:56 crc kubenswrapper[4795]: I1129 09:38:56.016378 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8swqp" Nov 29 09:38:56 crc kubenswrapper[4795]: I1129 09:38:56.016412 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8swqp" event={"ID":"cae3ce7b-e0ae-49fb-8d06-839566da3354","Type":"ContainerDied","Data":"ec498be89e8a2f4d9da65a68a9f92829fde84c938821ad60be4462474bc8d85c"} Nov 29 09:38:56 crc kubenswrapper[4795]: I1129 09:38:56.016431 4795 scope.go:117] "RemoveContainer" containerID="cde8922eb55fdb480473e577754bda8842717fed009023288484430b49406756" Nov 29 09:38:56 crc kubenswrapper[4795]: I1129 09:38:56.045225 4795 scope.go:117] "RemoveContainer" containerID="fc8a06950ba38905d374d2ca28506d6201138582384541f14a299994651a1d17" Nov 29 09:38:56 crc kubenswrapper[4795]: I1129 09:38:56.062565 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8swqp"] Nov 29 09:38:56 crc kubenswrapper[4795]: I1129 09:38:56.085857 4795 scope.go:117] "RemoveContainer" containerID="a13805fedb246ab29313e7bf96ed7b6e8601ecfbca629e0ed8073fbfd90afff0" Nov 29 09:38:56 crc kubenswrapper[4795]: I1129 09:38:56.108373 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8swqp"] Nov 29 09:38:56 crc kubenswrapper[4795]: I1129 09:38:56.141624 4795 scope.go:117] "RemoveContainer" containerID="cde8922eb55fdb480473e577754bda8842717fed009023288484430b49406756" Nov 29 09:38:56 crc kubenswrapper[4795]: E1129 09:38:56.142060 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cde8922eb55fdb480473e577754bda8842717fed009023288484430b49406756\": container with ID starting with cde8922eb55fdb480473e577754bda8842717fed009023288484430b49406756 not found: ID does not exist" containerID="cde8922eb55fdb480473e577754bda8842717fed009023288484430b49406756" Nov 29 09:38:56 crc kubenswrapper[4795]: I1129 09:38:56.142089 4795 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cde8922eb55fdb480473e577754bda8842717fed009023288484430b49406756"} err="failed to get container status \"cde8922eb55fdb480473e577754bda8842717fed009023288484430b49406756\": rpc error: code = NotFound desc = could not find container \"cde8922eb55fdb480473e577754bda8842717fed009023288484430b49406756\": container with ID starting with cde8922eb55fdb480473e577754bda8842717fed009023288484430b49406756 not found: ID does not exist" Nov 29 09:38:56 crc kubenswrapper[4795]: I1129 09:38:56.142110 4795 scope.go:117] "RemoveContainer" containerID="fc8a06950ba38905d374d2ca28506d6201138582384541f14a299994651a1d17" Nov 29 09:38:56 crc kubenswrapper[4795]: E1129 09:38:56.142361 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc8a06950ba38905d374d2ca28506d6201138582384541f14a299994651a1d17\": container with ID starting with fc8a06950ba38905d374d2ca28506d6201138582384541f14a299994651a1d17 not found: ID does not exist" containerID="fc8a06950ba38905d374d2ca28506d6201138582384541f14a299994651a1d17" Nov 29 09:38:56 crc kubenswrapper[4795]: I1129 09:38:56.142386 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc8a06950ba38905d374d2ca28506d6201138582384541f14a299994651a1d17"} err="failed to get container status \"fc8a06950ba38905d374d2ca28506d6201138582384541f14a299994651a1d17\": rpc error: code = NotFound desc = could not find container \"fc8a06950ba38905d374d2ca28506d6201138582384541f14a299994651a1d17\": container with ID starting with fc8a06950ba38905d374d2ca28506d6201138582384541f14a299994651a1d17 not found: ID does not exist" Nov 29 09:38:56 crc kubenswrapper[4795]: I1129 09:38:56.142400 4795 scope.go:117] "RemoveContainer" containerID="a13805fedb246ab29313e7bf96ed7b6e8601ecfbca629e0ed8073fbfd90afff0" Nov 29 09:38:56 crc kubenswrapper[4795]: E1129 
09:38:56.142650 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a13805fedb246ab29313e7bf96ed7b6e8601ecfbca629e0ed8073fbfd90afff0\": container with ID starting with a13805fedb246ab29313e7bf96ed7b6e8601ecfbca629e0ed8073fbfd90afff0 not found: ID does not exist" containerID="a13805fedb246ab29313e7bf96ed7b6e8601ecfbca629e0ed8073fbfd90afff0" Nov 29 09:38:56 crc kubenswrapper[4795]: I1129 09:38:56.142671 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a13805fedb246ab29313e7bf96ed7b6e8601ecfbca629e0ed8073fbfd90afff0"} err="failed to get container status \"a13805fedb246ab29313e7bf96ed7b6e8601ecfbca629e0ed8073fbfd90afff0\": rpc error: code = NotFound desc = could not find container \"a13805fedb246ab29313e7bf96ed7b6e8601ecfbca629e0ed8073fbfd90afff0\": container with ID starting with a13805fedb246ab29313e7bf96ed7b6e8601ecfbca629e0ed8073fbfd90afff0 not found: ID does not exist" Nov 29 09:38:56 crc kubenswrapper[4795]: I1129 09:38:56.302240 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cae3ce7b-e0ae-49fb-8d06-839566da3354" path="/var/lib/kubelet/pods/cae3ce7b-e0ae-49fb-8d06-839566da3354/volumes" Nov 29 09:39:08 crc kubenswrapper[4795]: I1129 09:39:08.557462 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m_2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef/util/0.log" Nov 29 09:39:08 crc kubenswrapper[4795]: I1129 09:39:08.817921 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m_2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef/util/0.log" Nov 29 09:39:08 crc kubenswrapper[4795]: I1129 09:39:08.819030 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m_2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef/pull/0.log" Nov 29 09:39:08 crc kubenswrapper[4795]: I1129 09:39:08.889019 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m_2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef/pull/0.log" Nov 29 09:39:09 crc kubenswrapper[4795]: I1129 09:39:09.009703 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m_2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef/util/0.log" Nov 29 09:39:09 crc kubenswrapper[4795]: I1129 09:39:09.009865 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m_2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef/pull/0.log" Nov 29 09:39:09 crc kubenswrapper[4795]: I1129 09:39:09.047270 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6f4a794a7e4af1c5b0c6f5512a2ac7527d4469dfd9e88afc91933ffc30kzj2m_2d7978d1-9d8a-4d2d-a6f8-d44c5c2b4eef/extract/0.log" Nov 29 09:39:09 crc kubenswrapper[4795]: I1129 09:39:09.277159 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-qtbtd_86217734-815f-461c-a32d-8d744192003e/kube-rbac-proxy/0.log" Nov 29 09:39:09 crc kubenswrapper[4795]: I1129 09:39:09.284209 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-klbwf_7bed5103-966d-43d3-92f1-73a2f8b6d551/kube-rbac-proxy/0.log" Nov 29 09:39:09 crc kubenswrapper[4795]: I1129 09:39:09.351640 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-klbwf_7bed5103-966d-43d3-92f1-73a2f8b6d551/manager/0.log" Nov 29 09:39:09 crc 
kubenswrapper[4795]: I1129 09:39:09.474924 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-qtbtd_86217734-815f-461c-a32d-8d744192003e/manager/0.log" Nov 29 09:39:09 crc kubenswrapper[4795]: I1129 09:39:09.556789 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-8mk4s_7bfb1aea-ee92-4033-ba5f-c7ac79c60d0e/kube-rbac-proxy/0.log" Nov 29 09:39:09 crc kubenswrapper[4795]: I1129 09:39:09.604782 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-8mk4s_7bfb1aea-ee92-4033-ba5f-c7ac79c60d0e/manager/0.log" Nov 29 09:39:09 crc kubenswrapper[4795]: I1129 09:39:09.768131 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-668d9c48b9-dr8f4_3ff17662-f7b1-4870-9ef2-18a81fdb5d73/kube-rbac-proxy/0.log" Nov 29 09:39:09 crc kubenswrapper[4795]: I1129 09:39:09.845158 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-668d9c48b9-dr8f4_3ff17662-f7b1-4870-9ef2-18a81fdb5d73/manager/0.log" Nov 29 09:39:09 crc kubenswrapper[4795]: I1129 09:39:09.914353 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-xjns5_36a279fc-25f1-407e-a1c6-6b8689d68cd2/kube-rbac-proxy/0.log" Nov 29 09:39:10 crc kubenswrapper[4795]: I1129 09:39:10.051030 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-xjns5_36a279fc-25f1-407e-a1c6-6b8689d68cd2/manager/0.log" Nov 29 09:39:10 crc kubenswrapper[4795]: I1129 09:39:10.051049 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-spfsh_d4e1473d-8426-452b-8030-764680cc5a20/kube-rbac-proxy/0.log" Nov 29 09:39:10 crc kubenswrapper[4795]: I1129 09:39:10.125177 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-spfsh_d4e1473d-8426-452b-8030-764680cc5a20/manager/0.log" Nov 29 09:39:10 crc kubenswrapper[4795]: I1129 09:39:10.278358 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-rd6w8_cc9825dd-340b-4dda-ab8a-91d95ee67678/kube-rbac-proxy/0.log" Nov 29 09:39:10 crc kubenswrapper[4795]: I1129 09:39:10.387007 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-rd6w8_cc9825dd-340b-4dda-ab8a-91d95ee67678/manager/0.log" Nov 29 09:39:10 crc kubenswrapper[4795]: I1129 09:39:10.503531 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-5q5dd_b53e6db5-a3fe-4f86-9b6a-49eb7d4629fd/kube-rbac-proxy/0.log" Nov 29 09:39:10 crc kubenswrapper[4795]: I1129 09:39:10.511180 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-5q5dd_b53e6db5-a3fe-4f86-9b6a-49eb7d4629fd/manager/0.log" Nov 29 09:39:10 crc kubenswrapper[4795]: I1129 09:39:10.616208 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-546d4bdf48-cn6z7_1d6dd43f-eee0-4257-adbb-a53218a86eb9/kube-rbac-proxy/0.log" Nov 29 09:39:10 crc kubenswrapper[4795]: I1129 09:39:10.806566 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6546668bfd-mvrzj_36512615-d21b-4484-af03-ffa1d325883b/kube-rbac-proxy/0.log" Nov 29 09:39:10 crc kubenswrapper[4795]: I1129 
09:39:10.808456 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-546d4bdf48-cn6z7_1d6dd43f-eee0-4257-adbb-a53218a86eb9/manager/0.log" Nov 29 09:39:10 crc kubenswrapper[4795]: I1129 09:39:10.864733 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6546668bfd-mvrzj_36512615-d21b-4484-af03-ffa1d325883b/manager/0.log" Nov 29 09:39:10 crc kubenswrapper[4795]: I1129 09:39:10.986413 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-cmvt9_94d164fa-c521-4617-8338-1eba3ee1c31d/kube-rbac-proxy/0.log" Nov 29 09:39:11 crc kubenswrapper[4795]: I1129 09:39:11.077262 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-cmvt9_94d164fa-c521-4617-8338-1eba3ee1c31d/manager/0.log" Nov 29 09:39:11 crc kubenswrapper[4795]: I1129 09:39:11.156873 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-njfdg_bfb2e88b-d2db-4afa-8511-e1a896eb9039/kube-rbac-proxy/0.log" Nov 29 09:39:11 crc kubenswrapper[4795]: I1129 09:39:11.257083 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-njfdg_bfb2e88b-d2db-4afa-8511-e1a896eb9039/manager/0.log" Nov 29 09:39:11 crc kubenswrapper[4795]: I1129 09:39:11.321061 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-966jn_8de1af69-5c67-4669-83d5-02de0ecd32d3/kube-rbac-proxy/0.log" Nov 29 09:39:11 crc kubenswrapper[4795]: I1129 09:39:11.433707 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-966jn_8de1af69-5c67-4669-83d5-02de0ecd32d3/manager/0.log" Nov 29 
09:39:11 crc kubenswrapper[4795]: I1129 09:39:11.532279 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-d5d4r_f2367076-6d52-4047-908c-c1e32c4ca2c4/kube-rbac-proxy/0.log" Nov 29 09:39:11 crc kubenswrapper[4795]: I1129 09:39:11.539355 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-d5d4r_f2367076-6d52-4047-908c-c1e32c4ca2c4/manager/0.log" Nov 29 09:39:11 crc kubenswrapper[4795]: I1129 09:39:11.685013 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl_543b785e-bdb9-4582-b9dd-8a987b5129f6/kube-rbac-proxy/0.log" Nov 29 09:39:11 crc kubenswrapper[4795]: I1129 09:39:11.767004 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4wf8kl_543b785e-bdb9-4582-b9dd-8a987b5129f6/manager/0.log" Nov 29 09:39:12 crc kubenswrapper[4795]: I1129 09:39:12.161080 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-fc66n_cf43b8b5-a117-4ed8-853b-869086fd5197/registry-server/0.log" Nov 29 09:39:12 crc kubenswrapper[4795]: I1129 09:39:12.200117 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5c79c4cd8-99qw8_1b9bc471-e43a-403f-8bd9-83744b7746a7/operator/0.log" Nov 29 09:39:12 crc kubenswrapper[4795]: I1129 09:39:12.440475 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-vdmph_a197813b-f5c3-49c1-81f6-b6b2e08e0617/kube-rbac-proxy/0.log" Nov 29 09:39:12 crc kubenswrapper[4795]: I1129 09:39:12.511746 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-vdmph_a197813b-f5c3-49c1-81f6-b6b2e08e0617/manager/0.log" Nov 29 09:39:12 crc kubenswrapper[4795]: I1129 09:39:12.624552 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-9g674_f5e4dece-2c02-41a1-ac8b-ec46eb2a3d45/kube-rbac-proxy/0.log" Nov 29 09:39:12 crc kubenswrapper[4795]: I1129 09:39:12.754037 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-9g674_f5e4dece-2c02-41a1-ac8b-ec46eb2a3d45/manager/0.log" Nov 29 09:39:12 crc kubenswrapper[4795]: I1129 09:39:12.839930 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-27n4r_7ce03a92-9abd-485c-b949-fb95301de889/operator/0.log" Nov 29 09:39:12 crc kubenswrapper[4795]: I1129 09:39:12.969925 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-6fkwt_e56bb4ff-9936-4876-8616-0958e9892fa3/kube-rbac-proxy/0.log" Nov 29 09:39:13 crc kubenswrapper[4795]: I1129 09:39:13.039899 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-6fkwt_e56bb4ff-9936-4876-8616-0958e9892fa3/manager/0.log" Nov 29 09:39:13 crc kubenswrapper[4795]: I1129 09:39:13.106628 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-d486dbd66-bt6tr_c75b943b-8281-4fbd-a94a-3d5db0475d5d/kube-rbac-proxy/0.log" Nov 29 09:39:13 crc kubenswrapper[4795]: I1129 09:39:13.302436 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-k6h4m_4eea915e-348e-48a3-b5e1-767648dac19d/kube-rbac-proxy/0.log" Nov 29 09:39:13 crc kubenswrapper[4795]: I1129 09:39:13.319046 
4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-k6h4m_4eea915e-348e-48a3-b5e1-767648dac19d/manager/0.log" Nov 29 09:39:13 crc kubenswrapper[4795]: I1129 09:39:13.487832 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-8688fc7b8-5sbpb_868e2666-5606-4891-ba11-ac02f852c48d/manager/0.log" Nov 29 09:39:13 crc kubenswrapper[4795]: I1129 09:39:13.510042 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-h74tt_78c2fefa-d0f0-4123-9513-231b2c3ca5fd/manager/0.log" Nov 29 09:39:13 crc kubenswrapper[4795]: I1129 09:39:13.559841 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-h74tt_78c2fefa-d0f0-4123-9513-231b2c3ca5fd/kube-rbac-proxy/0.log" Nov 29 09:39:13 crc kubenswrapper[4795]: I1129 09:39:13.568706 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-d486dbd66-bt6tr_c75b943b-8281-4fbd-a94a-3d5db0475d5d/manager/0.log" Nov 29 09:39:31 crc kubenswrapper[4795]: I1129 09:39:31.810753 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-t6scb_a9829eb2-ba44-41ca-a0f7-fd92d6114927/control-plane-machine-set-operator/0.log" Nov 29 09:39:31 crc kubenswrapper[4795]: I1129 09:39:31.962352 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-v4fx7_725af35a-cc1c-4178-ae7f-e909af583a5f/kube-rbac-proxy/0.log" Nov 29 09:39:32 crc kubenswrapper[4795]: I1129 09:39:32.032607 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-v4fx7_725af35a-cc1c-4178-ae7f-e909af583a5f/machine-api-operator/0.log" Nov 29 09:39:44 
crc kubenswrapper[4795]: I1129 09:39:44.038907 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-l4ctt_286839ed-cd16-46c0-81a4-d0c90bb32fb4/cert-manager-controller/0.log" Nov 29 09:39:44 crc kubenswrapper[4795]: I1129 09:39:44.184344 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-2vns9_a921719c-ebed-49c3-9482-87b58c96c819/cert-manager-cainjector/0.log" Nov 29 09:39:44 crc kubenswrapper[4795]: I1129 09:39:44.194622 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-mn9s2_780eadcf-077c-4f71-8570-5ebbca30d61e/cert-manager-webhook/0.log" Nov 29 09:39:56 crc kubenswrapper[4795]: I1129 09:39:56.766303 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-qxwvv_127f1845-59b8-4b9f-9702-2aae122b06e3/nmstate-console-plugin/0.log" Nov 29 09:39:56 crc kubenswrapper[4795]: I1129 09:39:56.844490 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zqvpk"] Nov 29 09:39:56 crc kubenswrapper[4795]: E1129 09:39:56.845032 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae3ce7b-e0ae-49fb-8d06-839566da3354" containerName="registry-server" Nov 29 09:39:56 crc kubenswrapper[4795]: I1129 09:39:56.845049 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae3ce7b-e0ae-49fb-8d06-839566da3354" containerName="registry-server" Nov 29 09:39:56 crc kubenswrapper[4795]: E1129 09:39:56.845069 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae3ce7b-e0ae-49fb-8d06-839566da3354" containerName="extract-utilities" Nov 29 09:39:56 crc kubenswrapper[4795]: I1129 09:39:56.845077 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae3ce7b-e0ae-49fb-8d06-839566da3354" containerName="extract-utilities" Nov 29 09:39:56 crc kubenswrapper[4795]: E1129 09:39:56.845085 
4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae3ce7b-e0ae-49fb-8d06-839566da3354" containerName="extract-content" Nov 29 09:39:56 crc kubenswrapper[4795]: I1129 09:39:56.845090 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae3ce7b-e0ae-49fb-8d06-839566da3354" containerName="extract-content" Nov 29 09:39:56 crc kubenswrapper[4795]: I1129 09:39:56.845345 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="cae3ce7b-e0ae-49fb-8d06-839566da3354" containerName="registry-server" Nov 29 09:39:56 crc kubenswrapper[4795]: I1129 09:39:56.849505 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zqvpk" Nov 29 09:39:56 crc kubenswrapper[4795]: I1129 09:39:56.868295 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zqvpk"] Nov 29 09:39:56 crc kubenswrapper[4795]: I1129 09:39:56.926705 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-c5pwq_b1c76aa0-5bd2-4df9-8555-83bb44cb23b7/nmstate-handler/0.log" Nov 29 09:39:56 crc kubenswrapper[4795]: I1129 09:39:56.972820 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-f8cgx_471b40b9-dbc5-467e-abd1-18e64ea6a111/kube-rbac-proxy/0.log" Nov 29 09:39:56 crc kubenswrapper[4795]: I1129 09:39:56.981030 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clf2d\" (UniqueName: \"kubernetes.io/projected/9e6e6d17-42c8-46ac-84b3-1554f65c0482-kube-api-access-clf2d\") pod \"community-operators-zqvpk\" (UID: \"9e6e6d17-42c8-46ac-84b3-1554f65c0482\") " pod="openshift-marketplace/community-operators-zqvpk" Nov 29 09:39:56 crc kubenswrapper[4795]: I1129 09:39:56.981327 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9e6e6d17-42c8-46ac-84b3-1554f65c0482-catalog-content\") pod \"community-operators-zqvpk\" (UID: \"9e6e6d17-42c8-46ac-84b3-1554f65c0482\") " pod="openshift-marketplace/community-operators-zqvpk" Nov 29 09:39:56 crc kubenswrapper[4795]: I1129 09:39:56.981834 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e6e6d17-42c8-46ac-84b3-1554f65c0482-utilities\") pod \"community-operators-zqvpk\" (UID: \"9e6e6d17-42c8-46ac-84b3-1554f65c0482\") " pod="openshift-marketplace/community-operators-zqvpk" Nov 29 09:39:57 crc kubenswrapper[4795]: I1129 09:39:57.028094 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-f8cgx_471b40b9-dbc5-467e-abd1-18e64ea6a111/nmstate-metrics/0.log" Nov 29 09:39:57 crc kubenswrapper[4795]: I1129 09:39:57.083505 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e6e6d17-42c8-46ac-84b3-1554f65c0482-utilities\") pod \"community-operators-zqvpk\" (UID: \"9e6e6d17-42c8-46ac-84b3-1554f65c0482\") " pod="openshift-marketplace/community-operators-zqvpk" Nov 29 09:39:57 crc kubenswrapper[4795]: I1129 09:39:57.083615 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clf2d\" (UniqueName: \"kubernetes.io/projected/9e6e6d17-42c8-46ac-84b3-1554f65c0482-kube-api-access-clf2d\") pod \"community-operators-zqvpk\" (UID: \"9e6e6d17-42c8-46ac-84b3-1554f65c0482\") " pod="openshift-marketplace/community-operators-zqvpk" Nov 29 09:39:57 crc kubenswrapper[4795]: I1129 09:39:57.083679 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e6e6d17-42c8-46ac-84b3-1554f65c0482-catalog-content\") pod \"community-operators-zqvpk\" (UID: \"9e6e6d17-42c8-46ac-84b3-1554f65c0482\") " 
pod="openshift-marketplace/community-operators-zqvpk" Nov 29 09:39:57 crc kubenswrapper[4795]: I1129 09:39:57.084104 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e6e6d17-42c8-46ac-84b3-1554f65c0482-utilities\") pod \"community-operators-zqvpk\" (UID: \"9e6e6d17-42c8-46ac-84b3-1554f65c0482\") " pod="openshift-marketplace/community-operators-zqvpk" Nov 29 09:39:57 crc kubenswrapper[4795]: I1129 09:39:57.084124 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e6e6d17-42c8-46ac-84b3-1554f65c0482-catalog-content\") pod \"community-operators-zqvpk\" (UID: \"9e6e6d17-42c8-46ac-84b3-1554f65c0482\") " pod="openshift-marketplace/community-operators-zqvpk" Nov 29 09:39:57 crc kubenswrapper[4795]: I1129 09:39:57.103370 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clf2d\" (UniqueName: \"kubernetes.io/projected/9e6e6d17-42c8-46ac-84b3-1554f65c0482-kube-api-access-clf2d\") pod \"community-operators-zqvpk\" (UID: \"9e6e6d17-42c8-46ac-84b3-1554f65c0482\") " pod="openshift-marketplace/community-operators-zqvpk" Nov 29 09:39:57 crc kubenswrapper[4795]: I1129 09:39:57.167461 4795 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zqvpk" Nov 29 09:39:57 crc kubenswrapper[4795]: I1129 09:39:57.169475 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-7c5mp_cadc9dc9-f67d-440d-9169-9f7816d26a56/nmstate-operator/0.log" Nov 29 09:39:57 crc kubenswrapper[4795]: I1129 09:39:57.289791 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-jk8vq_dcd14fe7-954f-445a-bd8d-0a62399e71d5/nmstate-webhook/0.log" Nov 29 09:39:57 crc kubenswrapper[4795]: I1129 09:39:57.739992 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zqvpk"] Nov 29 09:39:58 crc kubenswrapper[4795]: I1129 09:39:58.706075 4795 generic.go:334] "Generic (PLEG): container finished" podID="9e6e6d17-42c8-46ac-84b3-1554f65c0482" containerID="07b24e3e05a189bf3b57e540e094050dd77971ab1c58cbf03217d637744263e4" exitCode=0 Nov 29 09:39:58 crc kubenswrapper[4795]: I1129 09:39:58.706361 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zqvpk" event={"ID":"9e6e6d17-42c8-46ac-84b3-1554f65c0482","Type":"ContainerDied","Data":"07b24e3e05a189bf3b57e540e094050dd77971ab1c58cbf03217d637744263e4"} Nov 29 09:39:58 crc kubenswrapper[4795]: I1129 09:39:58.706389 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zqvpk" event={"ID":"9e6e6d17-42c8-46ac-84b3-1554f65c0482","Type":"ContainerStarted","Data":"4cf91a8c5816ba6fe502f9164abcc07918c1885996095743dbf4b8b5a723d127"} Nov 29 09:39:59 crc kubenswrapper[4795]: I1129 09:39:59.717860 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zqvpk" event={"ID":"9e6e6d17-42c8-46ac-84b3-1554f65c0482","Type":"ContainerStarted","Data":"72b1205df1197395b0824a1023efbe41db31e97cb0b06d90eb53ead850a84444"} Nov 29 09:40:00 crc kubenswrapper[4795]: 
I1129 09:40:00.729654 4795 generic.go:334] "Generic (PLEG): container finished" podID="9e6e6d17-42c8-46ac-84b3-1554f65c0482" containerID="72b1205df1197395b0824a1023efbe41db31e97cb0b06d90eb53ead850a84444" exitCode=0 Nov 29 09:40:00 crc kubenswrapper[4795]: I1129 09:40:00.729764 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zqvpk" event={"ID":"9e6e6d17-42c8-46ac-84b3-1554f65c0482","Type":"ContainerDied","Data":"72b1205df1197395b0824a1023efbe41db31e97cb0b06d90eb53ead850a84444"} Nov 29 09:40:01 crc kubenswrapper[4795]: I1129 09:40:01.743922 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zqvpk" event={"ID":"9e6e6d17-42c8-46ac-84b3-1554f65c0482","Type":"ContainerStarted","Data":"4099ebe5b400117ff4bfda6ef4fda37411666c0d722e759d9f372b11d09545af"} Nov 29 09:40:01 crc kubenswrapper[4795]: I1129 09:40:01.782699 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zqvpk" podStartSLOduration=3.291823704 podStartE2EDuration="5.78267397s" podCreationTimestamp="2025-11-29 09:39:56 +0000 UTC" firstStartedPulling="2025-11-29 09:39:58.708566666 +0000 UTC m=+7244.684142456" lastFinishedPulling="2025-11-29 09:40:01.199416932 +0000 UTC m=+7247.174992722" observedRunningTime="2025-11-29 09:40:01.765520504 +0000 UTC m=+7247.741096294" watchObservedRunningTime="2025-11-29 09:40:01.78267397 +0000 UTC m=+7247.758249800" Nov 29 09:40:07 crc kubenswrapper[4795]: I1129 09:40:07.168050 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zqvpk" Nov 29 09:40:07 crc kubenswrapper[4795]: I1129 09:40:07.169711 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zqvpk" Nov 29 09:40:07 crc kubenswrapper[4795]: I1129 09:40:07.261259 4795 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/community-operators-zqvpk" Nov 29 09:40:07 crc kubenswrapper[4795]: I1129 09:40:07.884041 4795 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zqvpk" Nov 29 09:40:07 crc kubenswrapper[4795]: I1129 09:40:07.956246 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zqvpk"] Nov 29 09:40:09 crc kubenswrapper[4795]: I1129 09:40:09.851519 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zqvpk" podUID="9e6e6d17-42c8-46ac-84b3-1554f65c0482" containerName="registry-server" containerID="cri-o://4099ebe5b400117ff4bfda6ef4fda37411666c0d722e759d9f372b11d09545af" gracePeriod=2 Nov 29 09:40:10 crc kubenswrapper[4795]: I1129 09:40:10.452368 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zqvpk" Nov 29 09:40:10 crc kubenswrapper[4795]: I1129 09:40:10.511563 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e6e6d17-42c8-46ac-84b3-1554f65c0482-catalog-content\") pod \"9e6e6d17-42c8-46ac-84b3-1554f65c0482\" (UID: \"9e6e6d17-42c8-46ac-84b3-1554f65c0482\") " Nov 29 09:40:10 crc kubenswrapper[4795]: I1129 09:40:10.511693 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clf2d\" (UniqueName: \"kubernetes.io/projected/9e6e6d17-42c8-46ac-84b3-1554f65c0482-kube-api-access-clf2d\") pod \"9e6e6d17-42c8-46ac-84b3-1554f65c0482\" (UID: \"9e6e6d17-42c8-46ac-84b3-1554f65c0482\") " Nov 29 09:40:10 crc kubenswrapper[4795]: I1129 09:40:10.511760 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e6e6d17-42c8-46ac-84b3-1554f65c0482-utilities\") pod 
\"9e6e6d17-42c8-46ac-84b3-1554f65c0482\" (UID: \"9e6e6d17-42c8-46ac-84b3-1554f65c0482\") " Nov 29 09:40:10 crc kubenswrapper[4795]: I1129 09:40:10.513149 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e6e6d17-42c8-46ac-84b3-1554f65c0482-utilities" (OuterVolumeSpecName: "utilities") pod "9e6e6d17-42c8-46ac-84b3-1554f65c0482" (UID: "9e6e6d17-42c8-46ac-84b3-1554f65c0482"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 09:40:10 crc kubenswrapper[4795]: I1129 09:40:10.525526 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e6e6d17-42c8-46ac-84b3-1554f65c0482-kube-api-access-clf2d" (OuterVolumeSpecName: "kube-api-access-clf2d") pod "9e6e6d17-42c8-46ac-84b3-1554f65c0482" (UID: "9e6e6d17-42c8-46ac-84b3-1554f65c0482"). InnerVolumeSpecName "kube-api-access-clf2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 09:40:10 crc kubenswrapper[4795]: I1129 09:40:10.608946 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e6e6d17-42c8-46ac-84b3-1554f65c0482-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9e6e6d17-42c8-46ac-84b3-1554f65c0482" (UID: "9e6e6d17-42c8-46ac-84b3-1554f65c0482"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 09:40:10 crc kubenswrapper[4795]: I1129 09:40:10.614903 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clf2d\" (UniqueName: \"kubernetes.io/projected/9e6e6d17-42c8-46ac-84b3-1554f65c0482-kube-api-access-clf2d\") on node \"crc\" DevicePath \"\"" Nov 29 09:40:10 crc kubenswrapper[4795]: I1129 09:40:10.614931 4795 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e6e6d17-42c8-46ac-84b3-1554f65c0482-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 09:40:10 crc kubenswrapper[4795]: I1129 09:40:10.614940 4795 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e6e6d17-42c8-46ac-84b3-1554f65c0482-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 09:40:10 crc kubenswrapper[4795]: I1129 09:40:10.696096 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-585cfc87fc-tt7jf_48d9911d-3ed8-4474-9537-cbfcb462dd44/manager/0.log" Nov 29 09:40:10 crc kubenswrapper[4795]: I1129 09:40:10.715093 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-585cfc87fc-tt7jf_48d9911d-3ed8-4474-9537-cbfcb462dd44/kube-rbac-proxy/0.log" Nov 29 09:40:10 crc kubenswrapper[4795]: I1129 09:40:10.860983 4795 generic.go:334] "Generic (PLEG): container finished" podID="9e6e6d17-42c8-46ac-84b3-1554f65c0482" containerID="4099ebe5b400117ff4bfda6ef4fda37411666c0d722e759d9f372b11d09545af" exitCode=0 Nov 29 09:40:10 crc kubenswrapper[4795]: I1129 09:40:10.861011 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zqvpk" event={"ID":"9e6e6d17-42c8-46ac-84b3-1554f65c0482","Type":"ContainerDied","Data":"4099ebe5b400117ff4bfda6ef4fda37411666c0d722e759d9f372b11d09545af"} Nov 29 09:40:10 crc 
kubenswrapper[4795]: I1129 09:40:10.861058 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zqvpk" event={"ID":"9e6e6d17-42c8-46ac-84b3-1554f65c0482","Type":"ContainerDied","Data":"4cf91a8c5816ba6fe502f9164abcc07918c1885996095743dbf4b8b5a723d127"} Nov 29 09:40:10 crc kubenswrapper[4795]: I1129 09:40:10.861100 4795 scope.go:117] "RemoveContainer" containerID="4099ebe5b400117ff4bfda6ef4fda37411666c0d722e759d9f372b11d09545af" Nov 29 09:40:10 crc kubenswrapper[4795]: I1129 09:40:10.861115 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zqvpk" Nov 29 09:40:10 crc kubenswrapper[4795]: I1129 09:40:10.884066 4795 scope.go:117] "RemoveContainer" containerID="72b1205df1197395b0824a1023efbe41db31e97cb0b06d90eb53ead850a84444" Nov 29 09:40:10 crc kubenswrapper[4795]: I1129 09:40:10.911165 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zqvpk"] Nov 29 09:40:10 crc kubenswrapper[4795]: I1129 09:40:10.918966 4795 scope.go:117] "RemoveContainer" containerID="07b24e3e05a189bf3b57e540e094050dd77971ab1c58cbf03217d637744263e4" Nov 29 09:40:10 crc kubenswrapper[4795]: I1129 09:40:10.922934 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zqvpk"] Nov 29 09:40:10 crc kubenswrapper[4795]: I1129 09:40:10.958211 4795 scope.go:117] "RemoveContainer" containerID="4099ebe5b400117ff4bfda6ef4fda37411666c0d722e759d9f372b11d09545af" Nov 29 09:40:10 crc kubenswrapper[4795]: E1129 09:40:10.958673 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4099ebe5b400117ff4bfda6ef4fda37411666c0d722e759d9f372b11d09545af\": container with ID starting with 4099ebe5b400117ff4bfda6ef4fda37411666c0d722e759d9f372b11d09545af not found: ID does not exist" 
containerID="4099ebe5b400117ff4bfda6ef4fda37411666c0d722e759d9f372b11d09545af" Nov 29 09:40:10 crc kubenswrapper[4795]: I1129 09:40:10.958714 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4099ebe5b400117ff4bfda6ef4fda37411666c0d722e759d9f372b11d09545af"} err="failed to get container status \"4099ebe5b400117ff4bfda6ef4fda37411666c0d722e759d9f372b11d09545af\": rpc error: code = NotFound desc = could not find container \"4099ebe5b400117ff4bfda6ef4fda37411666c0d722e759d9f372b11d09545af\": container with ID starting with 4099ebe5b400117ff4bfda6ef4fda37411666c0d722e759d9f372b11d09545af not found: ID does not exist" Nov 29 09:40:10 crc kubenswrapper[4795]: I1129 09:40:10.958741 4795 scope.go:117] "RemoveContainer" containerID="72b1205df1197395b0824a1023efbe41db31e97cb0b06d90eb53ead850a84444" Nov 29 09:40:10 crc kubenswrapper[4795]: E1129 09:40:10.959028 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72b1205df1197395b0824a1023efbe41db31e97cb0b06d90eb53ead850a84444\": container with ID starting with 72b1205df1197395b0824a1023efbe41db31e97cb0b06d90eb53ead850a84444 not found: ID does not exist" containerID="72b1205df1197395b0824a1023efbe41db31e97cb0b06d90eb53ead850a84444" Nov 29 09:40:10 crc kubenswrapper[4795]: I1129 09:40:10.959061 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72b1205df1197395b0824a1023efbe41db31e97cb0b06d90eb53ead850a84444"} err="failed to get container status \"72b1205df1197395b0824a1023efbe41db31e97cb0b06d90eb53ead850a84444\": rpc error: code = NotFound desc = could not find container \"72b1205df1197395b0824a1023efbe41db31e97cb0b06d90eb53ead850a84444\": container with ID starting with 72b1205df1197395b0824a1023efbe41db31e97cb0b06d90eb53ead850a84444 not found: ID does not exist" Nov 29 09:40:10 crc kubenswrapper[4795]: I1129 09:40:10.959083 4795 scope.go:117] 
"RemoveContainer" containerID="07b24e3e05a189bf3b57e540e094050dd77971ab1c58cbf03217d637744263e4" Nov 29 09:40:10 crc kubenswrapper[4795]: E1129 09:40:10.959939 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07b24e3e05a189bf3b57e540e094050dd77971ab1c58cbf03217d637744263e4\": container with ID starting with 07b24e3e05a189bf3b57e540e094050dd77971ab1c58cbf03217d637744263e4 not found: ID does not exist" containerID="07b24e3e05a189bf3b57e540e094050dd77971ab1c58cbf03217d637744263e4" Nov 29 09:40:10 crc kubenswrapper[4795]: I1129 09:40:10.959988 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07b24e3e05a189bf3b57e540e094050dd77971ab1c58cbf03217d637744263e4"} err="failed to get container status \"07b24e3e05a189bf3b57e540e094050dd77971ab1c58cbf03217d637744263e4\": rpc error: code = NotFound desc = could not find container \"07b24e3e05a189bf3b57e540e094050dd77971ab1c58cbf03217d637744263e4\": container with ID starting with 07b24e3e05a189bf3b57e540e094050dd77971ab1c58cbf03217d637744263e4 not found: ID does not exist" Nov 29 09:40:11 crc kubenswrapper[4795]: I1129 09:40:11.941414 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 09:40:11 crc kubenswrapper[4795]: I1129 09:40:11.941488 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 09:40:12 crc kubenswrapper[4795]: I1129 09:40:12.293050 4795 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="9e6e6d17-42c8-46ac-84b3-1554f65c0482" path="/var/lib/kubelet/pods/9e6e6d17-42c8-46ac-84b3-1554f65c0482/volumes" Nov 29 09:40:26 crc kubenswrapper[4795]: I1129 09:40:26.181653 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-ff9846bd-gh2bv_ba3b17d1-c4c3-4575-b722-c8134c6cd690/cluster-logging-operator/0.log" Nov 29 09:40:26 crc kubenswrapper[4795]: I1129 09:40:26.391928 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-plv7b_a6a9c50b-4559-45f6-a382-a236c88aa72e/collector/0.log" Nov 29 09:40:26 crc kubenswrapper[4795]: I1129 09:40:26.394562 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_65b29f76-cf84-4166-b1b1-17927cbfd032/loki-compactor/0.log" Nov 29 09:40:26 crc kubenswrapper[4795]: I1129 09:40:26.576700 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-76cc67bf56-2j6fx_2fd41086-3cec-46c6-a4ed-82885461095c/loki-distributor/0.log" Nov 29 09:40:26 crc kubenswrapper[4795]: I1129 09:40:26.627657 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-575bf4587d-cslrg_04a66bf0-d1a8-4bf7-85e4-8974cc247cd0/gateway/0.log" Nov 29 09:40:26 crc kubenswrapper[4795]: I1129 09:40:26.665509 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-575bf4587d-cslrg_04a66bf0-d1a8-4bf7-85e4-8974cc247cd0/opa/0.log" Nov 29 09:40:26 crc kubenswrapper[4795]: I1129 09:40:26.816985 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-575bf4587d-fvfcs_c5d428b2-eb39-4936-819d-08321d96d015/gateway/0.log" Nov 29 09:40:26 crc kubenswrapper[4795]: I1129 09:40:26.867913 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_logging-loki-gateway-575bf4587d-fvfcs_c5d428b2-eb39-4936-819d-08321d96d015/opa/0.log" Nov 29 09:40:27 crc kubenswrapper[4795]: I1129 09:40:27.023136 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_419b2242-f6e9-429b-89fe-a8e499b5952b/loki-index-gateway/0.log" Nov 29 09:40:27 crc kubenswrapper[4795]: I1129 09:40:27.167917 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_ec05fb5e-2fc9-424a-a305-4ac1734df8d5/loki-ingester/0.log" Nov 29 09:40:27 crc kubenswrapper[4795]: I1129 09:40:27.286426 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-5895d59bb8-jqcx2_996554b0-3876-4c69-be10-a5f2c4a5c2e4/loki-querier/0.log" Nov 29 09:40:27 crc kubenswrapper[4795]: I1129 09:40:27.396073 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-84558f7c9f-g2h7b_73ef818d-4038-418e-87e6-a16224e788c5/loki-query-frontend/0.log" Nov 29 09:40:41 crc kubenswrapper[4795]: I1129 09:40:41.941102 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 09:40:41 crc kubenswrapper[4795]: I1129 09:40:41.941745 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 09:40:42 crc kubenswrapper[4795]: I1129 09:40:42.970072 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-f8648f98b-wc858_402aa6f5-7950-4290-ab83-bd5bafa2a8d7/kube-rbac-proxy/0.log" Nov 29 09:40:43 crc kubenswrapper[4795]: I1129 09:40:43.160452 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-wc858_402aa6f5-7950-4290-ab83-bd5bafa2a8d7/controller/0.log" Nov 29 09:40:43 crc kubenswrapper[4795]: I1129 09:40:43.243960 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tkql9_166b46fd-e087-476d-9491-d173847e5fb9/cp-frr-files/0.log" Nov 29 09:40:43 crc kubenswrapper[4795]: I1129 09:40:43.465932 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tkql9_166b46fd-e087-476d-9491-d173847e5fb9/cp-reloader/0.log" Nov 29 09:40:43 crc kubenswrapper[4795]: I1129 09:40:43.465966 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tkql9_166b46fd-e087-476d-9491-d173847e5fb9/cp-frr-files/0.log" Nov 29 09:40:43 crc kubenswrapper[4795]: I1129 09:40:43.541136 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tkql9_166b46fd-e087-476d-9491-d173847e5fb9/cp-metrics/0.log" Nov 29 09:40:43 crc kubenswrapper[4795]: I1129 09:40:43.553114 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tkql9_166b46fd-e087-476d-9491-d173847e5fb9/cp-reloader/0.log" Nov 29 09:40:43 crc kubenswrapper[4795]: I1129 09:40:43.784162 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tkql9_166b46fd-e087-476d-9491-d173847e5fb9/cp-frr-files/0.log" Nov 29 09:40:43 crc kubenswrapper[4795]: I1129 09:40:43.801538 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tkql9_166b46fd-e087-476d-9491-d173847e5fb9/cp-metrics/0.log" Nov 29 09:40:43 crc kubenswrapper[4795]: I1129 09:40:43.831356 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-tkql9_166b46fd-e087-476d-9491-d173847e5fb9/cp-reloader/0.log" Nov 29 09:40:43 crc kubenswrapper[4795]: I1129 09:40:43.856842 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tkql9_166b46fd-e087-476d-9491-d173847e5fb9/cp-metrics/0.log" Nov 29 09:40:44 crc kubenswrapper[4795]: I1129 09:40:44.022376 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tkql9_166b46fd-e087-476d-9491-d173847e5fb9/cp-frr-files/0.log" Nov 29 09:40:44 crc kubenswrapper[4795]: I1129 09:40:44.042823 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tkql9_166b46fd-e087-476d-9491-d173847e5fb9/cp-reloader/0.log" Nov 29 09:40:44 crc kubenswrapper[4795]: I1129 09:40:44.109692 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tkql9_166b46fd-e087-476d-9491-d173847e5fb9/cp-metrics/0.log" Nov 29 09:40:44 crc kubenswrapper[4795]: I1129 09:40:44.159885 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tkql9_166b46fd-e087-476d-9491-d173847e5fb9/controller/0.log" Nov 29 09:40:44 crc kubenswrapper[4795]: I1129 09:40:44.246838 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tkql9_166b46fd-e087-476d-9491-d173847e5fb9/frr-metrics/0.log" Nov 29 09:40:44 crc kubenswrapper[4795]: I1129 09:40:44.423111 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tkql9_166b46fd-e087-476d-9491-d173847e5fb9/kube-rbac-proxy/0.log" Nov 29 09:40:44 crc kubenswrapper[4795]: I1129 09:40:44.489355 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tkql9_166b46fd-e087-476d-9491-d173847e5fb9/kube-rbac-proxy-frr/0.log" Nov 29 09:40:44 crc kubenswrapper[4795]: I1129 09:40:44.503321 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-tkql9_166b46fd-e087-476d-9491-d173847e5fb9/reloader/0.log" Nov 29 09:40:44 crc kubenswrapper[4795]: I1129 09:40:44.721134 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-gbxmx_cc6bc09a-5187-429a-8f93-1f57bb5cd0d0/frr-k8s-webhook-server/0.log" Nov 29 09:40:45 crc kubenswrapper[4795]: I1129 09:40:45.033401 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-85bfb995d5-2snm7_00a1cd25-da5a-4ff5-b0d0-632a4ccdc0a3/webhook-server/0.log" Nov 29 09:40:45 crc kubenswrapper[4795]: I1129 09:40:45.040842 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-77967fb544-pfl5d_88516493-98c7-4365-9293-73456d8d0913/manager/0.log" Nov 29 09:40:45 crc kubenswrapper[4795]: I1129 09:40:45.788363 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-9ckmx_c433634f-86e7-44a7-9dfa-e0d09a1f5747/kube-rbac-proxy/0.log" Nov 29 09:40:46 crc kubenswrapper[4795]: I1129 09:40:46.449099 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-9ckmx_c433634f-86e7-44a7-9dfa-e0d09a1f5747/speaker/0.log" Nov 29 09:40:46 crc kubenswrapper[4795]: I1129 09:40:46.520199 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tkql9_166b46fd-e087-476d-9491-d173847e5fb9/frr/0.log" Nov 29 09:41:00 crc kubenswrapper[4795]: I1129 09:41:00.863943 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw_bf4954ba-ab7e-4c71-af52-f1cf638d0a0b/util/0.log" Nov 29 09:41:01 crc kubenswrapper[4795]: I1129 09:41:01.045621 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw_bf4954ba-ab7e-4c71-af52-f1cf638d0a0b/util/0.log" 
Nov 29 09:41:01 crc kubenswrapper[4795]: I1129 09:41:01.088565 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw_bf4954ba-ab7e-4c71-af52-f1cf638d0a0b/pull/0.log" Nov 29 09:41:01 crc kubenswrapper[4795]: I1129 09:41:01.089303 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw_bf4954ba-ab7e-4c71-af52-f1cf638d0a0b/pull/0.log" Nov 29 09:41:01 crc kubenswrapper[4795]: I1129 09:41:01.267163 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw_bf4954ba-ab7e-4c71-af52-f1cf638d0a0b/extract/0.log" Nov 29 09:41:01 crc kubenswrapper[4795]: I1129 09:41:01.308274 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw_bf4954ba-ab7e-4c71-af52-f1cf638d0a0b/util/0.log" Nov 29 09:41:01 crc kubenswrapper[4795]: I1129 09:41:01.329397 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8w8dhw_bf4954ba-ab7e-4c71-af52-f1cf638d0a0b/pull/0.log" Nov 29 09:41:01 crc kubenswrapper[4795]: I1129 09:41:01.481572 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88_0830d1da-b370-4d0d-8418-7dca445d0ca5/util/0.log" Nov 29 09:41:01 crc kubenswrapper[4795]: I1129 09:41:01.684173 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88_0830d1da-b370-4d0d-8418-7dca445d0ca5/pull/0.log" Nov 29 09:41:01 crc kubenswrapper[4795]: I1129 09:41:01.718188 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88_0830d1da-b370-4d0d-8418-7dca445d0ca5/pull/0.log" Nov 29 09:41:01 crc kubenswrapper[4795]: I1129 09:41:01.746524 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88_0830d1da-b370-4d0d-8418-7dca445d0ca5/util/0.log" Nov 29 09:41:01 crc kubenswrapper[4795]: I1129 09:41:01.928818 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88_0830d1da-b370-4d0d-8418-7dca445d0ca5/pull/0.log" Nov 29 09:41:01 crc kubenswrapper[4795]: I1129 09:41:01.963753 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88_0830d1da-b370-4d0d-8418-7dca445d0ca5/util/0.log" Nov 29 09:41:01 crc kubenswrapper[4795]: I1129 09:41:01.979175 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fwpl88_0830d1da-b370-4d0d-8418-7dca445d0ca5/extract/0.log" Nov 29 09:41:02 crc kubenswrapper[4795]: I1129 09:41:02.154903 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr_45700729-39c7-4828-81fd-3763836d1dbe/util/0.log" Nov 29 09:41:02 crc kubenswrapper[4795]: I1129 09:41:02.314712 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr_45700729-39c7-4828-81fd-3763836d1dbe/pull/0.log" Nov 29 09:41:02 crc kubenswrapper[4795]: I1129 09:41:02.319744 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr_45700729-39c7-4828-81fd-3763836d1dbe/util/0.log" Nov 29 
09:41:02 crc kubenswrapper[4795]: I1129 09:41:02.355060 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr_45700729-39c7-4828-81fd-3763836d1dbe/pull/0.log" Nov 29 09:41:02 crc kubenswrapper[4795]: I1129 09:41:02.742326 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr_45700729-39c7-4828-81fd-3763836d1dbe/pull/0.log" Nov 29 09:41:02 crc kubenswrapper[4795]: I1129 09:41:02.889200 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr_45700729-39c7-4828-81fd-3763836d1dbe/extract/0.log" Nov 29 09:41:02 crc kubenswrapper[4795]: I1129 09:41:02.961117 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921076vbr_45700729-39c7-4828-81fd-3763836d1dbe/util/0.log" Nov 29 09:41:03 crc kubenswrapper[4795]: I1129 09:41:03.075038 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r_45ac49e4-eef2-4cef-bac3-06fb40427b2b/util/0.log" Nov 29 09:41:03 crc kubenswrapper[4795]: I1129 09:41:03.216660 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r_45ac49e4-eef2-4cef-bac3-06fb40427b2b/util/0.log" Nov 29 09:41:03 crc kubenswrapper[4795]: I1129 09:41:03.247717 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r_45ac49e4-eef2-4cef-bac3-06fb40427b2b/pull/0.log" Nov 29 09:41:03 crc kubenswrapper[4795]: I1129 09:41:03.304365 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r_45ac49e4-eef2-4cef-bac3-06fb40427b2b/pull/0.log" Nov 29 09:41:03 crc kubenswrapper[4795]: I1129 09:41:03.509055 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r_45ac49e4-eef2-4cef-bac3-06fb40427b2b/extract/0.log" Nov 29 09:41:03 crc kubenswrapper[4795]: I1129 09:41:03.537727 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r_45ac49e4-eef2-4cef-bac3-06fb40427b2b/util/0.log" Nov 29 09:41:03 crc kubenswrapper[4795]: I1129 09:41:03.543271 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f5mq9r_45ac49e4-eef2-4cef-bac3-06fb40427b2b/pull/0.log" Nov 29 09:41:03 crc kubenswrapper[4795]: I1129 09:41:03.714706 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892_1be51a9d-eb05-4b33-85aa-2134496eb1b6/util/0.log" Nov 29 09:41:03 crc kubenswrapper[4795]: I1129 09:41:03.883391 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892_1be51a9d-eb05-4b33-85aa-2134496eb1b6/util/0.log" Nov 29 09:41:03 crc kubenswrapper[4795]: I1129 09:41:03.957164 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892_1be51a9d-eb05-4b33-85aa-2134496eb1b6/pull/0.log" Nov 29 09:41:03 crc kubenswrapper[4795]: I1129 09:41:03.959025 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892_1be51a9d-eb05-4b33-85aa-2134496eb1b6/pull/0.log" Nov 29 
09:41:04 crc kubenswrapper[4795]: I1129 09:41:04.158042 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892_1be51a9d-eb05-4b33-85aa-2134496eb1b6/pull/0.log" Nov 29 09:41:04 crc kubenswrapper[4795]: I1129 09:41:04.158907 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892_1be51a9d-eb05-4b33-85aa-2134496eb1b6/extract/0.log" Nov 29 09:41:04 crc kubenswrapper[4795]: I1129 09:41:04.206856 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hm892_1be51a9d-eb05-4b33-85aa-2134496eb1b6/util/0.log" Nov 29 09:41:04 crc kubenswrapper[4795]: I1129 09:41:04.398188 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8vhrd_edee21c8-0c84-4772-826f-0fa6de3076ba/extract-utilities/0.log" Nov 29 09:41:04 crc kubenswrapper[4795]: I1129 09:41:04.603166 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8vhrd_edee21c8-0c84-4772-826f-0fa6de3076ba/extract-content/0.log" Nov 29 09:41:04 crc kubenswrapper[4795]: I1129 09:41:04.605266 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8vhrd_edee21c8-0c84-4772-826f-0fa6de3076ba/extract-content/0.log" Nov 29 09:41:04 crc kubenswrapper[4795]: I1129 09:41:04.614239 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8vhrd_edee21c8-0c84-4772-826f-0fa6de3076ba/extract-utilities/0.log" Nov 29 09:41:04 crc kubenswrapper[4795]: I1129 09:41:04.811121 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8vhrd_edee21c8-0c84-4772-826f-0fa6de3076ba/extract-content/0.log" Nov 29 09:41:04 crc 
kubenswrapper[4795]: I1129 09:41:04.812312 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8vhrd_edee21c8-0c84-4772-826f-0fa6de3076ba/extract-utilities/0.log" Nov 29 09:41:04 crc kubenswrapper[4795]: I1129 09:41:04.864661 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8xl7p_fda09a90-a63a-44bd-a55a-5cc411392f7d/extract-utilities/0.log" Nov 29 09:41:05 crc kubenswrapper[4795]: I1129 09:41:05.092056 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8xl7p_fda09a90-a63a-44bd-a55a-5cc411392f7d/extract-utilities/0.log" Nov 29 09:41:05 crc kubenswrapper[4795]: I1129 09:41:05.137978 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8xl7p_fda09a90-a63a-44bd-a55a-5cc411392f7d/extract-content/0.log" Nov 29 09:41:05 crc kubenswrapper[4795]: I1129 09:41:05.212164 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8xl7p_fda09a90-a63a-44bd-a55a-5cc411392f7d/extract-content/0.log" Nov 29 09:41:05 crc kubenswrapper[4795]: I1129 09:41:05.385016 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8xl7p_fda09a90-a63a-44bd-a55a-5cc411392f7d/extract-utilities/0.log" Nov 29 09:41:05 crc kubenswrapper[4795]: I1129 09:41:05.393405 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8xl7p_fda09a90-a63a-44bd-a55a-5cc411392f7d/extract-content/0.log" Nov 29 09:41:05 crc kubenswrapper[4795]: I1129 09:41:05.630613 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-nx2gj_d49cd6f6-0b90-4c8f-9e8f-30a52c232522/marketplace-operator/0.log" Nov 29 09:41:05 crc kubenswrapper[4795]: I1129 09:41:05.721787 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-kvpxx_18e2d8ec-4739-4097-8772-98689f8d8626/extract-utilities/0.log" Nov 29 09:41:05 crc kubenswrapper[4795]: I1129 09:41:05.951200 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kvpxx_18e2d8ec-4739-4097-8772-98689f8d8626/extract-utilities/0.log" Nov 29 09:41:05 crc kubenswrapper[4795]: I1129 09:41:05.975077 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kvpxx_18e2d8ec-4739-4097-8772-98689f8d8626/extract-content/0.log" Nov 29 09:41:06 crc kubenswrapper[4795]: I1129 09:41:06.029901 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kvpxx_18e2d8ec-4739-4097-8772-98689f8d8626/extract-content/0.log" Nov 29 09:41:06 crc kubenswrapper[4795]: I1129 09:41:06.050617 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8xl7p_fda09a90-a63a-44bd-a55a-5cc411392f7d/registry-server/0.log" Nov 29 09:41:06 crc kubenswrapper[4795]: I1129 09:41:06.152231 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8vhrd_edee21c8-0c84-4772-826f-0fa6de3076ba/registry-server/0.log" Nov 29 09:41:06 crc kubenswrapper[4795]: I1129 09:41:06.227606 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kvpxx_18e2d8ec-4739-4097-8772-98689f8d8626/extract-content/0.log" Nov 29 09:41:06 crc kubenswrapper[4795]: I1129 09:41:06.285081 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kvpxx_18e2d8ec-4739-4097-8772-98689f8d8626/extract-utilities/0.log" Nov 29 09:41:06 crc kubenswrapper[4795]: I1129 09:41:06.413483 4795 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-tr957_8efac1d2-dfa3-48d7-9928-823442690b91/extract-utilities/0.log" Nov 29 09:41:06 crc kubenswrapper[4795]: I1129 09:41:06.508791 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kvpxx_18e2d8ec-4739-4097-8772-98689f8d8626/registry-server/0.log" Nov 29 09:41:06 crc kubenswrapper[4795]: I1129 09:41:06.771797 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tr957_8efac1d2-dfa3-48d7-9928-823442690b91/extract-content/0.log" Nov 29 09:41:06 crc kubenswrapper[4795]: I1129 09:41:06.835180 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tr957_8efac1d2-dfa3-48d7-9928-823442690b91/extract-utilities/0.log" Nov 29 09:41:06 crc kubenswrapper[4795]: I1129 09:41:06.903747 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tr957_8efac1d2-dfa3-48d7-9928-823442690b91/extract-content/0.log" Nov 29 09:41:07 crc kubenswrapper[4795]: I1129 09:41:07.019742 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tr957_8efac1d2-dfa3-48d7-9928-823442690b91/extract-content/0.log" Nov 29 09:41:07 crc kubenswrapper[4795]: I1129 09:41:07.049600 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tr957_8efac1d2-dfa3-48d7-9928-823442690b91/extract-utilities/0.log" Nov 29 09:41:08 crc kubenswrapper[4795]: I1129 09:41:08.091649 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tr957_8efac1d2-dfa3-48d7-9928-823442690b91/registry-server/0.log" Nov 29 09:41:11 crc kubenswrapper[4795]: I1129 09:41:11.941352 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 09:41:11 crc kubenswrapper[4795]: I1129 09:41:11.941904 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 09:41:11 crc kubenswrapper[4795]: I1129 09:41:11.941949 4795 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" Nov 29 09:41:11 crc kubenswrapper[4795]: I1129 09:41:11.942886 4795 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"216a6fd5200a9fb496152b641226ae7296ea85812450cb9b81cd8b2ac3b4a1d5"} pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 09:41:11 crc kubenswrapper[4795]: I1129 09:41:11.942946 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" containerID="cri-o://216a6fd5200a9fb496152b641226ae7296ea85812450cb9b81cd8b2ac3b4a1d5" gracePeriod=600 Nov 29 09:41:12 crc kubenswrapper[4795]: I1129 09:41:12.611165 4795 generic.go:334] "Generic (PLEG): container finished" podID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerID="216a6fd5200a9fb496152b641226ae7296ea85812450cb9b81cd8b2ac3b4a1d5" exitCode=0 Nov 29 09:41:12 crc kubenswrapper[4795]: I1129 09:41:12.611207 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" 
event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerDied","Data":"216a6fd5200a9fb496152b641226ae7296ea85812450cb9b81cd8b2ac3b4a1d5"} Nov 29 09:41:12 crc kubenswrapper[4795]: I1129 09:41:12.611470 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerStarted","Data":"71c147b2a6e1831846318f34e8b60797d25055c13936239694977cf2a1c9bc31"} Nov 29 09:41:12 crc kubenswrapper[4795]: I1129 09:41:12.611496 4795 scope.go:117] "RemoveContainer" containerID="a0961216ed7f33ef63b3697986049c3d884283e351de87a69a04219f3b1944a4" Nov 29 09:41:20 crc kubenswrapper[4795]: I1129 09:41:20.855510 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-xmczn_69e46873-1e0c-4187-810e-584aa956ba47/prometheus-operator/0.log" Nov 29 09:41:21 crc kubenswrapper[4795]: I1129 09:41:21.052324 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-8fdc8d5d-9scj8_63008df8-40b1-4ab0-966e-d88d426e3b1b/prometheus-operator-admission-webhook/0.log" Nov 29 09:41:21 crc kubenswrapper[4795]: I1129 09:41:21.090904 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-8fdc8d5d-r6sxj_5f84b151-fbdd-40bc-9457-ec560370a162/prometheus-operator-admission-webhook/0.log" Nov 29 09:41:21 crc kubenswrapper[4795]: I1129 09:41:21.261306 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-7d5fb4cbfb-q86dd_5194485f-a306-493d-a1a3-f33030371413/observability-ui-dashboards/0.log" Nov 29 09:41:21 crc kubenswrapper[4795]: I1129 09:41:21.275922 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-7h8sj_52ff2b6a-cefb-4a70-ac45-9d0c5b9d315d/operator/0.log" Nov 29 
09:41:21 crc kubenswrapper[4795]: I1129 09:41:21.467507 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-rb87j_7cca6c3b-ab29-47c1-94fe-bb8d2ac90062/perses-operator/0.log" Nov 29 09:41:36 crc kubenswrapper[4795]: I1129 09:41:36.921854 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-585cfc87fc-tt7jf_48d9911d-3ed8-4474-9537-cbfcb462dd44/kube-rbac-proxy/0.log" Nov 29 09:41:36 crc kubenswrapper[4795]: I1129 09:41:36.954416 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-585cfc87fc-tt7jf_48d9911d-3ed8-4474-9537-cbfcb462dd44/manager/0.log" Nov 29 09:43:25 crc kubenswrapper[4795]: I1129 09:43:25.334369 4795 generic.go:334] "Generic (PLEG): container finished" podID="043b9785-4403-44ff-b8d1-e2d279b1ccdb" containerID="8b678a3d9a1a31588ab9270df0ab923693284d184c39e9755808d9726a725009" exitCode=0 Nov 29 09:43:25 crc kubenswrapper[4795]: I1129 09:43:25.334451 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cpgxf/must-gather-qkqh8" event={"ID":"043b9785-4403-44ff-b8d1-e2d279b1ccdb","Type":"ContainerDied","Data":"8b678a3d9a1a31588ab9270df0ab923693284d184c39e9755808d9726a725009"} Nov 29 09:43:25 crc kubenswrapper[4795]: I1129 09:43:25.336280 4795 scope.go:117] "RemoveContainer" containerID="8b678a3d9a1a31588ab9270df0ab923693284d184c39e9755808d9726a725009" Nov 29 09:43:26 crc kubenswrapper[4795]: I1129 09:43:26.379079 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cpgxf_must-gather-qkqh8_043b9785-4403-44ff-b8d1-e2d279b1ccdb/gather/0.log" Nov 29 09:43:32 crc kubenswrapper[4795]: E1129 09:43:32.329402 4795 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.107:47406->38.102.83.107:42443: write tcp 38.102.83.107:47406->38.102.83.107:42443: write: broken pipe Nov 29 
09:43:36 crc kubenswrapper[4795]: I1129 09:43:36.366838 4795 scope.go:117] "RemoveContainer" containerID="59d6e219b8000eb0dff07fab472dbca3f018d457ac1d7e054ead4e879ff86496" Nov 29 09:43:37 crc kubenswrapper[4795]: I1129 09:43:37.798244 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cpgxf/must-gather-qkqh8"] Nov 29 09:43:37 crc kubenswrapper[4795]: I1129 09:43:37.798950 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-cpgxf/must-gather-qkqh8" podUID="043b9785-4403-44ff-b8d1-e2d279b1ccdb" containerName="copy" containerID="cri-o://3384d58cb121046efec389f34a14dedbe2291f28bd62d64eb9c03b8791bb8e99" gracePeriod=2 Nov 29 09:43:37 crc kubenswrapper[4795]: I1129 09:43:37.817819 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cpgxf/must-gather-qkqh8"] Nov 29 09:43:38 crc kubenswrapper[4795]: I1129 09:43:38.288320 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cpgxf_must-gather-qkqh8_043b9785-4403-44ff-b8d1-e2d279b1ccdb/copy/0.log" Nov 29 09:43:38 crc kubenswrapper[4795]: I1129 09:43:38.289202 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cpgxf/must-gather-qkqh8" Nov 29 09:43:38 crc kubenswrapper[4795]: I1129 09:43:38.375171 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/043b9785-4403-44ff-b8d1-e2d279b1ccdb-must-gather-output\") pod \"043b9785-4403-44ff-b8d1-e2d279b1ccdb\" (UID: \"043b9785-4403-44ff-b8d1-e2d279b1ccdb\") " Nov 29 09:43:38 crc kubenswrapper[4795]: I1129 09:43:38.375254 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb2d6\" (UniqueName: \"kubernetes.io/projected/043b9785-4403-44ff-b8d1-e2d279b1ccdb-kube-api-access-vb2d6\") pod \"043b9785-4403-44ff-b8d1-e2d279b1ccdb\" (UID: \"043b9785-4403-44ff-b8d1-e2d279b1ccdb\") " Nov 29 09:43:38 crc kubenswrapper[4795]: I1129 09:43:38.382684 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/043b9785-4403-44ff-b8d1-e2d279b1ccdb-kube-api-access-vb2d6" (OuterVolumeSpecName: "kube-api-access-vb2d6") pod "043b9785-4403-44ff-b8d1-e2d279b1ccdb" (UID: "043b9785-4403-44ff-b8d1-e2d279b1ccdb"). InnerVolumeSpecName "kube-api-access-vb2d6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 09:43:38 crc kubenswrapper[4795]: I1129 09:43:38.481272 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vb2d6\" (UniqueName: \"kubernetes.io/projected/043b9785-4403-44ff-b8d1-e2d279b1ccdb-kube-api-access-vb2d6\") on node \"crc\" DevicePath \"\"" Nov 29 09:43:38 crc kubenswrapper[4795]: I1129 09:43:38.491157 4795 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cpgxf_must-gather-qkqh8_043b9785-4403-44ff-b8d1-e2d279b1ccdb/copy/0.log" Nov 29 09:43:38 crc kubenswrapper[4795]: I1129 09:43:38.491674 4795 generic.go:334] "Generic (PLEG): container finished" podID="043b9785-4403-44ff-b8d1-e2d279b1ccdb" containerID="3384d58cb121046efec389f34a14dedbe2291f28bd62d64eb9c03b8791bb8e99" exitCode=143 Nov 29 09:43:38 crc kubenswrapper[4795]: I1129 09:43:38.491738 4795 scope.go:117] "RemoveContainer" containerID="3384d58cb121046efec389f34a14dedbe2291f28bd62d64eb9c03b8791bb8e99" Nov 29 09:43:38 crc kubenswrapper[4795]: I1129 09:43:38.491877 4795 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cpgxf/must-gather-qkqh8" Nov 29 09:43:38 crc kubenswrapper[4795]: I1129 09:43:38.514784 4795 scope.go:117] "RemoveContainer" containerID="8b678a3d9a1a31588ab9270df0ab923693284d184c39e9755808d9726a725009" Nov 29 09:43:38 crc kubenswrapper[4795]: I1129 09:43:38.563688 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/043b9785-4403-44ff-b8d1-e2d279b1ccdb-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "043b9785-4403-44ff-b8d1-e2d279b1ccdb" (UID: "043b9785-4403-44ff-b8d1-e2d279b1ccdb"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 09:43:38 crc kubenswrapper[4795]: I1129 09:43:38.564759 4795 scope.go:117] "RemoveContainer" containerID="3384d58cb121046efec389f34a14dedbe2291f28bd62d64eb9c03b8791bb8e99" Nov 29 09:43:38 crc kubenswrapper[4795]: E1129 09:43:38.565206 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3384d58cb121046efec389f34a14dedbe2291f28bd62d64eb9c03b8791bb8e99\": container with ID starting with 3384d58cb121046efec389f34a14dedbe2291f28bd62d64eb9c03b8791bb8e99 not found: ID does not exist" containerID="3384d58cb121046efec389f34a14dedbe2291f28bd62d64eb9c03b8791bb8e99" Nov 29 09:43:38 crc kubenswrapper[4795]: I1129 09:43:38.565234 4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3384d58cb121046efec389f34a14dedbe2291f28bd62d64eb9c03b8791bb8e99"} err="failed to get container status \"3384d58cb121046efec389f34a14dedbe2291f28bd62d64eb9c03b8791bb8e99\": rpc error: code = NotFound desc = could not find container \"3384d58cb121046efec389f34a14dedbe2291f28bd62d64eb9c03b8791bb8e99\": container with ID starting with 3384d58cb121046efec389f34a14dedbe2291f28bd62d64eb9c03b8791bb8e99 not found: ID does not exist" Nov 29 09:43:38 crc kubenswrapper[4795]: I1129 09:43:38.565254 4795 scope.go:117] "RemoveContainer" containerID="8b678a3d9a1a31588ab9270df0ab923693284d184c39e9755808d9726a725009" Nov 29 09:43:38 crc kubenswrapper[4795]: E1129 09:43:38.565484 4795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b678a3d9a1a31588ab9270df0ab923693284d184c39e9755808d9726a725009\": container with ID starting with 8b678a3d9a1a31588ab9270df0ab923693284d184c39e9755808d9726a725009 not found: ID does not exist" containerID="8b678a3d9a1a31588ab9270df0ab923693284d184c39e9755808d9726a725009" Nov 29 09:43:38 crc kubenswrapper[4795]: I1129 09:43:38.565503 
4795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b678a3d9a1a31588ab9270df0ab923693284d184c39e9755808d9726a725009"} err="failed to get container status \"8b678a3d9a1a31588ab9270df0ab923693284d184c39e9755808d9726a725009\": rpc error: code = NotFound desc = could not find container \"8b678a3d9a1a31588ab9270df0ab923693284d184c39e9755808d9726a725009\": container with ID starting with 8b678a3d9a1a31588ab9270df0ab923693284d184c39e9755808d9726a725009 not found: ID does not exist" Nov 29 09:43:38 crc kubenswrapper[4795]: I1129 09:43:38.583725 4795 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/043b9785-4403-44ff-b8d1-e2d279b1ccdb-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 29 09:43:40 crc kubenswrapper[4795]: I1129 09:43:40.295203 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="043b9785-4403-44ff-b8d1-e2d279b1ccdb" path="/var/lib/kubelet/pods/043b9785-4403-44ff-b8d1-e2d279b1ccdb/volumes" Nov 29 09:43:41 crc kubenswrapper[4795]: I1129 09:43:41.940889 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 09:43:41 crc kubenswrapper[4795]: I1129 09:43:41.940959 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 09:44:11 crc kubenswrapper[4795]: I1129 09:44:11.941202 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 09:44:11 crc kubenswrapper[4795]: I1129 09:44:11.941933 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 09:44:36 crc kubenswrapper[4795]: I1129 09:44:36.440414 4795 scope.go:117] "RemoveContainer" containerID="aa64b7eeb4cfd3796dba387209ec8e80d5e5b9d929812371c652bdab794ea5eb" Nov 29 09:44:41 crc kubenswrapper[4795]: I1129 09:44:41.941374 4795 patch_prober.go:28] interesting pod/machine-config-daemon-bkmq6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 09:44:41 crc kubenswrapper[4795]: I1129 09:44:41.942065 4795 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 09:44:41 crc kubenswrapper[4795]: I1129 09:44:41.942135 4795 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" Nov 29 09:44:41 crc kubenswrapper[4795]: I1129 09:44:41.943448 4795 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"71c147b2a6e1831846318f34e8b60797d25055c13936239694977cf2a1c9bc31"} 
pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 09:44:41 crc kubenswrapper[4795]: I1129 09:44:41.943659 4795 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerName="machine-config-daemon" containerID="cri-o://71c147b2a6e1831846318f34e8b60797d25055c13936239694977cf2a1c9bc31" gracePeriod=600 Nov 29 09:44:42 crc kubenswrapper[4795]: E1129 09:44:42.074557 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:44:42 crc kubenswrapper[4795]: I1129 09:44:42.298387 4795 generic.go:334] "Generic (PLEG): container finished" podID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" containerID="71c147b2a6e1831846318f34e8b60797d25055c13936239694977cf2a1c9bc31" exitCode=0 Nov 29 09:44:42 crc kubenswrapper[4795]: I1129 09:44:42.298730 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" event={"ID":"1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1","Type":"ContainerDied","Data":"71c147b2a6e1831846318f34e8b60797d25055c13936239694977cf2a1c9bc31"} Nov 29 09:44:42 crc kubenswrapper[4795]: I1129 09:44:42.298764 4795 scope.go:117] "RemoveContainer" containerID="216a6fd5200a9fb496152b641226ae7296ea85812450cb9b81cd8b2ac3b4a1d5" Nov 29 09:44:42 crc kubenswrapper[4795]: I1129 09:44:42.299983 4795 scope.go:117] "RemoveContainer" containerID="71c147b2a6e1831846318f34e8b60797d25055c13936239694977cf2a1c9bc31" Nov 
29 09:44:42 crc kubenswrapper[4795]: E1129 09:44:42.300525 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:44:53 crc kubenswrapper[4795]: I1129 09:44:53.275705 4795 scope.go:117] "RemoveContainer" containerID="71c147b2a6e1831846318f34e8b60797d25055c13936239694977cf2a1c9bc31" Nov 29 09:44:53 crc kubenswrapper[4795]: E1129 09:44:53.276832 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:45:00 crc kubenswrapper[4795]: I1129 09:45:00.176530 4795 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406825-xqw7z"] Nov 29 09:45:00 crc kubenswrapper[4795]: E1129 09:45:00.177764 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e6e6d17-42c8-46ac-84b3-1554f65c0482" containerName="extract-content" Nov 29 09:45:00 crc kubenswrapper[4795]: I1129 09:45:00.177783 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e6e6d17-42c8-46ac-84b3-1554f65c0482" containerName="extract-content" Nov 29 09:45:00 crc kubenswrapper[4795]: E1129 09:45:00.177802 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="043b9785-4403-44ff-b8d1-e2d279b1ccdb" containerName="gather" Nov 29 09:45:00 crc kubenswrapper[4795]: I1129 
09:45:00.177851 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="043b9785-4403-44ff-b8d1-e2d279b1ccdb" containerName="gather" Nov 29 09:45:00 crc kubenswrapper[4795]: E1129 09:45:00.177874 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e6e6d17-42c8-46ac-84b3-1554f65c0482" containerName="registry-server" Nov 29 09:45:00 crc kubenswrapper[4795]: I1129 09:45:00.177882 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e6e6d17-42c8-46ac-84b3-1554f65c0482" containerName="registry-server" Nov 29 09:45:00 crc kubenswrapper[4795]: E1129 09:45:00.177914 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e6e6d17-42c8-46ac-84b3-1554f65c0482" containerName="extract-utilities" Nov 29 09:45:00 crc kubenswrapper[4795]: I1129 09:45:00.177923 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e6e6d17-42c8-46ac-84b3-1554f65c0482" containerName="extract-utilities" Nov 29 09:45:00 crc kubenswrapper[4795]: E1129 09:45:00.177952 4795 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="043b9785-4403-44ff-b8d1-e2d279b1ccdb" containerName="copy" Nov 29 09:45:00 crc kubenswrapper[4795]: I1129 09:45:00.177971 4795 state_mem.go:107] "Deleted CPUSet assignment" podUID="043b9785-4403-44ff-b8d1-e2d279b1ccdb" containerName="copy" Nov 29 09:45:00 crc kubenswrapper[4795]: I1129 09:45:00.225076 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="043b9785-4403-44ff-b8d1-e2d279b1ccdb" containerName="copy" Nov 29 09:45:00 crc kubenswrapper[4795]: I1129 09:45:00.225202 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e6e6d17-42c8-46ac-84b3-1554f65c0482" containerName="registry-server" Nov 29 09:45:00 crc kubenswrapper[4795]: I1129 09:45:00.225240 4795 memory_manager.go:354] "RemoveStaleState removing state" podUID="043b9785-4403-44ff-b8d1-e2d279b1ccdb" containerName="gather" Nov 29 09:45:00 crc kubenswrapper[4795]: I1129 09:45:00.227292 4795 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406825-xqw7z" Nov 29 09:45:00 crc kubenswrapper[4795]: I1129 09:45:00.230911 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406825-xqw7z"] Nov 29 09:45:00 crc kubenswrapper[4795]: I1129 09:45:00.231820 4795 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 29 09:45:00 crc kubenswrapper[4795]: I1129 09:45:00.231907 4795 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 29 09:45:00 crc kubenswrapper[4795]: I1129 09:45:00.310222 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7-config-volume\") pod \"collect-profiles-29406825-xqw7z\" (UID: \"a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406825-xqw7z" Nov 29 09:45:00 crc kubenswrapper[4795]: I1129 09:45:00.310814 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn55n\" (UniqueName: \"kubernetes.io/projected/a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7-kube-api-access-zn55n\") pod \"collect-profiles-29406825-xqw7z\" (UID: \"a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406825-xqw7z" Nov 29 09:45:00 crc kubenswrapper[4795]: I1129 09:45:00.310995 4795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7-secret-volume\") pod \"collect-profiles-29406825-xqw7z\" (UID: \"a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29406825-xqw7z" Nov 29 09:45:00 crc kubenswrapper[4795]: I1129 09:45:00.412234 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7-secret-volume\") pod \"collect-profiles-29406825-xqw7z\" (UID: \"a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406825-xqw7z" Nov 29 09:45:00 crc kubenswrapper[4795]: I1129 09:45:00.412757 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7-config-volume\") pod \"collect-profiles-29406825-xqw7z\" (UID: \"a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406825-xqw7z" Nov 29 09:45:00 crc kubenswrapper[4795]: I1129 09:45:00.412884 4795 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn55n\" (UniqueName: \"kubernetes.io/projected/a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7-kube-api-access-zn55n\") pod \"collect-profiles-29406825-xqw7z\" (UID: \"a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406825-xqw7z" Nov 29 09:45:00 crc kubenswrapper[4795]: I1129 09:45:00.413907 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7-config-volume\") pod \"collect-profiles-29406825-xqw7z\" (UID: \"a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406825-xqw7z" Nov 29 09:45:00 crc kubenswrapper[4795]: I1129 09:45:00.426640 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7-secret-volume\") pod \"collect-profiles-29406825-xqw7z\" (UID: \"a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406825-xqw7z" Nov 29 09:45:00 crc kubenswrapper[4795]: I1129 09:45:00.428282 4795 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn55n\" (UniqueName: \"kubernetes.io/projected/a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7-kube-api-access-zn55n\") pod \"collect-profiles-29406825-xqw7z\" (UID: \"a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406825-xqw7z" Nov 29 09:45:00 crc kubenswrapper[4795]: I1129 09:45:00.553150 4795 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406825-xqw7z" Nov 29 09:45:01 crc kubenswrapper[4795]: I1129 09:45:01.047410 4795 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406825-xqw7z"] Nov 29 09:45:01 crc kubenswrapper[4795]: I1129 09:45:01.555877 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406825-xqw7z" event={"ID":"a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7","Type":"ContainerStarted","Data":"013f824b0ef040f1199b0797c60ada0680ec78c4d0c714c5a2509affa1e90d46"} Nov 29 09:45:01 crc kubenswrapper[4795]: I1129 09:45:01.556189 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406825-xqw7z" event={"ID":"a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7","Type":"ContainerStarted","Data":"326611be99ffadbc0e40a928ab7f247e66996f348512b0e33a48dd49ddb1c564"} Nov 29 09:45:01 crc kubenswrapper[4795]: I1129 09:45:01.588290 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29406825-xqw7z" 
podStartSLOduration=1.5882705160000001 podStartE2EDuration="1.588270516s" podCreationTimestamp="2025-11-29 09:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:45:01.575811194 +0000 UTC m=+7547.551387004" watchObservedRunningTime="2025-11-29 09:45:01.588270516 +0000 UTC m=+7547.563846306" Nov 29 09:45:02 crc kubenswrapper[4795]: I1129 09:45:02.570733 4795 generic.go:334] "Generic (PLEG): container finished" podID="a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7" containerID="013f824b0ef040f1199b0797c60ada0680ec78c4d0c714c5a2509affa1e90d46" exitCode=0 Nov 29 09:45:02 crc kubenswrapper[4795]: I1129 09:45:02.570833 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406825-xqw7z" event={"ID":"a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7","Type":"ContainerDied","Data":"013f824b0ef040f1199b0797c60ada0680ec78c4d0c714c5a2509affa1e90d46"} Nov 29 09:45:04 crc kubenswrapper[4795]: I1129 09:45:04.044946 4795 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406825-xqw7z" Nov 29 09:45:04 crc kubenswrapper[4795]: I1129 09:45:04.223727 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7-secret-volume\") pod \"a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7\" (UID: \"a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7\") " Nov 29 09:45:04 crc kubenswrapper[4795]: I1129 09:45:04.223841 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7-config-volume\") pod \"a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7\" (UID: \"a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7\") " Nov 29 09:45:04 crc kubenswrapper[4795]: I1129 09:45:04.224113 4795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zn55n\" (UniqueName: \"kubernetes.io/projected/a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7-kube-api-access-zn55n\") pod \"a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7\" (UID: \"a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7\") " Nov 29 09:45:04 crc kubenswrapper[4795]: I1129 09:45:04.225299 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7-config-volume" (OuterVolumeSpecName: "config-volume") pod "a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7" (UID: "a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 09:45:04 crc kubenswrapper[4795]: I1129 09:45:04.230893 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7" (UID: "a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 09:45:04 crc kubenswrapper[4795]: I1129 09:45:04.233804 4795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7-kube-api-access-zn55n" (OuterVolumeSpecName: "kube-api-access-zn55n") pod "a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7" (UID: "a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7"). InnerVolumeSpecName "kube-api-access-zn55n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 09:45:04 crc kubenswrapper[4795]: I1129 09:45:04.327820 4795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zn55n\" (UniqueName: \"kubernetes.io/projected/a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7-kube-api-access-zn55n\") on node \"crc\" DevicePath \"\"" Nov 29 09:45:04 crc kubenswrapper[4795]: I1129 09:45:04.327867 4795 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 29 09:45:04 crc kubenswrapper[4795]: I1129 09:45:04.327876 4795 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7-config-volume\") on node \"crc\" DevicePath \"\"" Nov 29 09:45:04 crc kubenswrapper[4795]: I1129 09:45:04.603743 4795 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406825-xqw7z" event={"ID":"a5ee2cfe-21ed-4ae3-92b4-1962e0c69ad7","Type":"ContainerDied","Data":"326611be99ffadbc0e40a928ab7f247e66996f348512b0e33a48dd49ddb1c564"} Nov 29 09:45:04 crc kubenswrapper[4795]: I1129 09:45:04.603789 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="326611be99ffadbc0e40a928ab7f247e66996f348512b0e33a48dd49ddb1c564" Nov 29 09:45:04 crc kubenswrapper[4795]: I1129 09:45:04.603801 4795 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406825-xqw7z" Nov 29 09:45:04 crc kubenswrapper[4795]: I1129 09:45:04.657811 4795 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406780-lzb48"] Nov 29 09:45:04 crc kubenswrapper[4795]: I1129 09:45:04.671569 4795 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406780-lzb48"] Nov 29 09:45:05 crc kubenswrapper[4795]: I1129 09:45:05.277689 4795 scope.go:117] "RemoveContainer" containerID="71c147b2a6e1831846318f34e8b60797d25055c13936239694977cf2a1c9bc31" Nov 29 09:45:05 crc kubenswrapper[4795]: E1129 09:45:05.278251 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:45:06 crc kubenswrapper[4795]: I1129 09:45:06.299039 4795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4375e703-065e-4402-a428-bac0bf6a3339" path="/var/lib/kubelet/pods/4375e703-065e-4402-a428-bac0bf6a3339/volumes" Nov 29 09:45:19 crc kubenswrapper[4795]: I1129 09:45:19.276517 4795 scope.go:117] "RemoveContainer" containerID="71c147b2a6e1831846318f34e8b60797d25055c13936239694977cf2a1c9bc31" Nov 29 09:45:19 crc kubenswrapper[4795]: E1129 09:45:19.277828 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:45:30 crc kubenswrapper[4795]: I1129 09:45:30.275750 4795 scope.go:117] "RemoveContainer" containerID="71c147b2a6e1831846318f34e8b60797d25055c13936239694977cf2a1c9bc31" Nov 29 09:45:30 crc kubenswrapper[4795]: E1129 09:45:30.276494 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:45:36 crc kubenswrapper[4795]: I1129 09:45:36.533949 4795 scope.go:117] "RemoveContainer" containerID="9104c0b16de50726be82830322594823409f70e2345913e658bad8fce96a99d3" Nov 29 09:45:41 crc kubenswrapper[4795]: I1129 09:45:41.276176 4795 scope.go:117] "RemoveContainer" containerID="71c147b2a6e1831846318f34e8b60797d25055c13936239694977cf2a1c9bc31" Nov 29 09:45:41 crc kubenswrapper[4795]: E1129 09:45:41.277037 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:45:52 crc kubenswrapper[4795]: I1129 09:45:52.275784 4795 scope.go:117] "RemoveContainer" containerID="71c147b2a6e1831846318f34e8b60797d25055c13936239694977cf2a1c9bc31" Nov 29 09:45:52 crc kubenswrapper[4795]: E1129 09:45:52.276484 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:46:03 crc kubenswrapper[4795]: I1129 09:46:03.276823 4795 scope.go:117] "RemoveContainer" containerID="71c147b2a6e1831846318f34e8b60797d25055c13936239694977cf2a1c9bc31" Nov 29 09:46:03 crc kubenswrapper[4795]: E1129 09:46:03.277749 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:46:15 crc kubenswrapper[4795]: I1129 09:46:15.277485 4795 scope.go:117] "RemoveContainer" containerID="71c147b2a6e1831846318f34e8b60797d25055c13936239694977cf2a1c9bc31" Nov 29 09:46:15 crc kubenswrapper[4795]: E1129 09:46:15.278316 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:46:27 crc kubenswrapper[4795]: I1129 09:46:27.276062 4795 scope.go:117] "RemoveContainer" containerID="71c147b2a6e1831846318f34e8b60797d25055c13936239694977cf2a1c9bc31" Nov 29 09:46:27 crc kubenswrapper[4795]: E1129 09:46:27.277798 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:46:40 crc kubenswrapper[4795]: I1129 09:46:40.276145 4795 scope.go:117] "RemoveContainer" containerID="71c147b2a6e1831846318f34e8b60797d25055c13936239694977cf2a1c9bc31" Nov 29 09:46:40 crc kubenswrapper[4795]: E1129 09:46:40.277031 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:46:54 crc kubenswrapper[4795]: I1129 09:46:54.287021 4795 scope.go:117] "RemoveContainer" containerID="71c147b2a6e1831846318f34e8b60797d25055c13936239694977cf2a1c9bc31" Nov 29 09:46:54 crc kubenswrapper[4795]: E1129 09:46:54.288039 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:47:05 crc kubenswrapper[4795]: I1129 09:47:05.275379 4795 scope.go:117] "RemoveContainer" containerID="71c147b2a6e1831846318f34e8b60797d25055c13936239694977cf2a1c9bc31" Nov 29 09:47:05 crc kubenswrapper[4795]: E1129 09:47:05.277470 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:47:18 crc kubenswrapper[4795]: I1129 09:47:18.277212 4795 scope.go:117] "RemoveContainer" containerID="71c147b2a6e1831846318f34e8b60797d25055c13936239694977cf2a1c9bc31" Nov 29 09:47:18 crc kubenswrapper[4795]: E1129 09:47:18.278954 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:47:33 crc kubenswrapper[4795]: I1129 09:47:33.277126 4795 scope.go:117] "RemoveContainer" containerID="71c147b2a6e1831846318f34e8b60797d25055c13936239694977cf2a1c9bc31" Nov 29 09:47:33 crc kubenswrapper[4795]: E1129 09:47:33.277952 4795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1" Nov 29 09:47:48 crc kubenswrapper[4795]: I1129 09:47:48.276644 4795 scope.go:117] "RemoveContainer" containerID="71c147b2a6e1831846318f34e8b60797d25055c13936239694977cf2a1c9bc31" Nov 29 09:47:48 crc kubenswrapper[4795]: E1129 09:47:48.277521 4795 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkmq6_openshift-machine-config-operator(1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkmq6" podUID="1cf68fd3-dd2b-4cf2-b052-7b5a0965e9f1"